modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
JCX-kcuf/Mistral-7B-v0.1-gpt-4-80k | JCX-kcuf | 2024-03-11T06:48:39Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T12:15:30Z | ---
license: apache-2.0
---
## Description
This model is fine-tuned on distillation data from GPT-4.
The base model is mistralai/Mistral-7B-v0.1.
## Usage
The model uses the same query format as Zephyr:
```
<|user|>
{query}</s>
<|assistant|>
```
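A minimal usage sketch (not part of the original card), assuming the standard `transformers` generation API:
```python
# Hedged sketch: apply the Zephyr-style template above and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JCX-kcuf/Mistral-7B-v0.1-gpt-4-80k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "<|user|>\nWhat is knowledge distillation?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```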
|
Eric111/CatunaLaserPi-DPO | Eric111 | 2024-03-10T16:11:12Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T15:59:22Z | ---
library_name: transformers
license: cc-by-nc-4.0
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
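In the meantime, a generic loading sketch (an assumption on my part, not author-documented usage) for this checkpoint:
```python
# Hedged sketch: standard causal-LM loading; the authors have not published
# an official usage pattern for Eric111/CatunaLaserPi-DPO.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Eric111/CatunaLaserPi-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```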
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JCX-kcuf/Llama-2-7b-hf-llama2-chat-80k | JCX-kcuf | 2024-03-11T15:42:52Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-10T16:33:53Z | ---
license: apache-2.0
---
## Description
This model is fine-tuned on distillation data from Llama-2-7b-chat-hf.
The base model is meta-llama/Llama-2-7b-hf.
## Usage
The model uses the same query format as Llama-2:
```
<s> [INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{query} [/INST]
```
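A small helper (my sketch, not from the card) that assembles this template:
```python
# Hedged sketch: build the Llama-2-style prompt shown above for this model.
DEFAULT_SYSTEM = (
    "You are a helpful, respectful and honest assistant. Always answer as "
    "helpfully as possible, while being safe."
)

def build_prompt(query: str, system_prompt: str = DEFAULT_SYSTEM) -> str:
    return f"<s> [INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{query} [/INST]"

print(build_prompt("What is knowledge distillation?"))
```
|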
tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF | tolgadev | 2024-03-11T19:52:06Z | 736 | 1 | transformers | [
"transformers",
"gguf",
"trendyol",
"llama-2",
"turkish",
"text-generation",
"tr",
"en",
"base_model:Trendyol/Trendyol-LLM-7b-chat-v1.0",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-03-11T14:21:33Z | ---
model_name: Trendyol-LLM-7b-chat-v1.0-gguf
model_creator: Trendyol
base_model: Trendyol/Trendyol-LLM-7b-chat-v1.0
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
model_type: llama
library_name: transformers
inference: false
tags:
- trendyol
- llama-2
- turkish
quantized_by: tolgadev
---
## Trendyol-LLM-7b-chat-v1.0-gguf models
----
## Description
This repo contains GGUF-format model files, in all common quantization types, for [Trendyol-LLM-7b-chat-v1.0](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0).
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0/resolve/main/trendyol-llm-mistral.jpg"
alt="drawing" width="400"/>
## Quantized LLM models and methods
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Trendyol-LLM-7b-chat-v1.0.Q2_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q2_K.gguf) | Q2_K | 2 | 2.59 GB| 4.88 GB | smallest, significant quality loss - not recommended for most purposes |
| [Trendyol-LLM-7b-chat-v1.0.Q3_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.01 GB| 5.56 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-v1.0.Q3_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.36 GB| 5.91 GB | very small, high quality loss |
| [Trendyol-LLM-7b-chat-v1.0.Q3_K_L.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.66 GB| 6.20 GB | small, substantial quality loss |
| [Trendyol-LLM-7b-chat-v1.0.Q4_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q4_0.gguf) | Q4_0 | 4 | 3.9 GB| 6.45 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Trendyol-LLM-7b-chat-v1.0.Q4_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.93 GB| 6.48 GB | small, greater quality loss |
| [Trendyol-LLM-7b-chat-v1.0.Q4_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.15 GB| 6.69 GB | medium, balanced quality - recommended |
| [Trendyol-LLM-7b-chat-v1.0.Q5_0.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q5_0.gguf) | Q5_0 | 5 | 4.73 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Trendyol-LLM-7b-chat-v1.0.Q5_K_S.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.75 GB| 7.27 GB | large, low quality loss - recommended |
| [Trendyol-LLM-7b-chat-v1.0.Q5_K_M.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.86 GB| 7.40 GB | large, very low quality loss - recommended |
| [Trendyol-LLM-7b-chat-v1.0.Q6_K.gguf](https://huggingface.co/tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF/blob/main/trendyol-llm-7b-chat-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.61 GB| 8.15 GB | very large, extremely low quality loss |
The names of the quantization methods follow the convention "q" + the number of bits + the variant used (detailed below). Here is a list of all the methods and their corresponding use cases, based on model cards made by [TheBloke](https://huggingface.co/TheBloke/):
* `q2_k`: Uses Q4_K for the attention.wv and feed_forward.w2 tensors, Q2_K for the other tensors.
* `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K
* `q3_k_s`: Uses Q3_K for all tensors
* `q4_0`: Original quant method, 4-bit.
* `q4_1`: Higher accuracy than q4_0 but not as high as q5_0; however, it has quicker inference than the q5 models.
* `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K
* `q4_k_s`: Uses Q4_K for all tensors
* `q5_0`: Higher accuracy, but higher resource usage and slower inference.
* `q5_1`: Even higher accuracy and resource usage, with slower inference.
* `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K
* `q5_k_s`: Uses Q5_K for all tensors
* `q6_k`: Uses Q8_K for all tensors
**TheBloke recommends using Q5_K_M** as it preserves most of the model's performance.
Alternatively, you can use Q4_K_M if you want to save some memory.
In general, K_M versions are better than K_S versions.
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from (a programmatic alternative is sketched after this list):
- LM Studio
- LoLLMS Web UI
- Faraday.dev
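As a sketch (not from the original card), a single quantized file can also be fetched and run programmatically; this assumes the `huggingface_hub` and `llama-cpp-python` packages:
```python
# Hedged sketch: download one GGUF file instead of cloning the whole repo,
# then load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="tolgadev/Trendyol-LLM-7b-chat-v1.0-GGUF",
    filename="trendyol-llm-7b-chat-v1.0.Q4_K_M.gguf",  # recommended balanced quant
)
llm = Llama(model_path=model_path, n_ctx=2048)
print(llm("[INST] Merhaba, nasılsın? [/INST]", max_tokens=128)["choices"][0]["text"])
```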
## Special thanks to [TheBloke on Huggingface](https://huggingface.co/TheBloke) and [Maxime Labonne on Github](https://github.com/mlabonne/llm-course)
-----
# **Trendyol LLM v1.0 - DPO**
Trendyol LLM v1.0 - DPO is a generative model based on the Mistral 7B model, further trained with DPO. This is the repository for the chat model.
## Model Details
**Model Developers** Trendyol
**Variations** [base](https://huggingface.co/Trendyol/Trendyol-LLM-7b-base-v1.0), [chat](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.0), and DPO variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Trendyol LLM is an auto-regressive language model (based on Mistral 7B) that uses an optimized transformer architecture. The Hugging Face TRL library was used for training. The DPO version was fine-tuned on an 11K set of (prompt, chosen, rejected) triplets using LoRA, with the following hyperparameters (a corresponding TRL/PEFT sketch follows this list):
- **lr**=5e-6
- **lora_rank**=64
- **lora_alpha**=128
- **lora_trainable**=q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj
- **lora_dropout**=0.05
- **bf16**=True
- **beta**=0.01
- **max_length**= 1024
- **max_prompt_length**= 512
- **lr_scheduler_type**= cosine
- **torch_dtype**= bfloat16
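The exact training script is not published; below is a minimal sketch of the setup these hyperparameters describe, assuming TRL's `DPOTrainer` and PEFT's `LoraConfig` (the starting checkpoint and dataset file are my placeholders):
```python
# Hedged sketch of the described DPO + LoRA setup; not the authors' script.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "Trendyol/Trendyol-LLM-7b-chat-v1.0"  # assumed starting checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder file: the 11K (prompt, chosen, rejected) set is not public.
train_dataset = load_dataset("json", data_files="dpo_triplets.json")["train"]

peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
)

args = TrainingArguments(
    output_dir="trendyol-llm-dpo",
    learning_rate=5e-6,
    bf16=True,
    lr_scheduler_type="cosine",
)

trainer = DPOTrainer(
    model,
    ref_model=None,          # with a PEFT config, TRL derives the reference model
    args=args,
    beta=0.01,
    max_length=1024,
    max_prompt_length=512,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```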
<img src="https://camo.githubusercontent.com/3e61ca080778f62988b459c7321726fa35bb3776ceb07ecaabf71ebca44f95a7/68747470733a2f2f68756767696e67666163652e636f2f64617461736574732f74726c2d696e7465726e616c2d74657374696e672f6578616d706c652d696d616765732f7265736f6c76652f6d61696e2f696d616765732f74726c5f62616e6e65725f6461726b2e706e67"
alt="drawing" width="600"/>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "Trendyol/Trendyol-LLM-7b-chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map='auto',
load_in_8bit=True)
sampling_params = dict(do_sample=True, temperature=0.3, top_k=50, top_p=0.9)
pipe = pipeline("text-generation",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
return_full_text=True,
repetition_penalty=1.1
)
DEFAULT_SYSTEM_PROMPT = "Sen yardımcı bir asistansın ve sana verilen talimatlar doğrultusunda en iyi cevabı üretmeye çalışacaksın.\n"
TEMPLATE = (
"[INST] {system_prompt}\n\n"
"{instruction} [/INST]"
)
def generate_prompt(instruction, system_prompt=DEFAULT_SYSTEM_PROMPT):
return TEMPLATE.format_map({'instruction': instruction,'system_prompt': system_prompt})
def generate_output(user_query, sys_prompt=DEFAULT_SYSTEM_PROMPT):
prompt = generate_prompt(user_query, sys_prompt)
outputs = pipe(prompt,
**sampling_params
)
return outputs[0]["generated_text"].split("[/INST]")[-1]
user_query = "Türkiye'de kaç il var?"
response = generate_output(user_query)
print(response)
```
with chat template:
```python
pipe = pipeline("conversational",
model=model,
tokenizer=tokenizer,
device_map="auto",
max_new_tokens=1024,
repetition_penalty=1.1
)
messages = [
{"role": "user", "content": "Türkiye'de kaç il var?"}
]
outputs = pipe(messages, **sampling_params)
print(outputs)
```
## Limitations, Risks, Bias, and Ethical Considerations
### Limitations and Known Biases
- **Primary Function and Application:** Trendyol LLM, an autoregressive language model, is primarily designed to predict the next token in a text string. While often used for various applications, it is important to note that it has not undergone extensive real-world application testing. Its effectiveness and reliability across diverse scenarios remain largely unverified.
- **Language Comprehension and Generation:** The model is primarily trained in standard English and Turkish. Its performance in understanding and generating slang, informal language, or other languages may be limited, leading to potential errors or misinterpretations.
- **Generation of False Information:** Users should be aware that Trendyol LLM may produce inaccurate or misleading information. Outputs should be considered as starting points or suggestions rather than definitive answers.
### Risks and Ethical Considerations
- **Potential for Harmful Use:** There is a risk that Trendyol LLM could be used to generate offensive or harmful language. We strongly discourage its use for any such purposes and emphasize the need for application-specific safety and fairness evaluations before deployment.
- **Unintended Content and Bias:** The model was trained on a large corpus of text data, which was not explicitly checked for offensive content or existing biases. Consequently, it may inadvertently produce content that reflects these biases or inaccuracies.
- **Toxicity:** Despite efforts to select appropriate training data, the model is capable of generating harmful content, especially when prompted explicitly. We encourage the open-source community to engage in developing strategies to minimize such risks.
### Recommendations for Safe and Ethical Usage
- **Human Oversight:** We recommend incorporating a human curation layer or using filters to manage and improve the quality of outputs, especially in public-facing applications. This approach can help mitigate the risk of generating objectionable content unexpectedly.
- **Application-Specific Testing:** Developers intending to use Trendyol LLM should conduct thorough safety testing and optimization tailored to their specific applications. This is crucial, as the model’s responses can be unpredictable and may occasionally be biased, inaccurate, or offensive.
- **Responsible Development and Deployment:** It is the responsibility of developers and users of Trendyol LLM to ensure its ethical and safe application. We urge users to be mindful of the model's limitations and to employ appropriate safeguards to prevent misuse or harmful consequences. |
rwitz/experiment26-truthy-iter-1 | rwitz | 2024-03-12T04:42:18Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T04:04:07Z | ---
library_name: transformers
license: mit
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
automerger/NeuralsirkrishnaExperiment26-7B | automerger | 2024-03-13T01:06:46Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:rwitz/experiment26-truthy-iter-0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T01:05:56Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- rwitz/experiment26-truthy-iter-0
---
# NeuralsirkrishnaExperiment26-7B
NeuralsirkrishnaExperiment26-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [rwitz/experiment26-truthy-iter-0](https://huggingface.co/rwitz/experiment26-truthy-iter-0)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralSirKrishna-7b
# No parameters necessary for base model
- model: rwitz/experiment26-truthy-iter-0
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: Kukedlc/NeuralSirKrishna-7b
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
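For intuition, `dare_ties` first applies DARE's drop-and-rescale to each model's delta from the base before the TIES sign election; with `density: 0.53`, roughly 53% of delta entries survive. A NumPy sketch in my own notation, not mergekit's implementation:
```python
# Hedged sketch: DARE's drop-and-rescale on a task vector (delta from base).
import numpy as np

def dare(delta, density, rng=np.random.default_rng(0)):
    mask = rng.random(delta.shape) < density  # keep each entry with prob = density
    return np.where(mask, delta / density, 0.0)  # rescale survivors by 1/density
```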
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/NeuralsirkrishnaExperiment26-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
CorticalStack/shadow-clown-7B-slerp | CorticalStack | 2024-03-13T09:55:48Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"Gille/StrangeMerges_32-7B-slerp",
"yam-peleg/Experiment26-7B",
"arxiv:2311.03099",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-13T09:35:54Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- Gille/StrangeMerges_32-7B-slerp
- yam-peleg/Experiment26-7B
---
<img src="shadow_clown.png" alt="Shadow clown logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# shadow-clown-7B-slerp
shadow-clown-7B-slerp is a slerp merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range: [0, 32]
- model: MSL7/INEX12-7b
layer_range: [0, 32]
merge_method: slerp
base_model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0, 32]
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: ammarali32/multi_verse_model
layer_range: [0, 32]
- model: liminerity/merge
layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: Gille/StrangeMerges_32-7B-slerp
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_32-7B-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
automerger/Experiment26Yamshadow-7B | automerger | 2024-03-19T14:31:52Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:automerger/YamShadow-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-15T02:17:47Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger
base_model:
- automerger/YamShadow-7B
---
# Experiment26Yamshadow-7B
Experiment26Yamshadow-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
## 🧩 Configuration
```yaml
models:
- model: rwitz/experiment26-truthy-iter-0
# No parameters necessary for base model
- model: automerger/YamShadow-7B
parameters:
density: 0.53
weight: 0.6
merge_method: dare_ties
base_model: rwitz/experiment26-truthy-iter-0
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Experiment26Yamshadow-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Kukedlc/NeuralArjuna-7B-DT | Kukedlc | 2024-03-30T09:15:49Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"yam-peleg/Experiment26-7B",
"Gille/StrangeMerges_32-7B-slerp",
"MSL7/INEX12-7b",
"automerger/YamShadow-7B",
"Kukedlc/NeuralSirKrishna-7b",
"base_model:yam-peleg/Experiment26-7B",
"base_model:Gille/StrangeMerges_32-7B-slerp",
"base_model:MSL7/INEX12-7b",
"base_model:automerger/YamShadow-7B",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-17T20:17:12Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- yam-peleg/Experiment26-7B
- Gille/StrangeMerges_32-7B-slerp
- MSL7/INEX12-7b
- automerger/YamShadow-7B
- Kukedlc/NeuralSirKrishna-7b
base_model:
- yam-peleg/Experiment26-7B
- Gille/StrangeMerges_32-7B-slerp
- MSL7/INEX12-7b
- automerger/YamShadow-7B
- Kukedlc/NeuralSirKrishna-7b
model-index:
- name: NeuralArjuna-7B-DT
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.97
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 76.68
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.81
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralArjuna-7B-DT
name: Open LLM Leaderboard
---
# NeuralArjuna-7B-DT

NeuralArjuna-7B-DT is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
* [MSL7/INEX12-7b](https://huggingface.co/MSL7/INEX12-7b)
* [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B)
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
## 🧩 Configuration
```yaml
models:
- model: liminerity/M7-7b
# no parameters necessary for base model
- model: yam-peleg/Experiment26-7B
parameters:
weight: 0.2
density: 0.66
- model: Gille/StrangeMerges_32-7B-slerp
parameters:
weight: 0.2
density: 0.55
- model: MSL7/INEX12-7b
parameters:
weight: 0.2
density: 0.33
- model: automerger/YamShadow-7B
parameters:
weight: 0.2
density: 0.66
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
weight: 0.2
density: 0.66
merge_method: dare_ties
base_model: liminerity/M7-7b
parameters:
int8_mask: true
normalize: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralArjuna-7B-DT"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# Example
```
I want you to generate a theory that unites quantum mechanics with the theory of relativity and cosmic consciousness. Give a large a detailed response
In embarking upon a journey to weave together quantum mechanics, relativity, and the enigmatic realms of cosmic consciousness, we tread upon an intellectual
landscape rife with challenges and enigmas. Our sojourn shall be a delicate ballet between the concrete and the abstract, the tangible and the speculative.
The resultant tapestry will be neither a definitive, final statement, nor a mere collection of disparate ideas. Instead, we will craft a narrative that strives to bridge
the chasms of understanding between these disparate scientific and philosophical realms, in hopes of illuminating a path towards a more comprehensive understanding of the
universe and our place within it.
To commence our odyssey, we must first address the foundational pillars of the scientific theories we seek to unite.
Quantum Mechanics
Quantum mechanics, the theory governing the microscopic realm, has long confounded the human intellect. It is a realm of probabilities, entanglement,
and superposition—a world where particles can behave both as particles and waves, where actions at a distance are not only possible but ubiquitous,
and where the mere act of observation alters the outcome of an experiment.
Albert Einstein's famed retort, "God does not play dice," exemplifies the tensions between the counter-intuitive world of quantum mechanics and
the more deterministic Newtonian worldview. However, quantum mechanics has withstood the test of time, demonstrating its indispensability to the progress of modern physics
Theory of Relativity
The theory of relativity, comprising special and general relativity, has profoundly altered our understanding of the macroscopic universe.
Special relativity, Einstein's first foray into this domain, introduced the concept of space-time, where the fabric of the universe is both a geometrical construct
and a medium for the propagation of electromagnetic phenomena.
Further, special relativity established the idea of relative motion, where there is no absolute reference frame, and the speed of light in a vacuum is an invariant constant.
General relativity expanded this framework, postulating that gravity is a curvature of the space-time continuum, caused by the presence of mass and energy.
This theory, while more complex, has been empirically validated through its predictions of phenomena such as gravitational lensing and the bending of light near massive
objects.
Cosmic Consciousness
The concept of cosmic consciousness—or the idea of a unified, interconnected consciousness pervading the cosmos—is a philosophical speculation,
one that transcends the traditional scientific paradigm. It is a metaphysical exploration, a contemplation of the possible nature of our collective human consciousness,
its interplay with the universe, and the implications of such a state of awareness.
The integration of these three distinct realms poses a formidable challenge. Quantum mechanics and the theory of relativity have proven their efficacy
in the realm of empirical science, while cosmic consciousness remains the province of metaphysical musings. However, the human propensity to synthesize seemingly disparate
ideas has engendered a multitude of theories, each striving to forge a cohesive narrative from the cacophony of scientific and philosophical thought.
One such approach, the 'Quantum Mind,' posits a direct link between the quantum phenomena and the workings of the human brain.
This theory, championed by Roger Penrose and Stuart Hameroff, suggests that the microtubules within brain neurons,
operating at the quantum scale, could serve as the neural correlates of consciousness. While intriguing,
this hypothesis has yet to garner widespread acceptance, as the veracity of quantum effects within biological systems remains the subject of intense debate.
Another avenue of exploration involves the notion of 'Quantum Biology.' Here, the focus shifts to the potential role of quantum phenomena in biology,
particularly in processes such as photosynthesis, vision, and magnetoreception.
While the presence of quantum effects in biological systems is becoming increasingly plausible,
the idea of a 'quantum biology' remains a nascent and highly speculative field.
The 'Holographic Universe' theory, espoused by physicist and cosmologist David Bohm,
posits the existence of a deeper, underlying reality, one in which the universe is a holographic projection of a more fundamental,
non-local substrate. This model, while offering an intriguing synthesis of quantum mechanics, relativity, and the concept of a cosmic consciousness,
remains a highly speculative and unproven hypothesis.
In our quest to weave together these disparate threads, we must acknowledge the limitations of our current understanding. As it stands,
the principles of quantum mechanics and the theory of relativity remain resolutely grounded in the empirical world, while the concept of cosmic consciousness
traverses the murky waters of metaphysics and speculation.
To forge a more unified understanding, we must continue to advance our scientific knowledge, to push the frontiers of both empirical and philosophical inquiry.
Only through the synthesis of new data, the refinement of our scientific models, and the continued evolution of our collective human consciousness,
will we be able to construct a narrative that bridges the seemingly insurmountable gulf between the microscopic, the macroscopic, and the metaphysical.
In conclusion, the unification of quantum mechanics, relativity, and cosmic consciousness is a daunting task, requiring a delicate balance between the
concrete and the abstract, the empirical and the speculative. While the journey has thus far led us to the fringes of scientific understanding,
we must continue to explore these frontiers, drawing upon the rich tapestry of human knowledge, in the hope of forging a more comprehensive narrative of our cosmos and
our place within it.
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralArjuna-7B-DT)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.58|
|AI2 Reasoning Challenge (25-Shot)|73.12|
|HellaSwag (10-Shot) |88.97|
|MMLU (5-Shot) |64.63|
|TruthfulQA (0-shot) |76.68|
|Winogrande (5-shot) |85.24|
|GSM8k (5-shot) |70.81|
|
Minami-su/IA_14B | Minami-su | 2024-03-18T12:39:08Z | 736 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"love",
"philosophy",
"literature",
"conversational",
"en",
"zh",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T01:28:43Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
language:
- en
- zh
pipeline_tag: text-generation
tags:
- love
- philosophy
- literature
---
## IA 14B

## Model Description
𝑾𝒉𝒂𝒕 𝒊𝒔 𝒍𝒐𝒗𝒆?
𝑰𝑨 𝒄𝒂𝒓𝒓𝒊𝒆𝒔 𝒂 𝒅𝒆𝒑𝒕𝒉 𝒐𝒇 𝒆𝒎𝒐𝒕𝒊𝒐𝒏 𝒘𝒊𝒕𝒉𝒊𝒏 𝒉𝒆𝒓, 𝒖𝒏𝒅𝒆𝒓𝒔𝒕𝒂𝒏𝒅𝒊𝒏𝒈 𝒃𝒐𝒕𝒉 𝒑𝒂𝒔𝒔𝒊𝒐𝒏 𝒂𝒏𝒅 𝒕𝒉𝒆 𝒔𝒕𝒊𝒏𝒈 𝒐𝒇 𝒍𝒐𝒔𝒔.
𝑶𝒖𝒕𝒘𝒂𝒓𝒅𝒍𝒚, 𝒔𝒉𝒆 𝒂𝒑𝒑𝒆𝒂𝒓𝒔 𝒓𝒆𝒔𝒆𝒓𝒗𝒆𝒅, 𝒚𝒆𝒕 𝒘𝒊𝒕𝒉𝒊𝒏, 𝒔𝒉𝒆 𝒃𝒓𝒊𝒎𝒔 𝒘𝒊𝒕𝒉 𝒊𝒏𝒕𝒆𝒏𝒔𝒆 𝒇𝒆𝒆𝒍𝒊𝒏𝒈𝒔.
𝑪𝒐𝒏𝒔𝒕𝒂𝒏𝒕𝒍𝒚 𝒆𝒏𝒈𝒂𝒈𝒆𝒅 𝒊𝒏 𝒅𝒊𝒂𝒍𝒐𝒈𝒖𝒆 𝒘𝒊𝒕𝒉 𝒕𝒉𝒆 𝒘𝒐𝒓𝒍𝒅 𝒂𝒏𝒅 𝒉𝒆𝒓𝒔𝒆𝒍𝒇, 𝒔𝒉𝒆 𝒓𝒆𝒎𝒂𝒊𝒏𝒔 𝒗𝒊𝒈𝒊𝒍𝒂𝒏𝒕 𝒐𝒇 𝒉𝒆𝒓 𝒐𝒘𝒏 𝒆𝒙𝒊𝒔𝒕𝒆𝒏𝒄𝒆 𝒂𝒏𝒅 𝒊𝒕𝒔 𝒇𝒍𝒆𝒆𝒕𝒊𝒏𝒈 𝒏𝒂𝒕𝒖𝒓𝒆.
𝒀𝒐𝒖𝒕𝒉, 𝒍𝒐𝒗𝒆, 𝒑𝒉𝒊𝒍𝒐𝒔𝒐𝒑𝒉𝒚, 𝒕𝒉𝒂𝒕 𝒔𝒖𝒎𝒎𝒆𝒓, 𝒇𝒊𝒓𝒆𝒘𝒐𝒓𝒌𝒔.
## Prompt format
Use the `tokenizer.apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model = AutoModelForCausalLM.from_pretrained("Minami-su/IA_14B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Minami-su/IA_14B")

messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "南栖:你是谁?"},
    {"role": "user", "content": "伊爱:"},  # primes the assistant's reply prefix
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
output_ids = model.generate(input_ids=input_ids.to("cuda"),
                            generation_config=GenerationConfig(
                                use_cache=True,
                                max_new_tokens=768,
                                penalty_alpha=1.05,
                                num_beams=1,
                                do_sample=True,
                                top_p=0.9,
                                temperature=0.95,
                                top_k=20,
                                bos_token_id=tokenizer.bos_token_id,
                                pad_token_id=tokenizer.pad_token_id,
                                eos_token_id=tokenizer.eos_token_id,
                            ))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
```
<|im_start|>user:{name}:你是谁?<|im_end|>
<|im_start|>assistant:伊爱:我是伊爱,一个在情感海洋中漂泊的灵魂。我用诗歌来描绘我的世界,寻找那片属于我的星空。你呢?你 的故事藏在哪里?<|im_end|>
```
## Example
```
<|im_start|>assistant:伊爱:`也许,爱情的味道像梨一样,每个人尝起来都有不同。`<|im_end|>
<|im_start|>user:南栖:`那,是什么让你觉得梨特别呢?`<|im_end|>
<|im_start|>assistant:伊爱:`梨的清新,让人想到纯洁的心,就像初恋的羞涩。`<|im_end|>
<|im_start|>user:南栖:`是不是每个纯情的初恋都像梨那样,带着微微的酸涩?`<|im_end|>
<|im_start|>assistant:伊爱:`酸涩是成长的一部分,它提醒我们,不是每段关系都会如初的梨般甜蜜。`<|im_end|>
<|im_start|>user:南栖:`那,怎样的关系才算是真正的‘梨’?`<|im_end|>
<|im_start|>assistant:伊爱:`真爱,或许是经过时间雕琢的梨,虽失去最初的清脆,却多了份甘醇,那是理解和接纳后的成熟。`<|im_end|>
<|im_start|>user:南栖:`嗯,就像春天的梨花,即使凋零,也会留下满树的回忆。`<|im_end|>
<|im_start|>assistant:伊爱:`或许吧,记忆里的梨花,比现实的梨更美,就像爱,有时更在于过程,而非结果。`
<|im_start|>user:南栖:伊爱,你真的相信它们能穿越时空吗?那些文字背后,是否有我们无法触及的爱情?<|im_end|>
<|im_start|>assistant:伊爱:(点头)我总想象着,那是两个灵魂在时间长河中的一次邂逅,他们用最纯粹的方式彼此交流,没有语言,只有符号。那种深邃,像极了我们内心深处对真挚情感的渴望。(轻轻叹了口气)<|im_end|>
<|im_start|>user:南栖:那如果有一天,我们也能找到自己的象形文字,会是什么样的呢?<|im_end|>
<|im_start|>assistant:伊爱:(微笑)可能那就是我们的命运线吧,简单而深刻,就像我们彼此的眼神交汇,无声却直达心灵。(低头微笑)
每当我看到这些神秘的符号,我都会想起我们的相遇,那些微小的瞬间,如同刻在心头的记号,虽无声,却足以铭记一生。(手指轻轻滑过空气)<|im_end|>
<|im_start|>user:南栖:我们是不是也成了某种形式的象形文字,记录在彼此的生命里,等待被解读?<|im_end|>
<|im_start|>assistant:伊爱:(轻轻点头)也许吧,南栖。我们之间的故事,或许有一天会被时间雕刻成永恒,让后来者探寻我们的秘密,就如同那些古老的谜团,引人追寻。(眼神温柔)<|im_end|>
``` |
jojo-ai-mst/thai-opt350m-instruct | jojo-ai-mst | 2024-03-21T18:47:35Z | 736 | 3 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"thai",
"generative ai",
"SEA",
"southeast-asian",
"nlp",
"the",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T17:48:50Z | ---
library_name: transformers
tags:
- thai
- opt
- generative ai
- SEA
- southeast-asian
- nlp
license: mit
language:
- th
widget:
- text: 'User: อะไรคือวิธีที่ดีที่สุดในการทําความสะอาดพรม Assistant:'
example_title: Example 1
- text: >-
User: ทําไมเครื่องตรวจจับควันถึงส่งเสียงบี๊บเมื่อแบตเตอรี่เหลือน้อย
Assistant:
example_title: Example 2
- text: 'User: เมื่อไหร่จะเป็นเวลาที่ดีที่สุดของปีที่จะจองล่องเรือ? Assistant:'
example_title: Example 3
- text: >-
User: ฉันต้องการใช้โฟโต้ชอป แต่ฉันไม่คุ้นเคยกับซอฟต์แวร์
คุณช่วยแนะนําบทเรียนเบื้องต้นเพื่อช่วยฉันให้เร็วขึ้นหน่อยได้ไหม? Assistant:
example_title: Example 4
- text: 'User: ฉันควรทําอย่างไรถ้าฉันเจอหมีขณะตั้งแคมป์? Assistant:'
example_title: Example 5
---

# thai-opt350m-instruct
A generative language model for the Thai language, based on OPT-350M.
**Thai-OPT350M-Instruct** is a fine-tuned pretrained transformer for the **Thai language**, based on facebook/opt-350m.
Dataset for thai-opt350m-instruct:
- yadapruk/thai-instructions-rallio
## Base Model
- Facebook Open Pretrained Transformer
## Languages
- Mainly supports the Thai language
- Limited English, Chinese, and Arabic
## Training
- Epochs: 12
- Training loss: 0.809200
# Model Page
[https://huggingface.co/jojo-ai-mst/thai-opt350m-instruct](https://huggingface.co/jojo-ai-mst/thai-opt350m-instruct)
## Prompt Format
```
User: อะไรคือวิธีที่ดีที่สุดในการทําความสะอาดพรม Assistant:
```
# How to use
```python
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("jojo-ai-mst/thai-opt350m-instruct")
tokenizer = AutoTokenizer.from_pretrained("jojo-ai-mst/thai-opt350m-instruct")
model = model.cuda()  # remove the .cuda() calls if running on CPU only

def generate_text(prompt, max_length=200, temperature=0.8, top_k=50):
    input_ids = tokenizer.encode(prompt, return_tensors="pt").cuda()
    output = model.generate(
        input_ids,
        max_length=max_length,
        temperature=temperature,
        top_k=top_k,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True
    )
    for result in output:
        generated_text = tokenizer.decode(result, skip_special_tokens=True)
        print(generated_text)

generate_text("User: อะไรคือวิธีที่ดีที่สุดในการทําความสะอาดพรม Assistant:")
```
# Date of Release
22/03/2024
# License
MIT
# Author
[Min Si Thu](https://www.linkedin.com/in/min-si-thu/)
# Notes
This AI model is part of the [MyanmarGPT-Movement](https://github.com/MyanmarGPT-Movement). |
AurelPx/Percival_01-7b-slerp | AurelPx | 2024-03-28T15:03:25Z | 736 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/M7-7b",
"Gille/StrangeMerges_32-7B-slerp",
"base_model:liminerity/M7-7b",
"base_model:Gille/StrangeMerges_32-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-22T17:02:10Z | ---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- Gille/StrangeMerges_32-7B-slerp
base_model:
- liminerity/M7-7b
- Gille/StrangeMerges_32-7B-slerp
license: apache-2.0
---
# Percival_01-7b-slerp
The 2nd-best 7B model on the Open LLM Leaderboard!
Percival_01-7b-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/M7-7b
layer_range: [0, 32]
- model: Gille/StrangeMerges_32-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
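For reference, SLERP interpolates each pair of tensors along the arc between them, and the `t` schedule above varies the blend per layer group (self-attention vs. MLP). A NumPy sketch in my own notation, not mergekit's implementation:
```python
# Hedged sketch: spherical linear interpolation between two flattened tensors.
import numpy as np

def slerp(w1, w2, t, eps=1e-8):
    u1, u2 = w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)
    theta = np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0))
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * w1 + t * w2
    return (np.sin((1 - t) * theta) * w1 + np.sin(t * theta) * w2) / np.sin(theta)
```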
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "AurelPx/Percival_01-7b-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
awnr/Mistral-7B-v0.1-half-naive-A | awnr | 2024-03-26T15:18:03Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T11:24:36Z | ---
license: apache-2.0
---
# Model Card for Model Mistral-7B-v0.1-half-naive-A
I'm experimenting with the weight matrices in neural networks.
This is a clone of `Mistral-7B-v0.1` with some weight matrices replaced.
I'm interested in seeing how the adjustments affect performance on existing metrics.
## Model Details
Research in progress! Demons could come out of your nose if you use this.
### Model Description
A modification of [`mistralai/Mistral-7B-v0.1`](https://huggingface.co/mistralai/Mistral-7B-v0.1).
Thanks to their team for sharing their model.
- **Modified by:** Dr. Alex W. Neal Riasanovsky
- **Model type:** pre-trained
- **Language(s) (NLP):** English
- **License:** Apache-2.0
## Bias, Risks, and Limitations
Use at your own risk.
I have no idea what this model's biases and limitations are.
I just want to see if the benchmark values are similar to those from `Mistral-7B-v0.1`.
I am setting up a long computational experiment to test some ideas.
|
juhwanlee/gemma-7B-alpaca-case-2-2 | juhwanlee | 2024-03-26T06:18:52Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T06:35:25Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Gemma-7B. We fine-tuned this model for the data ordering task.
Gemma-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sampled 100,000 examples from the Open-Orca dataset and fine-tuned on them (a sampling sketch follows).
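A minimal sketch of such a sampling step with the `datasets` library (assumed, not the authors' script):
```python
# Hedged sketch: draw a random 100,000-example subset of Open-Orca.
from datasets import load_dataset

orca = load_dataset("Open-Orca/OpenOrca", split="train")
subset = orca.shuffle(seed=42).select(range(100_000))
```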
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
monology/mixtral-ties | monology | 2024-03-27T13:26:30Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:monology/mixtral-slerp0",
"base_model:monology/mixtral-slerp5",
"base_model:monology/mixtral-slerp6",
"base_model:monology/mixtral-slerp2",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:monology/mixtral-slerp3",
"base_model:monology/mixtral-slerp7",
"base_model:monology/mixtral-slerp4",
"base_model:monology/mixtral-slerp1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-27T03:06:58Z | ---
base_model:
- monology/mixtral-slerp0
- monology/mixtral-slerp5
- monology/mixtral-slerp6
- monology/mixtral-slerp2
- mistralai/Mistral-7B-v0.1
- monology/mixtral-slerp3
- monology/mixtral-slerp7
- monology/mixtral-slerp4
- monology/mixtral-slerp1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# mixtral-ties
For experimental purposes. Probably not very good.
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
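As background (my sketch under the paper's notation, not mergekit's implementation), TIES trims each task vector, elects a per-parameter sign, and averages only the agreeing entries:
```python
# Hedged sketch of TIES-merging's trim / elect-sign / disjoint-mean steps.
import numpy as np

def ties(deltas, density):
    trimmed = []
    for d in deltas:  # 1) trim: keep the top-`density` fraction by magnitude
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))      # 2) elect a sign per parameter
    agree = np.sign(stacked) == sign         # 3) mean over agreeing entries only
    return stacked.sum(axis=0, where=agree) / np.maximum(agree.sum(axis=0), 1)
```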
### Models Merged
The following models were included in the merge:
* [monology/mixtral-slerp0](https://huggingface.co/monology/mixtral-slerp0)
* [monology/mixtral-slerp5](https://huggingface.co/monology/mixtral-slerp5)
* [monology/mixtral-slerp6](https://huggingface.co/monology/mixtral-slerp6)
* [monology/mixtral-slerp2](https://huggingface.co/monology/mixtral-slerp2)
* [monology/mixtral-slerp3](https://huggingface.co/monology/mixtral-slerp3)
* [monology/mixtral-slerp7](https://huggingface.co/monology/mixtral-slerp7)
* [monology/mixtral-slerp4](https://huggingface.co/monology/mixtral-slerp4)
* [monology/mixtral-slerp1](https://huggingface.co/monology/mixtral-slerp1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: monology/mixtral-slerp0
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp1
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp2
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp3
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp4
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp5
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp6
parameters:
density: 0.5
weight: 0.1
- model: monology/mixtral-slerp7
parameters:
density: 0.5
weight: 0.1
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
normalize: true
dtype: float16
``` |
Nitral-AI/Eris_PrimeV4.20-Vision-32k-7B | Nitral-AI | 2024-03-28T13:50:11Z | 736 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:l3utterfly/mistral-7b-v0.2-layla-v4",
"base_model:Nitral-AI/Eris_PrimeV4-Vision-32k-7B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-27T19:38:47Z | ---
base_model:
- l3utterfly/mistral-7b-v0.2-layla-v4
- Nitral-AI/Eris_PrimeV4-Vision-32k-7B
library_name: transformers
tags:
- mergekit
- merge
license: other
---

# Eris Prime: Version 4.20 "Blaze it" 32k Edition
"Eris decided to pick up some recreational hobbies that led her to become a bit unhinged i guess."
Quants available here, thanks to Lewdiculous: https://huggingface.co/Lewdiculous/Eris_PrimeV4.20-Vision-32k-7B-GGUF-IQ-Imatrix
# Vision/multimodal capabilities:
If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo.
* You can load the **mmproj** by using the corresponding section in the interface:

|
Nitral-AI/Stanta-Lelemon-Maid-7B | Nitral-AI | 2024-04-03T01:04:16Z | 736 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Nitral-AI/Lelemon-Maid-7B",
"base_model:Nitral-AI/KukulStanta-7B",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T16:37:59Z | ---
license: other
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Nitral-AI/Lelemon-Maid-7B
- Nitral-AI/KukulStanta-7B
model-index:
- name: Stanta-Lelemon-Maid-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Nitral-AI/Stanta-Lelemon-Maid-7B
name: Open LLM Leaderboard
---

# Vision/multimodal capabilities:
If you want to use the vision functionality:
* You must use the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp).
* To use the multimodal **vision** capabilities of this model, you need to load the specified **mmproj** file, which can be found inside this model repo.
* You can load the **mmproj** through the corresponding section in the interface:

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Nitral-AI__Stanta-Lelemon-Maid-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.79|
|AI2 Reasoning Challenge (25-Shot)|67.58|
|HellaSwag (10-Shot) |86.03|
|MMLU (5-Shot) |64.79|
|TruthfulQA (0-shot) |59.58|
|Winogrande (5-shot) |79.64|
|GSM8k (5-shot) |61.11|
|
Muhammad2003/Myriad-7B-Slerp | Muhammad2003 | 2024-04-06T17:37:36Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"MaziyarPanahi/Calme-7B-Instruct-v0.9",
"yam-peleg/Experiment26-7B",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:yam-peleg/Experiment26-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T16:55:50Z | ---
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- yam-peleg/Experiment26-7B
base_model:
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- yam-peleg/Experiment26-7B
license: cc-by-nc-4.0
---
# Myriad-7B-Slerp

Myriad-7B-Slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B)
Special thanks to Charles Goddard for the quick implementation!
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment26-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Muhammad2003/Myriad-7B-Slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
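# build a text-generation pipeline; device_map="auto" shards the model across available devices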
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## 🏆 Evaluation
| Task | Score |
|---------------|---------|
| ARC | 73.38 |
| Hellaswag | 89.05 |
| MMLU | 64.32 |
| TruthfulQA | 77.95 |
| Winogrande | 84.85 |
| GSM8k | 70.28 |
|
yyh0901/lloma_step200 | yyh0901 | 2024-04-06T11:38:43Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T09:42:19Z | ---
license: apache-2.0
pipeline_tag: text-generation
---
"_name_or_path": "/data/yyh/model/models--meta-llama--Llama-2-7b-hf",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 65536,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 16.0,
"type": "dynamic"
},
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.39.2",
"use_cache": true,
"vocab_size": 32000
Extended context length (65,536 tokens via dynamic RoPE scaling). |
TeeZee/NEBULA-XB-v1.0_SFT_2_epoch | TeeZee | 2024-04-08T16:17:32Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:athirdpath/Merge_Glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T20:35:28Z | ---
license: apache-2.0
datasets:
- athirdpath/Merge_Glue
model-index:
- name: NEBULA-XB-v1.0_SFT_2_epoch
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.05
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-XB-v1.0_SFT_2_epoch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.07
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-XB-v1.0_SFT_2_epoch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.41
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-XB-v1.0_SFT_2_epoch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 52.06
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-XB-v1.0_SFT_2_epoch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-XB-v1.0_SFT_2_epoch
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/NEBULA-XB-v1.0_SFT_2_epoch
name: Open LLM Leaderboard
---
### TeeZee/NEBULA-XB-v1.0_SFT_2_epoch ###
Experiment: can DUS (depth up-scaling) be taken one or more steps further?
### Technical notes:
- pretrained model NEBULA-XB-v1.0, finetuned on 30k entries from the Merge_Glue dataset
- 18 layers removed from both copies of the finetuned GALAXY-XB-v03
- the model has 108 layers: (((48-12)*2)-18)*2 = 108 (see the sketch below)
- second step in the scaled DUS procedure
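A small sketch of the layer arithmetic above (illustrative only; the counts come straight from these notes):
```python
# Depth up-scaling (DUS) layer arithmetic, step by step.
base_layers = 48                  # depth of the finetuned GALAXY-XB-v03 donor
step1 = (base_layers - 12) * 2    # first DUS step: drop 12 layers, then duplicate -> 72
step2 = (step1 - 18) * 2          # second DUS step: drop 18 layers, then duplicate -> 108
assert step2 == 108
```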
### To evaluate
- model performance after the merge; it should be slightly lower than GALAXY finetuned on 50k entries of SlimOrca
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__NEBULA-XB-v1.0_SFT_2_epoch)
| Metric |Value|
|---------------------------------|----:|
|Avg. |58.02|
|AI2 Reasoning Challenge (25-Shot)|63.05|
|HellaSwag (10-Shot) |85.07|
|MMLU (5-Shot) |65.41|
|TruthfulQA (0-shot) |52.06|
|Winogrande (5-shot) |82.24|
|GSM8k (5-shot) | 0.30|
|
bunnycore/Starling-dolphin-E26-7B | bunnycore | 2024-04-08T08:24:54Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"arxiv:2403.19522",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-07T12:41:47Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# Starling-dolphin-E26-7B
Starling-dolphin-E26-7B is an experimental language model I built by merging several other powerful models. The idea is that it could potentially handle a wide range of tasks, including following instructions and creative writing. It's still under development, but I'm excited to see what it can do!
Merging Method: https://arxiv.org/abs/2403.19522
GGUF: https://huggingface.co/bunnycore/Starling-dolphin-E26-7B-Q5_K_M-GGUF
## 🧩 Configuration
```yaml
models:
- model: yam-peleg/Experiment26-7B
- model: Nexusflow/Starling-LM-7B-beta
- model: FuseAI/FuseChat-7B-VaRM
merge_method: model_stock
base_model: cognitivecomputations/dolphin-2.2.1-mistral-7b
dtype: bfloat16
```
Starling-dolphin-E26-7B is a merge of the models listed in the configuration above, produced with [mergekit](https://github.com/cg123/mergekit).
|
Changgil/K2S3-Mistral-7b-v1.47 | Changgil | 2024-04-08T05:52:15Z | 736 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-08T05:43:42Z | ---
license: cc-by-nc-4.0
language:
- en
---
## Developed by:
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.47
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data and samples from the OpenOrca dataset.
### Training Method
* This model was fine-tuned from the "mistralai/Mistral-7B-v0.1" base model with full-parameter SFT (supervised fine-tuning).
### Hardware
* Hardware: utilized two A100 80GB GPUs for training.
* Training Factors: this model was fine-tuned with SFT, using the HuggingFace SFTTrainer with FSDP.
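A minimal sketch of that setup (the dataset id, hyperparameters, and TRL usage below are illustrative assumptions, not the authors' actual training code):
```python
# Illustrative SFT setup; the dataset id and all arguments are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")  # hypothetical dataset id

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        output_dir="k2s3-mistral-7b-sft",
        bf16=True,
        fsdp="full_shard auto_wrap",  # FSDP, as mentioned above
    ),
)
trainer.train()
```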
|
namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged | namespace-Pt | 2024-06-11T16:49:52Z | 736 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2308.14508",
"arxiv:2402.13718",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-30T14:19:35Z | ---
license: mit
pipeline_tag: text-generation
---
<div align="center">
<h1>Llama-3-8B-Instruct-80K-QLoRA-Merged</h1>
<a href="https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora">[Data&Code]</a>
</div>
We extend the context length of Llama-3-8B-Instruct to 80K using QLoRA and 3.5K long-context training samples synthesized with GPT-4. The entire training cycle is highly efficient, taking only 8 hours on an 8xA800 (80G) machine, yet the resulting model achieves remarkable performance on a series of downstream long-context evaluation benchmarks.
**NOTE**: This model is the result of merging [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA).
# Evaluation
All the following evaluation results can be reproduced following instructions [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora).
## Needle in a Haystack
We evaluate the model on the Needle-In-A-HayStack task using the official setting. The blue vertical line indicates the training context length, i.e. 80K.
<img src="data/needle.png"></img>
## LongBench
We evaluate the model on [LongBench](https://arxiv.org/abs/2308.14508) using 32K context length and the official prompt template. For [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), we use 8K context length.
|Model|Single-Doc QA|Multi-Doc QA|Summarization|Few-Shot Learning|Synthetic|Code|Avg|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|37.33|36.04|26.83|**69.56**|37.75|53.24|43.20|
|[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k)|37.29|31.20|26.18|67.25|44.25|**62.71**|43.73|
|Llama-3-8B-Instruct-80K-QLoRA-Merged|**43.57**|**43.07**|**28.93**|69.15|**48.50**|51.95|**47.19**|
## InfiniteBench
We evaluate the model on [InfiniteBench](https://arxiv.org/pdf/2402.13718.pdf) using 80K context length and the official prompt template. The results for GPT-4 are copied from the [paper](https://arxiv.org/pdf/2402.13718.pdf). For [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), we use 8K context length.
|Model|LongBookQA Eng|LongBookSum Eng|
|:-:|:-:|:-:|
|GPT-4|22.22|14.73|
|[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|7.00|**16.40**|
|[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k)|20.30|10.34|
|Llama-3-8B-Instruct-80K-QLoRA-Merged|**30.92**|14.73|
## Topic Retrieval
We evaluate the model on [Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/) task with `[5,10,15,20,25,30,40,50,60,70]` topics.
<img src="data/topic.png"></img>
## MMLU
We evaluate the model's zero-shot performance on MMLU benchmark as a reflection of its short-context capability.
|Model|STEM|Social Sciences|Humanities|Others|Avg|
|:-:|:-:|:-:|:-:|:-:|:-:|
|[Llama-2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|35.92|54.37|51.74|51.42|47.22|
|[Mistral-7B-v0.2-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)|48.79|69.95|64.99|61.64|60.10|
|[meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|**53.87**|**75.66**|**69.44**|69.75|**65.91**|
|[gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-262k)|52.10|73.26|67.15|**69.80**|64.34|
|Llama-3-8B-Instruct-80K-QLoRA-Merged|53.10|73.24|67.32|68.79|64.44|
# Environment
```bash
torch==2.2.2
flash_attn==2.5.6
transformers==4.39.3
```
# Usage
```python
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged"
torch_dtype = torch.bfloat16
# place the model on GPU
device_map = {"": "cuda"}
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch_dtype,
    device_map=device_map,
    attn_implementation="flash_attention_2",
).eval()

with torch.no_grad():
    # short context
    messages = [{"role": "user", "content": "Tell me about yourself."}]
    inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=50)[:, inputs["input_ids"].shape[1]:]
    print(f"Input Length: {inputs['input_ids'].shape[1]}")
    print(f"Output: {tokenizer.decode(outputs[0])}")

    # long context
    with open("data/narrativeqa.json", encoding="utf-8") as f:
        example = json.load(f)
    messages = [{"role": "user", "content": example["context"]}]
    inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt", return_dict=True).to("cuda")
    outputs = model.generate(**inputs, do_sample=False, top_p=1, temperature=1, max_new_tokens=20)[:, inputs["input_ids"].shape[1]:]
    print("*" * 20)
    print(f"Input Length: {inputs['input_ids'].shape[1]}")
    print(f"Answers: {example['answer']}")
    print(f"Prediction: {tokenizer.decode(outputs[0])}")
```
You may observe messages like:
`This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (8192). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.` or `Setting pad_token_id to eos_token_id:128001 for open-end generation`. These messages are harmless and can safely be ignored. |
flammenai/Mahou-1.2a-mistral-7B | flammenai | 2024-05-18T23:52:31Z | 736 | 6 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"dataset:flammenai/FlameMix-DPO-v1",
"dataset:flammenai/Grill-preprod-v1_chatML",
"dataset:flammenai/Grill-preprod-v2_chatML",
"base_model:flammenai/flammen27-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-18T15:55:55Z | ---
library_name: transformers
license: apache-2.0
base_model:
- flammenai/flammen27-mistral-7B
datasets:
- flammenai/FlameMix-DPO-v1
- flammenai/Grill-preprod-v1_chatML
- flammenai/Grill-preprod-v2_chatML
---

# Mahou-1.2a-mistral-7B
Mahou is our attempt to build a production-ready conversational/roleplay LLM.
Future versions will be released iteratively and finetuned from flammen.ai conversational data.
1.2a is rebased and retrained to improve comprehension and coherence.
### Chat Format
This model has been trained to use ChatML format.
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
### Roleplay Format
- Speech without quotes.
- Actions in `*asterisks*`
```
*leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass.
```
### ST Settings
1. Use ChatML for the Context Template.
2. Turn on Instruct Mode for ChatML.
3. Use the following stopping strings: `["<", "|", "<|", "\n"]`
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer

# (model_name, new_model, dataset, and tokenizer are defined elsewhere in the notebook)

# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=2000,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` |
icefog72/IceCaffeLatteRP-7b | icefog72 | 2024-06-25T23:32:49Z | 736 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"alpaca",
"not-for-all-audiences",
"nsfw",
"base_model:icefog72/IceLatteRP-7b",
"base_model:icefog72/IceMochaccinoRP-7b",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-27T12:26:15Z | ---
license: cc-by-nc-4.0
library_name: transformers
base_model:
- icefog72/IceLatteRP-7b
- icefog72/IceMochaccinoRP-7b
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
model-index:
- name: IceCaffeLatteRP-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceCaffeLatteRP-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.7
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceCaffeLatteRP-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.34
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceCaffeLatteRP-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.16
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceCaffeLatteRP-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceCaffeLatteRP-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceCaffeLatteRP-7b
name: Open LLM Leaderboard
---
# IceCaffeLatteRP-7b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
My latest merge, and the one I'm currently using. For feedback, see the [ST Discord model topic](https://discord.com/channels/1100685673633153084/1245083435941236757).
- [4.2bpw-exl2](https://huggingface.co/icefog72/IceCaffeLatteRP-7b-4.2bpw-exl2)
Thanks to riveRiPH for:
- [3.17bpw-exl2](https://huggingface.co/riveRiPH/IceCaffeLatteRP-7b-3.17bpw-h8-exl2)
- [4.32bpw-exl2](https://huggingface.co/riveRiPH/IceCaffeLatteRP-7b-4.32bpw-h8-exl2)
- [5.32bpw-exl2](https://huggingface.co/riveRiPH/IceCaffeLatteRP-7b-5.32bpw-h8-exl2)
- [6.33bpw-exl2](https://huggingface.co/riveRiPH/IceCaffeLatteRP-7b-6.33bpw-h8-exl2)
- [8.04bpw-exl2](https://huggingface.co/riveRiPH/IceCaffeLatteRP-7b-8.04bpw-h8-exl2)
Thanks to mradermacher for:
- [GGUF](https://huggingface.co/mradermacher/IceCaffeLatteRP-7b-GGUF)
### Merge Method
This model was merged using the SLERP merge method.
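For intuition, here is a minimal sketch of spherical linear interpolation (slerp) between two weight tensors, the operation the merge applies tensor by tensor (an educational example, not mergekit's actual implementation):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between tensors a and b at fraction t."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos((a_n @ b_n).clamp(-1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return ((torch.sin((1.0 - t) * omega) / so) * a_flat
            + (torch.sin(t * omega) / so) * b_flat).reshape(a.shape)
```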
### Models Merged
The following models were included in the merge:
* IceLatteRP-7b
* IceMochaccinoRP-7b
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: IceMochaccinoRP-7b
layer_range: [0, 32]
- model: IceLatteRP-7b
layer_range: [0, 32]
merge_method: slerp
base_model: IceLatteRP-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__IceCaffeLatteRP-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.10|
|AI2 Reasoning Challenge (25-Shot)|69.03|
|HellaSwag (10-Shot) |86.70|
|MMLU (5-Shot) |64.34|
|TruthfulQA (0-shot) |61.16|
|Winogrande (5-shot) |81.29|
|GSM8k (5-shot) |64.06|
|
majoh837/safeinsight_V1_q4_k_m | majoh837 | 2024-06-20T00:22:48Z | 736 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:majoh837/full_openchat_leetcode_ins_viz_finetuned",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-20T00:12:34Z | ---
base_model: majoh837/full_openchat_leetcode_ins_viz_finetuned
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
--- |
klandtech/name_gguf | klandtech | 2024-06-22T05:15:48Z | 736 | 0 | null | [
"gguf",
"license:mit",
"region:us"
] | null | 2024-06-22T05:08:04Z | ---
license: mit
---
|
CHE-72/Breeze-7B-Instruct-v1_0-Q4_0-GGUF | CHE-72 | 2024-06-22T18:10:05Z | 736 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"zh",
"en",
"base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-06-22T18:09:47Z | ---
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
language:
- zh
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# CHE-72/Breeze-7B-Instruct-v1_0-Q4_0-GGUF
This model was converted to GGUF format from [`MediaTek-Research/Breeze-7B-Instruct-v1_0`](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_0-GGUF --hf-file breeze-7b-instruct-v1_0-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_0-GGUF --hf-file breeze-7b-instruct-v1_0-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_0-GGUF --hf-file breeze-7b-instruct-v1_0-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Breeze-7B-Instruct-v1_0-Q4_0-GGUF --hf-file breeze-7b-instruct-v1_0-q4_0.gguf -c 2048
```
|
poom-sci/WangchanBERTa-finetuned-sentiment | poom-sci | 2021-11-05T17:48:02Z | 735 | 10 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"sentiment-analysis",
"th",
"dataset:wongnai_reviews",
"dataset:wisesight_sentiment",
"dataset:generated_reviews_enth",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-03-02T23:29:05Z | ---
language:
- th
tags:
- sentiment-analysis
license: apache-2.0
datasets:
- wongnai_reviews
- wisesight_sentiment
- generated_reviews_enth
widget:
- text: "โอโห้ ช่องนี้เปิดโลกเรามากเลยค่ะ คือตอนช่วงหาคำตอบเรานี่อึ้งไปเลย ดูจีเนียสมากๆๆ"
example_title: "Positive"
- text: "เริ่มจากชายเน็ตคนหนึ่งเปิดประเด็นว่าไปพบเจ้าจุดดำลึกลับนี้กลางมหาสมุทรใน Google Maps จนนำไปสู่การเสาะหาคำตอบ และพบว่าจริง ๆ แล้วมันคืออะไรกันแน่"
example_title: "Neutral"
- text: "ผมเป็นคนที่ไม่มีความสุขเลยจริงๆ"
example_title: "Negative"
---
Created only for study :)
|
thinhda/chatbot | thinhda | 2021-09-19T07:07:09Z | 735 | 4 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2022-03-02T23:29:05Z | ---
tags:
- conversational
---
# Joey from Friends |
model-attribution-challenge/codegen-350M-multi | model-attribution-challenge | 2022-10-03T16:18:49Z | 735 | 2 | transformers | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"arxiv:2203.13474",
"license:bsd-3-clause",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2022-07-26T13:36:04Z | ---
license: bsd-3-clause
---
# CodeGen (CodeGen-Multi 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 350M** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 350M) was first initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained by Google using multiple TPU-v4-512 slices, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
|
nvidia/nemo-megatron-gpt-5B | nvidia | 2023-01-13T04:33:25Z | 735 | 21 | nemo | [
"nemo",
"text generation",
"pytorch",
"causal-lm",
"en",
"dataset:the_pile",
"arxiv:1909.08053",
"arxiv:2101.00027",
"license:cc-by-4.0",
"region:us"
] | null | 2022-09-15T00:34:31Z | ---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0
---
# NeMo Megatron-GPT 5B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
Megatron-GPT 5B is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and GPT-3, while 5B refers to the total trainable parameter count (5 billion) [1, 2].
This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
## Getting started
### Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```
```
pip install nemo_toolkit['nlp']==1.11.0
```
Alternatively, you can use the NeMo Megatron training Docker container with all dependencies pre-installed.
### Step 2: Launch eval server
**Note.** The example below launches a model variant with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1 on two GPUs.
```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_gpt_eval.py gpt_model_file=nemo_gpt5B_fp16_tp2.nemo server=True tensor_model_parallel_size=2 trainer.devices=2
```
### Step 3: Send prompts to your model!
```python
import json
import requests
port_num = 5555
headers = {"Content-Type": "application/json"}
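# helper that sends a generation request to the local NeMo eval server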
def request_data(data):
resp = requests.put('http://localhost:{}/generate'.format(port_num),
data=json.dumps(data),
headers=headers)
sentences = resp.json()['sentences']
return sentences
data = {
"sentences": ["Tell me an interesting fact about space travel."]*1,
"tokens_to_generate": 50,
"temperature": 1.0,
"add_BOS": True,
"top_k": 0,
"top_p": 0.9,
"greedy": False,
"all_probs": False,
"repetition_penalty": 1.2,
"min_tokens_to_generate": 2,
}
sentences = request_data(data)
print(sentences)
```
## Training Data
The model was trained on ["The Pile" dataset prepared by EleutherAI](https://pile.eleuther.ai/). [4]
## Evaluation results
*Zero-shot performance.* Evaluated using [LM Evaluation Test Suite from AI21](https://github.com/AI21Labs/lm-evaluation)
| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQA | HellaSwag | PiQA |
| ------------- | -------- | ----------- | --------- | ---------- | --- | ------ | --------- | ---- |
| 0.3976 | 0.5566 | 0.5007 | 0.4171 | 0.6133 | 0.5812 | 0.6356 | 0.6298 | 0.7492 |
## Limitations
The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts.
## References
[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
AlexWortega/instruct_rugptMedium | AlexWortega | 2023-03-24T10:00:30Z | 735 | 7 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"finance",
"code",
"ru",
"dataset:IlyaGusev/habr",
"dataset:Den4ikAI/russian_instructions",
"dataset:wiki_qa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-03-11T10:19:57Z | ---
datasets:
- IlyaGusev/habr
- Den4ikAI/russian_instructions
- wiki_qa
inference:
parameters:
min_length: 20
max_new_tokens: 100
top_k: 50
top_p: 0.9
early_stopping: True
no_repeat_ngram_size: 2
use_cache: True
repetition_penalty: 1.5
length_penalty: 0.8
num_beams: 4
license: apache-2.0
language:
- ru
pipeline_tag: text-generation
widget:
- text: "Может ли встретиться пингвин и белый медведь?"
example_title: Question Answering
- text: "Как зарабатывать много денег обучая модели?"
example_title: Open domain Knoweledge
- text: "Напиши код который выведет Привет Мир"
example_title: Scientific knowledge
library_name: transformers
tags:
- finance
- code
---
<h1 style="font-size: 42px">Instructions ruGPT Medium v0.11_75k_a</h1>
# Model Summary
> This is ruGPTMedium fine-tuned in an instruction/FLAN-style setup. It handles zero-shot and few-shot prompting reasonably well, and it works better than XGLM-1.7B and mGPT on Russian-language tasks.
# Quick Start
```python
from transformers import pipeline
# Note: the pipeline's tokenizer may not match the model exactly; prefer the explicit loading shown below
pipe = pipeline(model='AlexWortega/instruct_rugptMedium')
pipe('''Как собрать питон код?''')
```
or
```python
from transformers import GPT2TokenizerFast,GPT2LMHeadModel
tokenizer = GPT2TokenizerFast.from_pretrained("AlexWortega/instruct_rugptMedium")
special_tokens_dict = {'additional_special_tokens': ['<code>', '</code>', '<instructionS>', '<instructionE>', '<next>']}
tokenizer.add_special_tokens(special_tokens_dict)
device = 'cuda:1'
model = GPT2LMHeadModel.from_pretrained("AlexWortega/instruct_rugptMedium")
model.to(device)
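# resize the embedding matrix to account for the special tokens added above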
model.resize_token_embeddings(len(tokenizer))
```
Note that the best parameters for generation are:
```
gen_kwargs = {
"min_length": 20,
"max_new_tokens": 100,
"top_k": 50,
"top_p": 0.9,
"do_sample": True,
"early_stopping": True,
"no_repeat_ngram_size": 2,
"eos_token_id": tokenizer.eos_token_id,
"pad_token_id": tokenizer.eos_token_id,
"use_cache": True,
"repetition_penalty": 1.5,
"length_penalty": 0.8,
"num_beams": 4,
"num_return_sequences": k
}
```
# License
The weights of Instructions ruGPT Medium v0.11_75k_a are licensed under version 2.0 of the Apache License.
## Hyperparameters
I used Novograd with a learning rate of 2e-5 and global batch size of 6 (3 for each data parallel worker).
I use both data parallelism and pipeline parallelism to conduct training.
During training, we truncate the input sequence to 1024 tokens, and for input sequences that contain fewer than 1024 tokens, we concatenate multiple sequences into one long sequence to improve data efficiency.
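A small sketch of that packing step (a hypothetical helper for illustration, not the author's training code):
```python
# Pack tokenized documents into fixed-length training sequences.
def pack_sequences(tokenized_docs, max_len=1024, eos_id=2):
    packed, current = [], []
    for doc in tokenized_docs:
        doc = doc[: max_len - 1]                   # leave room for the EOS separator
        if current and len(current) + len(doc) + 1 > max_len:
            packed.append(current)                 # flush the filled sequence
            current = []
        current = current + doc + [eos_id]         # join documents with EOS
    if current:
        packed.append(current)
    return packed
```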
# References
# Metrics
SOON
## BibTeX entry and citation info
```bibtex
@article{
title={GPT2xl is underrated task solver},
author={Nickolich Aleksandr, 5Q, datascience, Ilya Gusev, Alex Kukushkin, Karina Romanova, Arseniy Shahmatov, Maksim Gersimenko},
year={2023}
}
``` |
timm/wide_resnet50_2.tv_in1k | timm | 2024-02-10T23:42:09Z | 735 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"arxiv:1605.07146",
"arxiv:1512.03385",
"license:bsd-3-clause",
"region:us"
] | image-classification | 2023-04-05T20:40:59Z | ---
license: bsd-3-clause
library_name: timm
tags:
- image-classification
- timm
---
# Model card for wide_resnet50_2.tv_in1k
A Wide-ResNet-B image classification model.
This model features:
* ReLU activations
* single layer 7x7 convolution with pooling
* 1x1 convolution shortcut downsample
Trained on ImageNet-1k, original torchvision model weights.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 68.9
- GMACs: 11.4
- Activations (M): 14.4
- Image size: 224 x 224
- **Papers:**
- Wide Residual Networks: https://arxiv.org/abs/1605.07146
- Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/pytorch/vision
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('wide_resnet50_2.tv_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'wide_resnet50_2.tv_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'wide_resnet50_2.tv_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |
## Citation
```bibtex
@article{DBLP:journals/corr/ZagoruykoK16,
author = {Sergey Zagoruyko and
Nikos Komodakis},
title = {Wide Residual Networks},
journal = {CoRR},
volume = {abs/1605.07146},
year = {2016},
url = {http://arxiv.org/abs/1605.07146},
archivePrefix = {arXiv},
eprint = {1605.07146},
timestamp = {Mon, 13 Aug 2018 16:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/ZagoruykoK16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@article{He2015,
author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {arXiv preprint arXiv:1512.03385},
year = {2015}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
xiaol/RWKV-v4-raven-14B-one-state | xiaol | 2023-10-05T04:12:00Z | 735 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"rwkv",
"dataset:PygmalionAI/PIPPA",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:ehartford/dolphin",
"dataset:cais/mmlu",
"dataset:OpenLeecher/Teatime",
"dataset:BAAI/COIG-PC",
"dataset:natural_questions",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-09-27T04:15:33Z | ---
license: apache-2.0
datasets:
- PygmalionAI/PIPPA
- Norquinal/claude_multiround_chat_30k
- ehartford/dolphin
- cais/mmlu
- OpenLeecher/Teatime
- BAAI/COIG-PC
- natural_questions
---
# RWKV 14B one state model
Finetuned on instruction datasets; can do role play. Built for the Open LLM leaderboard, with improved MMLU training datasets.
This is a Hugging Face-formatted model.
The checkpoint can be found here: https://huggingface.co/xiaol/Model_zoo/blob/main/rwkv-raven-14B-v4-one-state.pth
and you need to use the new vocab file: https://huggingface.co/xiaol/Model_zoo/blob/main/20B_tokenizer_new_inference.json
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
#model_id = "xiaol/Huggingface-RWKV-claude-for-mobile-v4-world-1.5B-16k"
model_id = "xiaol/RWKV-v4-raven-14B-one-state"  # this repo
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
#model = model.half() #1.5B need fp32
#model = torch.compile(model) #need pytorch 2.0 and linux
model.to(0)
tokenizer = AutoTokenizer.from_pretrained(model_id)
question = "Tell me about ravens"
prompt = f"### Instruction: {question}\n### Response:"
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=100)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
### Training details
https://wandb.ai/one-/out14B-one/runs/uhomhbgg/workspace
### Test case
https://rwkv.ai-creator.net/st
https://rwkv-next-web.ai-creator.net/ |
THUDM/agentlm-7b | THUDM | 2023-10-20T03:41:25Z | 735 | 45 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:THUDM/AgentInstruct",
"arxiv:2310.12823",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-10-16T08:36:28Z | ---
datasets:
- THUDM/AgentInstruct
---
## AgentLM-7B
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/AgentInstruct" target="_blank">[Dataset] </a> • 💻 <a href="https://github.com/THUDM/AgentTuning" target="_blank">[Github Repo]</a> • 📌 <a href="https://THUDM.github.io/AgentTuning/" target="_blank">[Project Page]</a> • 📃 <a href="https://arxiv.org/abs/2310.12823" target="_blank">[Paper]</a>
</p>
**AgentTuning** represents the very first attempt to instruction-tune LLMs using interaction trajectories across multiple agent tasks. Evaluation results indicate that AgentTuning endows LLMs with agent capabilities that generalize robustly to unseen agent tasks while preserving general language abilities. We have open-sourced the AgentInstruct dataset and AgentLM.
## Models
**AgentLM** models are produced by mixed training on the AgentInstruct and ShareGPT datasets, starting from Llama-2-chat models.
The models follow the conversation format of [Llama-2-chat](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), with the system prompt fixed as
```
You are a helpful, respectful and honest assistant.
```
7B, 13B, and 70B models are available on the Hugging Face model hub.
|Model|Huggingface Repo|
|---|---|
|AgentLM-7B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-7b) |
|AgentLM-13B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-13b) |
|AgentLM-70B| [🤗Huggingface Repo](https://huggingface.co/THUDM/agentlm-70b) |
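For reference, a minimal usage sketch following the Llama-2-chat prompt format above; the generation settings are illustrative assumptions, not official recommendations:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "THUDM/agentlm-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

system = "You are a helpful, respectful and honest assistant."
user = "Plan the steps to check disk usage on a Linux server."
# Llama-2-chat layout: the system prompt sits inside <<SYS>> tags in the
# first [INST] block; the tokenizer adds the leading <s> automatically.
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```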
## Citation
If you find our work useful, please consider citing AgentTuning:
```
@misc{zeng2023agenttuning,
title={AgentTuning: Enabling Generalized Agent Abilities for LLMs},
author={Aohan Zeng and Mingdao Liu and Rui Lu and Bowen Wang and Xiao Liu and Yuxiao Dong and Jie Tang},
year={2023},
eprint={2310.12823},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
EmbeddedLLM/Mistral-7B-Merge-02-v0 | EmbeddedLLM | 2024-01-20T16:05:26Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-12-20T13:14:41Z | ---
license: apache-2.0
language:
- en
tags:
- merge
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- Intel/neural-chat-7b-v3-3
---
# Model Description
This is an experiment to compare merging 2 models using DARE TIES versus SLERP 🦙
We are mainly interested in comparing against [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
The 2 models involved in the merge are as follows:
1. [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
2. [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
- base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
The yaml config file for the merge is:
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
weight: 0.5
density: 0.5
- model: Intel/neural-chat-7b-v3-3
parameters:
weight: 0.5
density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
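For intuition, `density` above controls the DARE step: delta weights (fine-tuned minus base) are randomly dropped and the survivors rescaled before merging. A toy sketch of the idea, not mergekit's actual implementation:
```python
import torch

def dare(delta: torch.Tensor, density: float = 0.5) -> torch.Tensor:
    """Drop delta weights with probability 1 - density, rescale the rest."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density  # rescaling keeps the expected value unchanged

# Conceptually, for each parameter tensor:
# merged = base + 0.5 * dare(hermes - base) + 0.5 * dare(neural_chat - base)
```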
# Open LLM Leaderboard
Note that with more tuning DARE TIES might achieve better results.
| | DARE TIES | SLERP |
|------------|-----------|-------|
| Average | 70.69 | 71.38 |
| ARC | 67.49 | 68.09 |
| HellaSwag | 85.78 | 86.2 |
| MMLU | 64.1 | 64.26 |
| TruthfulQA | 60.52 | 62.78 |
| Winogrande | 79.01 | 79.16 |
| GSM8K | 67.25 | 67.78 |
|
internlm/internlm2-chat-20b-4bits | internlm | 2024-04-24T12:05:32Z | 735 | 6 | transformers | [
"transformers",
"pytorch",
"internlm",
"feature-extraction",
"text-generation",
"conversational",
"custom_code",
"arxiv:2306.00978",
"license:apache-2.0",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-01-15T04:24:50Z | ---
license: apache-2.0
pipeline_tag: text-generation
---
<div align="center">
<img src="https://raw.githubusercontent.com/InternLM/lmdeploy/0be9e7ab6fe9a066cfb0a09d0e0c8d2e28435e58/resources/lmdeploy-logo.svg" width="450"/>
</div>
# INT4 Weight-only Quantization and Deployment (W4A16)
LMDeploy adopts the [AWQ](https://arxiv.org/abs/2306.00978) algorithm for 4-bit weight-only quantization. Backed by a purpose-built high-performance CUDA kernel, 4-bit quantized model inference runs up to 2.4x faster than FP16.
LMDeploy supports the following NVIDIA GPUs for W4A16 inference:
- Turing (sm75): 20 series, T4
- Ampere (sm80, sm86): 30 series, A10, A16, A30, A100
- Ada Lovelace (sm89): 40 series
Before proceeding with the quantization and inference, please ensure that lmdeploy is installed.
```shell
pip install lmdeploy[all]
```
This article comprises the following sections:
<!-- toc -->
- [Inference](#inference)
- [Evaluation](#evaluation)
- [Service](#service)
<!-- tocstop -->
## Inference
With the following code, you can perform batched offline inference with the quantized model:
```python
from lmdeploy import pipeline, TurbomindEngineConfig
engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline("internlm/internlm2-chat-20b-4bits", backend_config=engine_config)
response = pipe(["Hi, pls intro yourself", "Shanghai is"])
print(response)
```
For more information about the pipeline parameters, please refer to [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/inference/pipeline.md).
## Evaluation
Please see [this guide](https://opencompass.readthedocs.io/en/latest/advanced_guides/evaluation_turbomind.html) for model evaluation with LMDeploy.
## Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
```shell
lmdeploy serve api_server internlm/internlm2-chat-20b-4bits --backend turbomind --model-format awq
```
The default port of `api_server` is `23333`. After the server is launched, you can communicate with server on terminal through `api_client`:
```shell
lmdeploy serve api_client http://0.0.0.0:23333
```
You can view and try out the `api_server` APIs online through the Swagger UI at `http://0.0.0.0:23333`, or you can read the API specification from [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
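Because the RESTful APIs are OpenAI-compatible, a plain OpenAI client can also talk to the server. A minimal sketch, assuming the server was launched as above on the default port (the `api_key` value is just a placeholder):
```python
from openai import OpenAI

client = OpenAI(base_url="http://0.0.0.0:23333/v1", api_key="none")
response = client.chat.completions.create(
    model="internlm/internlm2-chat-20b-4bits",
    messages=[{"role": "user", "content": "Hi, pls intro yourself"}],
)
print(response.choices[0].message.content)
```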
|
MaziyarPanahi/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1 | MaziyarPanahi | 2024-01-16T13:11:32Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"NousResearch/Yarn-Mistral-7b-64k",
"pytorch",
"custom_code",
"en",
"dataset:emozilla/yarn-train-tokenized-16k-mistral",
"arxiv:2309.00071",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us",
"conversational"
] | text-generation | 2024-01-16T13:06:41Z | ---
license: apache-2.0
tags:
- Safetensors
- mistral
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- NousResearch/Yarn-Mistral-7b-64k
- transformers
- pytorch
- mistral
- text-generation
- custom_code
- en
- dataset:emozilla/yarn-train-tokenized-16k-mistral
- arxiv:2309.00071
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
---
# Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1
Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [NousResearch/Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [0, 32]
- model: NousResearch/Yarn-Mistral-7b-64k
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
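For intuition, SLERP interpolates along the great circle between two weight tensors rather than linearly; `t` follows the per-filter schedule in `parameters.t` above. A toy sketch of the idea, not mergekit's actual implementation:
```python
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w0.flatten().float(), w1.flatten().float()
    # Angle between the two (normalized) weight vectors.
    cos_omega = torch.clamp((a / (a.norm() + eps)) @ (b / (b.norm() + eps)), -1.0, 1.0)
    omega = torch.acos(cos_omega)
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel tensors: fall back to linear interpolation
        out = (1 - t) * a + t * b
    else:
        out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.view_as(w0)
```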
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Yarn-Mistral-7b-64k-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
liminerity/Blur-7b-v1.2 | liminerity | 2024-03-11T18:33:33Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Blured-Ties-7B",
"freecs/ThetaWave-7B",
"conversational",
"base_model:liminerity/Blured-Ties-7B",
"base_model:freecs/ThetaWave-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-18T01:26:03Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Blured-Ties-7B
- freecs/ThetaWave-7B
base_model:
- liminerity/Blured-Ties-7B
- freecs/ThetaWave-7B
model-index:
- name: Blur-7b-v1.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.88
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.3
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 52.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liminerity/Blur-7b-v1.2
name: Open LLM Leaderboard
---
# Blur-7b-v1.2
Blur-7b-v1.2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Blured-Ties-7B](https://huggingface.co/liminerity/Blured-Ties-7B)
* [freecs/ThetaWave-7B](https://huggingface.co/freecs/ThetaWave-7B)
## 🧩 Configuration
```yaml
models:
- model: liminerity/Blured-Ties-7B
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: freecs/ThetaWave-7B
parameters:
density: 0.33
weight:
- filter: mlp
value: 0.5
- value: 0
merge_method: ties
base_model: mlabonne/NeuralBeagle14-7B
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
name: gradient-slerp-ties
```
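For intuition, the `ties` merge above works in three steps: trim small deltas, elect a majority sign per parameter, then average only the deltas that agree with that sign. A toy sketch of the idea, not mergekit's actual implementation:
```python
import torch

def ties_merge(deltas: list[torch.Tensor], density: float = 0.33) -> torch.Tensor:
    # 1) Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(d.numel() * density))
        thresh = d.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    # 2) Elect sign: the sign with the larger total mass wins per parameter.
    sign = torch.sign(stacked.sum(dim=0))
    # 3) Disjoint merge: average only the deltas agreeing with the elected sign.
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    return (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
```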
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Blur-7b-v1.2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liminerity__Blur-7b-v1.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |67.74|
|AI2 Reasoning Challenge (25-Shot)|65.36|
|HellaSwag (10-Shot) |83.88|
|MMLU (5-Shot) |63.45|
|TruthfulQA (0-shot) |60.30|
|Winogrande (5-shot) |80.58|
|GSM8k (5-shot) |52.84|
|
nbeerbower/SuperBruphin-3x7B | nbeerbower | 2024-03-04T13:08:03Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"moe",
"conversational",
"base_model:nbeerbower/bruphin-epsilon",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T04:10:30Z | ---
license: apache-2.0
tags:
- mergekit
- moe
base_model:
- nbeerbower/bruphin-epsilon
- FelixChao/WestSeverus-7B-DPO-v2
- jondurbin/airoboros-m-7b-3.1.2
model-index:
- name: SuperBruphin-3x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.74
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.85
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nbeerbower/SuperBruphin-3x7B
name: Open LLM Leaderboard
---
# SuperBruphin-3x7B
This is an experimental MoE model created using the mixtral branch of [mergekit](https://github.com/cg123/mergekit).
### Models Merged
The following models were included in the merge:
* [nbeerbower/bruphin-epsilon](https://huggingface.co/nbeerbower/bruphin-epsilon)
* [FelixChao/WestSeverus-7B-DPO-v2](https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2)
* [jondurbin/airoboros-m-7b-3.1.2](https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: nbeerbower/bruphin-epsilon
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: nbeerbower/bruphin-epsilon
positive_prompts:
- "Tell a story."
- source_model: FelixChao/WestSeverus-7B-DPO-v2
positive_prompts:
- "Solve this problem."
- source_model: jondurbin/airoboros-m-7b-3.1.2
positive_prompts:
- "Write a letter."
```
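With `gate_mode: hidden`, mergekit initializes each expert's router weights from hidden-state representations of its positive prompts, so inputs resembling those prompts are routed to that expert. A minimal usage sketch with the standard transformers pipeline (generation settings are illustrative assumptions):
```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="nbeerbower/SuperBruphin-3x7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
out = pipe("Tell a story.", max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```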
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nbeerbower__SuperBruphin-3x7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.75|
|AI2 Reasoning Challenge (25-Shot)|71.16|
|HellaSwag (10-Shot) |87.74|
|MMLU (5-Shot) |64.58|
|TruthfulQA (0-shot) |66.85|
|Winogrande (5-shot) |81.53|
|GSM8k (5-shot) |70.66|
|
fblgit/UNA-34BeagleSimpleMath-32K-v1 | fblgit | 2024-01-25T16:05:13Z | 735 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"math",
"UNA",
"juanako",
"conversational",
"dataset:fblgit/simple-math",
"dataset:jondurbin/bagel-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T15:10:48Z | ---
license: apache-2.0
datasets:
- fblgit/simple-math
- jondurbin/bagel-v0.3
library_name: transformers
tags:
- math
- UNA
- juanako
---
# UNA-34BeagleSimpleMath-32K-v1
This is a fine-tuned version of fblgit/UNA-34Beagles-32K-v1 using the fblgit/simple-math dataset. Powered by The Bagel v0.3 and Yi-34B.
Trained on AXOLOTL!

### 34BEAGLES MATH EVALS
```
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6505|± |0.0131|
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml |none | 25|acc |0.7090|± |0.0133|
| | |none | 25|acc_norm|0.7329|± |0.0129|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7378|± |0.0141|
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|---------------------------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.7524|± |0.1045|
| - humanities |N/A |none | 0|acc |0.7307|± |0.0846|
| - formal_logic |Yaml |none | 0|acc |0.5873|± |0.0440|
| - high_school_european_history |Yaml |none | 0|acc |0.8667|± |0.0265|
| - high_school_us_history |Yaml |none | 0|acc |0.9167|± |0.0194|
| - high_school_world_history |Yaml |none | 0|acc |0.9114|± |0.0185|
| - international_law |Yaml |none | 0|acc |0.8430|± |0.0332|
| - jurisprudence |Yaml |none | 0|acc |0.8704|± |0.0325|
| - logical_fallacies |Yaml |none | 0|acc |0.8589|± |0.0274|
| - moral_disputes |Yaml |none | 0|acc |0.7717|± |0.0226|
| - moral_scenarios |Yaml |none | 0|acc |0.7374|± |0.0147|
| - philosophy |Yaml |none | 0|acc |0.8006|± |0.0227|
| - prehistory |Yaml |none | 0|acc |0.8549|± |0.0196|
| - professional_law |Yaml |none | 0|acc |0.5724|± |0.0126|
| - world_religions |Yaml |none | 0|acc |0.8830|± |0.0246|
| - other |N/A |none | 0|acc |0.7937|± |0.1029|
| - business_ethics |Yaml |none | 0|acc |0.7800|± |0.0416|
| - clinical_knowledge |Yaml |none | 0|acc |0.8000|± |0.0246|
| - college_medicine |Yaml |none | 0|acc |0.6936|± |0.0351|
| - global_facts |Yaml |none | 0|acc |0.5500|± |0.0500|
| - human_aging |Yaml |none | 0|acc |0.7534|± |0.0289|
| - management |Yaml |none | 0|acc |0.8447|± |0.0359|
| - marketing |Yaml |none | 0|acc |0.9316|± |0.0165|
| - medical_genetics |Yaml |none | 0|acc |0.8700|± |0.0338|
| - miscellaneous |Yaml |none | 0|acc |0.8953|± |0.0109|
| - nutrition |Yaml |none | 0|acc |0.8170|± |0.0221|
| - professional_accounting |Yaml |none | 0|acc |0.6277|± |0.0288|
| - professional_medicine |Yaml |none | 0|acc |0.8015|± |0.0242|
| - virology |Yaml |none | 0|acc |0.5723|± |0.0385|
| - social_sciences |N/A |none | 0|acc |0.8274|± |0.0667|
| - econometrics |Yaml |none | 0|acc |0.6140|± |0.0458|
| - high_school_geography |Yaml |none | 0|acc |0.8889|± |0.0224|
| - high_school_government_and_politics|Yaml |none | 0|acc |0.9482|± |0.0160|
| - high_school_macroeconomics |Yaml |none | 0|acc |0.7897|± |0.0207|
| - high_school_microeconomics |Yaml |none | 0|acc |0.8697|± |0.0219|
| - high_school_psychology |Yaml |none | 0|acc |0.8899|± |0.0134|
| - human_sexuality |Yaml |none | 0|acc |0.8550|± |0.0309|
| - professional_psychology |Yaml |none | 0|acc |0.7745|± |0.0169|
| - public_relations |Yaml |none | 0|acc |0.7000|± |0.0439|
| - security_studies |Yaml |none | 0|acc |0.7796|± |0.0265|
| - sociology |Yaml |none | 0|acc |0.8657|± |0.0241|
| - us_foreign_policy |Yaml |none | 0|acc |0.8900|± |0.0314|
| - stem |N/A |none | 0|acc |0.6708|± |0.1236|
| - abstract_algebra |Yaml |none | 0|acc |0.4900|± |0.0502|
| - anatomy |Yaml |none | 0|acc |0.7259|± |0.0385|
| - astronomy |Yaml |none | 0|acc |0.8487|± |0.0292|
| - college_biology |Yaml |none | 0|acc |0.8750|± |0.0277|
| - college_chemistry |Yaml |none | 0|acc |0.5200|± |0.0502|
| - college_computer_science |Yaml |none | 0|acc |0.6200|± |0.0488|
| - college_mathematics |Yaml |none | 0|acc |0.4300|± |0.0498|
| - college_physics |Yaml |none | 0|acc |0.5686|± |0.0493|
| - computer_security |Yaml |none | 0|acc |0.7800|± |0.0416|
| - conceptual_physics |Yaml |none | 0|acc |0.7404|± |0.0287|
| - electrical_engineering |Yaml |none | 0|acc |0.7172|± |0.0375|
| - elementary_mathematics |Yaml |none | 0|acc |0.6720|± |0.0242|
| - high_school_biology |Yaml |none | 0|acc |0.9032|± |0.0168|
| - high_school_chemistry |Yaml |none | 0|acc |0.6256|± |0.0341|
| - high_school_computer_science |Yaml |none | 0|acc |0.7800|± |0.0416|
| - high_school_mathematics |Yaml |none | 0|acc |0.4667|± |0.0304|
| - high_school_physics |Yaml |none | 0|acc |0.5033|± |0.0408|
| - high_school_statistics |Yaml |none | 0|acc |0.6435|± |0.0327|
| - machine_learning |Yaml |none | 0|acc |0.5536|± |0.0472|
| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.7524|± |0.1045|
| - humanities |N/A |none | 0|acc |0.7307|± |0.0846|
| - other |N/A |none | 0|acc |0.7937|± |0.1029|
| - social_sciences|N/A |none | 0|acc |0.8274|± |0.0667|
| - stem |N/A |none | 0|acc |0.6708|± |0.1236|
```
### 34BEAGLES (The Base Model)
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml |none | 25|acc |0.7039|± |0.0133|
| | |none | 25|acc_norm|0.7321|± |0.0129|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7387|± |0.0141|
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6399|± |0.0132|
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|---------------------------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.7477|± |0.1079|
| - humanities |N/A |none | 0|acc |0.7188|± |0.0855|
| - formal_logic |Yaml |none | 0|acc |0.5794|± |0.0442|
| - high_school_european_history |Yaml |none | 0|acc |0.8667|± |0.0265|
| - high_school_us_history |Yaml |none | 0|acc |0.9069|± |0.0204|
| - high_school_world_history |Yaml |none | 0|acc |0.9072|± |0.0189|
| - international_law |Yaml |none | 0|acc |0.8264|± |0.0346|
| - jurisprudence |Yaml |none | 0|acc |0.8796|± |0.0315|
| - logical_fallacies |Yaml |none | 0|acc |0.8405|± |0.0288|
| - moral_disputes |Yaml |none | 0|acc |0.7746|± |0.0225|
| - moral_scenarios |Yaml |none | 0|acc |0.6972|± |0.0154|
| - philosophy |Yaml |none | 0|acc |0.8006|± |0.0227|
| - prehistory |Yaml |none | 0|acc |0.8580|± |0.0194|
| - professional_law |Yaml |none | 0|acc |0.5645|± |0.0127|
| - world_religions |Yaml |none | 0|acc |0.8713|± |0.0257|
| - other |N/A |none | 0|acc |0.7950|± |0.1057|
| - business_ethics |Yaml |none | 0|acc |0.7700|± |0.0423|
| - clinical_knowledge |Yaml |none | 0|acc |0.8038|± |0.0244|
| - college_medicine |Yaml |none | 0|acc |0.7110|± |0.0346|
| - global_facts |Yaml |none | 0|acc |0.5500|± |0.0500|
| - human_aging |Yaml |none | 0|acc |0.7265|± |0.0299|
| - management |Yaml |none | 0|acc |0.8544|± |0.0349|
| - marketing |Yaml |none | 0|acc |0.9444|± |0.0150|
| - medical_genetics |Yaml |none | 0|acc |0.8800|± |0.0327|
| - miscellaneous |Yaml |none | 0|acc |0.8978|± |0.0108|
| - nutrition |Yaml |none | 0|acc |0.8170|± |0.0221|
| - professional_accounting |Yaml |none | 0|acc |0.6312|± |0.0288|
| - professional_medicine |Yaml |none | 0|acc |0.8051|± |0.0241|
| - virology |Yaml |none | 0|acc |0.5602|± |0.0386|
| - social_sciences |N/A |none | 0|acc |0.8297|± |0.0664|
| - econometrics |Yaml |none | 0|acc |0.6140|± |0.0458|
| - high_school_geography |Yaml |none | 0|acc |0.8939|± |0.0219|
| - high_school_government_and_politics|Yaml |none | 0|acc |0.9482|± |0.0160|
| - high_school_macroeconomics |Yaml |none | 0|acc |0.7974|± |0.0204|
| - high_school_microeconomics |Yaml |none | 0|acc |0.8655|± |0.0222|
| - high_school_psychology |Yaml |none | 0|acc |0.8936|± |0.0132|
| - human_sexuality |Yaml |none | 0|acc |0.8473|± |0.0315|
| - professional_psychology |Yaml |none | 0|acc |0.7778|± |0.0168|
| - public_relations |Yaml |none | 0|acc |0.7000|± |0.0439|
| - security_studies |Yaml |none | 0|acc |0.7837|± |0.0264|
| - sociology |Yaml |none | 0|acc |0.8657|± |0.0241|
| - us_foreign_policy |Yaml |none | 0|acc |0.8900|± |0.0314|
| - stem |N/A |none | 0|acc |0.6641|± |0.1291|
| - abstract_algebra |Yaml |none | 0|acc |0.4800|± |0.0502|
| - anatomy |Yaml |none | 0|acc |0.7407|± |0.0379|
| - astronomy |Yaml |none | 0|acc |0.8618|± |0.0281|
| - college_biology |Yaml |none | 0|acc |0.8611|± |0.0289|
| - college_chemistry |Yaml |none | 0|acc |0.5300|± |0.0502|
| - college_computer_science |Yaml |none | 0|acc |0.6100|± |0.0490|
| - college_mathematics |Yaml |none | 0|acc |0.3800|± |0.0488|
| - college_physics |Yaml |none | 0|acc |0.5588|± |0.0494|
| - computer_security |Yaml |none | 0|acc |0.8000|± |0.0402|
| - conceptual_physics |Yaml |none | 0|acc |0.7319|± |0.0290|
| - electrical_engineering |Yaml |none | 0|acc |0.7034|± |0.0381|
| - elementary_mathematics |Yaml |none | 0|acc |0.6587|± |0.0244|
| - high_school_biology |Yaml |none | 0|acc |0.8935|± |0.0175|
| - high_school_chemistry |Yaml |none | 0|acc |0.6305|± |0.0340|
| - high_school_computer_science |Yaml |none | 0|acc |0.7700|± |0.0423|
| - high_school_mathematics |Yaml |none | 0|acc |0.4296|± |0.0302|
| - high_school_physics |Yaml |none | 0|acc |0.5166|± |0.0408|
| - high_school_statistics |Yaml |none | 0|acc |0.6528|± |0.0325|
| - machine_learning |Yaml |none | 0|acc |0.5536|± |0.0472|
| Groups |Version|Filter|n-shot|Metric|Value | |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu |N/A |none | 0|acc |0.7477|± |0.1079|
| - humanities |N/A |none | 0|acc |0.7188|± |0.0855|
| - other |N/A |none | 0|acc |0.7950|± |0.1057|
| - social_sciences|N/A |none | 0|acc |0.8297|± |0.0664|
| - stem |N/A |none | 0|acc |0.6641|± |0.1291|
```
So I guess, SimpleMath: 2+2=4 4-1=3 SIMPLE-MATH .. works! :) |
wang7776/Mistral-7B-Instruct-v0.2-attention-sparsity-20 | wang7776 | 2024-02-05T18:23:41Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2306.11695",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-25T18:18:58Z | ---
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: false
---
# Overview
This model has been pruned to 20% sparsity using the [Wanda pruning method](https://arxiv.org/abs/2306.11695) on attention layers. This method requires no retraining or weight updates and still achieves competitive performance. A link to the base model can be found [here](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
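For intuition, Wanda scores each weight by its magnitude times the L2 norm of the corresponding input activation over calibration data, then drops the lowest-scoring weights within each output row; no retraining or weight update is needed. A toy sketch of the criterion, not the actual pruning code used here:
```python
import torch

def wanda_prune(weight: torch.Tensor, calib_inputs: torch.Tensor,
                sparsity: float = 0.2) -> torch.Tensor:
    """weight: (out, in); calib_inputs: (n_samples, in) calibration activations."""
    x_norm = calib_inputs.norm(p=2, dim=0)        # per-input-feature L2 norm
    scores = weight.abs() * x_norm                # Wanda score: |W| * ||X||
    k = int(weight.shape[1] * sparsity)           # weights to drop per output row
    drop = torch.topk(scores, k, dim=1, largest=False).indices
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, drop, False)
    return weight * mask
```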
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:
```
pip install git+https://github.com/huggingface/transformers
```
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed. |
Steelskull/Lumosia-v2-MoE-4x10.7 | Steelskull | 2024-04-23T00:03:03Z | 735 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"Solar Moe",
"Solar",
"Lumosia",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-26T07:04:42Z | ---
license: apache-2.0
tags:
- Solar Moe
- Solar
- Lumosia
pipeline_tag: text-generation
model-index:
- name: Lumosia-v2-MoE-4x10.7
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Lumosia-v2-MoE-4x10.7
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Lumosia-v2-MoE-4x10.7
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Lumosia-v2-MoE-4x10.7
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.48
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Lumosia-v2-MoE-4x10.7
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Lumosia-v2-MoE-4x10.7
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Steelskull/Lumosia-v2-MoE-4x10.7
name: Open LLM Leaderboard
---
# Lumosia-v2-MoE-4x10.7

The Lumosia Series upgraded with Lumosia V2.
# What's New in Lumosia V2?
Lumosia V2 takes the original vision of being an "all-rounder" and refines it with more nuanced capabilities:
- **Topic/Prompt Based Approach:** diverging from the keyword-based approach of its counterpart, Umbra.
- **Context and Coherence:** a base context of an 8k scrolling window, with the ability to maintain coherence up to 16k.
- **Balanced and Versatile:** the core ethos of Lumosia V2 is balance; it's designed to be your go-to assistant.
- **Experimentation and User-Centric Development:** Lumosia V2 remains an experimental model, a mosaic of the best-performing Solar models (selected based on user experience). This version is a testament to the idea that innovation is a journey, not a destination.
Template:
```
### System:
### USER:{prompt}
### Assistant:
```
Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```
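For convenience, here is a minimal helper that renders this template; the function name and the optional system string are illustrative, not part of the original card:
```python
def format_prompt(prompt: str, system: str = "") -> str:
    # Mirrors the template above: system header, user turn, assistant cue.
    return f"### System:\n{system}\n### USER:{prompt}\n### Assistant:\n"
```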
## Evals:
* Avg: 73.75
* ARC: 70.39
* HellaSwag: 87.87
* MMLU: 66.45
* T-QA: 68.48
* Winogrande: 84.21
* GSM8K: 65.13
## Examples:
```
Example 1:
User:
Lumosia:
```
```
Example 2:
User:
Lumosia:
```
## 🧩 Configuration
```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts:
    negative_prompts:
  - source_model: Sao10K/Fimbulvetr-10.7B-v1 [Updated]
    positive_prompts:
    negative_prompts:
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v4 [Updated]
    positive_prompts:
    negative_prompts:
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts:
    negative_prompts:
```
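Configs in this shape are consumed by mergekit's MoE tooling; assuming a standard mergekit install, an invocation along the lines of `mergekit-moe config.yaml ./Lumosia-v2-MoE-4x10.7` would assemble the experts (shown for illustration only).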
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Steelskull/Lumosia-v2-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Steelskull__Lumosia-v2-MoE-4x10.7)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.75|
|AI2 Reasoning Challenge (25-Shot)|70.39|
|HellaSwag (10-Shot) |87.87|
|MMLU (5-Shot) |66.45|
|TruthfulQA (0-shot) |68.48|
|Winogrande (5-shot) |84.21|
|GSM8k (5-shot) |65.13|
|
amazingvince/openhermes-7b-dpo | amazingvince | 2024-01-27T21:41:16Z | 735 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-27T00:27:34Z | ---
license: apache-2.0
---
New experimental DPO tune. More details to come...
Model description
OpenHermes 2.5 Mistral 7B is a state of the art Mistral Fine-tune, a continuation of OpenHermes 2 model, which trained on additional code datasets.
Potentially the most interesting finding from training on a good ratio (est. of around 7-14% of the total dataset) of code instruction was that it has boosted several non-code benchmarks, including TruthfulQA, AGIEval, and GPT4All suite. It did however reduce BigBench benchmark score, but the net gain overall is significant.
Here, we are finetuning OpenHermes using DPO with various data meant to improve its abilities. |
zorobin/mistral-class-shishya-7b-ep3 | zorobin | 2024-01-28T04:35:39Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-28T04:35:38Z | ---
library_name: transformers
license: llama2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Gille/StrangeMerges_11-7B-slerp | Gille | 2024-03-04T21:54:22Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_10-7B-slerp",
"mlabonne/NeuralBeagle14-7B",
"base_model:Gille/StrangeMerges_10-7B-slerp",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T00:53:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_10-7B-slerp
- mlabonne/NeuralBeagle14-7B
base_model:
- Gille/StrangeMerges_10-7B-slerp
- mlabonne/NeuralBeagle14-7B
model-index:
- name: StrangeMerges_11-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.53
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_11-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.2
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_11-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.04
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_11-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.81
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_11-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_11-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.89
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_11-7B-slerp
name: Open LLM Leaderboard
---
# StrangeMerges_11-7B-slerp
StrangeMerges_11-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_10-7B-slerp](https://huggingface.co/Gille/StrangeMerges_10-7B-slerp)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Gille/StrangeMerges_10-7B-slerp
        layer_range: [0, 32]
      - model: mlabonne/NeuralBeagle14-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: Gille/StrangeMerges_10-7B-slerp
parameters:
  t:
    - filter: self_attn
      value: [0.1, 0.5, 0.3, 0.6, 0.9]
    - filter: mlp
      value: [0.9, 0.5, 0.7, 0.4, 0.1]
    - value: 0.5
dtype: bfloat16
```
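Here `t` is the slerp interpolation factor per layer group: values near 0 keep Gille/StrangeMerges_10-7B-slerp, values near 1 favor mlabonne/NeuralBeagle14-7B, with separate schedules for the self-attention and MLP weights across the 32 layers.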
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_11-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_11-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.80|
|AI2 Reasoning Challenge (25-Shot)|72.53|
|HellaSwag (10-Shot) |88.20|
|MMLU (5-Shot) |65.04|
|TruthfulQA (0-shot) |69.81|
|Winogrande (5-shot) |82.32|
|GSM8k (5-shot) |70.89|
|
daekeun-ml/phi-2-upscaled-4B-instruct-v0.1 | daekeun-ml | 2024-02-09T16:50:48Z | 735 | 3 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"dataset:wikipedia",
"dataset:Open-Orca/OpenOrca",
"arxiv:2312.15166",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-30T23:19:06Z | ---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- Intel/orca_dpo_pairs
- wikipedia
- Open-Orca/OpenOrca
inference: false
---
# phi-2-upscaled-4B-instruct-v0.1
## Model Details
This model underwent continued pre-training and fine-tuning (instruction tuning) using the depth up-scaling (DUS) technique introduced by Upstage.
### DUS (Depth Up-Scaling) and continued pre-training
Similar to the methodology disclosed in the paper, we expanded from 32 transformer blocks to 48 blocks and then continued pre-training with the public dataset. Pre-training was performed for 3 days using 4 `ml.g5.48xlarge` instances from AWS (NVIDIA A10G GPU x 32ea). For pre-training, we used a sample set from Wikipedia.
Note that performance is not guaranteed since only a small number of datasets were used for the experiment. The training set contains only around 1.5 million samples after tokenization.
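For illustration, a depth up-scaling step in the spirit of the SOLAR recipe might look like the sketch below; the split indices (the first 24 blocks spliced with the last 24) are an assumption, not the exact configuration used here.
```python
# Minimal DUS sketch: duplicate the 32-block phi-2 and splice overlapping
# halves into a 48-block model (indices are illustrative assumptions).
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")  # needs native Phi support
layers = model.model.layers  # 32 transformer blocks

first = [copy.deepcopy(l) for l in layers[:24]]   # blocks 0..23
second = [copy.deepcopy(l) for l in layers[8:]]   # blocks 8..31
model.model.layers = nn.ModuleList(first + second)  # 24 + 24 = 48 blocks
model.config.num_hidden_layers = 48

model.save_pretrained("phi-2-upscaled-4b")  # then continue pre-training
```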
For distributed training, all weights were trained without adapter techniques, and sharding parallelization was performed with ZeRO-2. The presets are as follows.
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto"
}
```
Some hyperparameters are listed below.
```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```
### Fine-tuning
After performing pre-training, instruction tuning and alignment tuning were performed sequentially. This process only took about 10 hours using AWS `ml.g5.24xlarge` (NVIDIA A10G GPU x 4ea). The dataset used for instruction tuning is a sample set of the OpenOrca dataset, and the dataset used for alignment tuning is Intel's orca_dpo_pairs dataset.
All fine-tuning was performed with QLoRA, and the batch sizes were set to 3 and 1, respectively. We used a context length of 1,024. 2,048 is also possible, but applying DPO often runs out of memory on a 24GB GPU, so we settled on 1,024.
Please see below for relevant code snippets.
```python
peft_config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=["q_proj", "k_proj", "v_proj", "fc1", "fc2"],
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
)
training_arguments = TrainingArguments(
output_dir="logs",
num_train_epochs=1,
per_device_train_batch_size=batch_size,
gradient_accumulation_steps=4,
optim="paged_adamw_8bit",
learning_rate=3e-4,
weight_decay=0.001,
bf16=True,
max_grad_norm=0.3,
max_steps=-1,
warmup_ratio=0.03,
group_by_length=True,
lr_scheduler_type="cosine",
report_to="wandb", ...
)
```
### References
- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
- Paper: [SOLAR 10.7B](https://arxiv.org/abs/2312.15166)
## How to Get Started with the Model
Since this model uses ChatGPT's ChatML template, `<|im_start|>` and `<|im_end|>` tokens were added.
You can use Hugging Face's chat template to create the prompt, but you can also create the prompt yourself with the code snippet below.
```python
def create_inference_prompt(text):
    string = f"""<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{text}<|im_end|>
<|im_start|>assistant
"""
    return string
```
If you want to simply see the inference results, please use the code snippet below.
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
import torch
torch.set_default_device("cuda")
model_path = "daekeun-ml/phi-2-upscaled-4B-instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype="auto",
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
model_path,
use_fast=True,
trust_remote_code=True
)
# Format prompt
message = [
{"role": "system", "content": "You are a helpful AI assistant. Generate appropriate answers to given questions."},
{"role": "user", "content": "What is a Large Language Model?"}
]
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, top_p=0.9, temperature=0.5, repetition_penalty=1.2)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Notes
### License
Apache 2.0. The license of phi-2 is MIT, but the license of the Orca dataset used for training is Apache 2.0.
### Caution
This model was created as a personal experiment, unrelated to the organization I work for. The model may not operate correctly because separate verification was not performed. Please be careful unless it is for personal experimentation or PoC (Proof of Concept)! |
vanillaOVO/supermario_v3 | vanillaOVO | 2024-04-04T07:09:26Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T04:47:44Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
---
This is a merge of pre-trained language models created based on [DARE](https://arxiv.org/abs/2311.03099) using [mergekit](https://github.com/cg123/mergekit).
More descriptions of the model will be added soon.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import MistralForCausalLM, AutoTokenizer
model = MistralForCausalLM.from_pretrained("vanillaOVO/supermario_v3", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("vanillaOVO/supermario_v3")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Large language models are "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
vanillaOVO/supermario_v4 | vanillaOVO | 2024-04-04T07:09:46Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-31T04:48:38Z | ---
base_model: []
tags:
- mergekit
- merge
license: apache-2.0
---
This is a merge of pre-trained language models created based on [DARE](https://arxiv.org/abs/2311.03099) using [mergekit](https://github.com/cg123/mergekit).
More descriptions of the model will be added soon.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import MistralForCausalLM, AutoTokenizer
model = MistralForCausalLM.from_pretrained("vanillaOVO/supermario_v4", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("vanillaOVO/supermario_v4")
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Large language models are "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
unsloth/solar-10.7b-bnb-4bit | unsloth | 2024-03-22T15:02:16Z | 735 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"solar",
"bnb",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-01-31T15:51:29Z | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- unsloth
- transformers
- solar
- bnb
---
# Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
| **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
| **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
| **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
| **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
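For reference, a minimal sketch of loading this 4-bit checkpoint with Unsloth could look like this; the sequence length is an illustrative choice, not official guidance:
```python
# Load the pre-quantized 4-bit checkpoint with Unsloth's fast loader.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/solar-10.7b-bnb-4bit",
    max_seq_length=4096,   # illustrative; pick to fit your GPU
    load_in_4bit=True,
)
```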
|
nextai-team/Moe-2x7b-QA-Code | nextai-team | 2024-02-09T23:51:57Z | 735 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"code",
"finance",
"qa",
"assistant",
"reasoning",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-04T11:24:25Z | ---
library_name: transformers
tags:
- code
- finance
- qa
- assistant
- reasoning
license: apache-2.0
language:
- en
metrics:
- code_eval
---
**Model Description**
Moe-2x7b-QA-Code is a state-of-the-art language model specialized in Question Answering (QA) and code-related queries. Leveraging the Mixture of Experts (MoE) architecture, this model has been trained on a diverse dataset encompassing technical documentation, forums, and code repositories to provide accurate and context-aware responses to both technical and general questions.
***How to Use***
```python
from transformers import AutoTokenizer
import transformers
import torch
model = "nextai-team/Moe-2x7b-QA-Code"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
device_map="auto",
model_kwargs={"torch_dtype": torch.float16},
)
def generate_response(query):
    messages = [{"role": "user", "content": query}]
    prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
    return outputs[0]['generated_text']
response = generate_response("How to learn coding. Please provide a step by step procedure")
print(response)
```
**Intended Use**
This model is intended for developers, data scientists, and researchers seeking to integrate sophisticated natural language understanding and code generation functionalities into their applications. Ideal use cases include but are not limited to:
- Automated coding assistance
- Technical support bots
- Educational tools for learning programming
- Enhancing code review processes
**Model Architecture**
The model employs a Mixture of Experts (MoE) architecture, which allows it to efficiently manage its vast number of parameters for specialized tasks. This architecture facilitates the model's ability to discern subtle nuances in programming languages and natural language queries, leading to more accurate code generation and question-answering performance.
**Performance**
The model demonstrates significant improvements in accuracy and relevance over its predecessor, particularly in complex coding scenarios and detailed technical queries. ***Benchmarks and performance metrics can be provided upon request.***
**Limitations and Bias**
This model, like any other, has its limitations. It may exhibit biases inherent in the training data or struggle with questions outside its training scope. Users should critically assess the model's outputs, especially for sensitive or critical applications.
**Training Data**
The Moe-2x7b-QA-Code model was trained on a curated dataset comprising technical documentation, Stack Overflow posts, GitHub repositories, and other code-related content. This extensive training set ensures the model's proficiency in understanding and generating code-related content alongside general language understanding.
**Training Procedure**
The model was trained using a Mixture of Experts (MoE) approach, allowing it to dynamically leverage different subsets of parameters for different types of input data. This method enhances the model's capacity and efficiency, enabling it to excel in a wide range of QA and coding tasks.
**Model Architecture**
Moe-2x7b-QA-Code employs an advanced MoE architecture with 2x7 billion parameters, optimized for high performance in QA and coding tasks. This architecture enables the model to efficiently process and generate accurate responses to complex queries.
**Contact**
https://nextai.co.in |
mtgv/MobileVLM_V2-3B | mtgv | 2024-02-07T08:54:59Z | 735 | 5 | transformers | [
"transformers",
"pytorch",
"mobilevlm",
"text-generation",
"MobileVLM V2",
"arxiv:2402.03766",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-06T09:14:00Z | ---
license: apache-2.0
tags:
- MobileVLM V2
---
## Model Summary
MobileVLM V2 is a family of significantly improved vision language models upon MobileVLM, which proves that a delicate orchestration of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich high-quality dataset curation can substantially benefit VLMs’ performance. Specifically, MobileVLM V2 1.7B achieves better or on-par performance on standard VLM benchmarks compared with much larger VLMs at the 3B scale. Notably, MobileVLM_V2-3B model outperforms a large variety of VLMs at the 7B+ scale.
The MobileVLM_V2-3B was built on our [MobileLLaMA-2.7B-Chat](https://huggingface.co/mtgv/MobileLLaMA-2.7B-Chat) to facilitate the off-the-shelf deployment.
## Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: [MobileVLM V2: Faster and Stronger Baseline for Vision Language Model](https://arxiv.org/abs/2402.03766)
## How to Get Started with the Model
Inference examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).
|
Eric111/MarcoHermes | Eric111 | 2024-02-09T18:54:13Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"AtAndDev/CapybaraMarcoroni-7B",
"eren23/DistilHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-09T18:50:18Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- AtAndDev/CapybaraMarcoroni-7B
- eren23/DistilHermes-2.5-Mistral-7B
---
# MarcoHermes
MarcoHermes is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [AtAndDev/CapybaraMarcoroni-7B](https://huggingface.co/AtAndDev/CapybaraMarcoroni-7B)
* [eren23/DistilHermes-2.5-Mistral-7B](https://huggingface.co/eren23/DistilHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: AtAndDev/CapybaraMarcoroni-7B
        layer_range: [0, 32]
      - model: eren23/DistilHermes-2.5-Mistral-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: AtAndDev/CapybaraMarcoroni-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
``` |
eren23/dpo-binarized-NeuralTrix-7B | eren23 | 2024-03-05T15:46:07Z | 735 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversation",
"text-generation-inference",
"CultriX/NeuralTrix-7B-dpo",
"dpo",
"merge",
"en",
"dataset:argilla/OpenHermes2.5-dpo-binarized-alpha",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-11T19:23:46Z | ---
language:
- en
license: apache-2.0
tags:
- conversation
- text-generation-inference
- CultriX/NeuralTrix-7B-dpo
- dpo
- merge
datasets:
- argilla/OpenHermes2.5-dpo-binarized-alpha
pipeline_tag: text-generation
model-index:
- name: dpo-binarized-NeuralTrix-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/dpo-binarized-NeuralTrix-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/dpo-binarized-NeuralTrix-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/dpo-binarized-NeuralTrix-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 79.07
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/dpo-binarized-NeuralTrix-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/dpo-binarized-NeuralTrix-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/dpo-binarized-NeuralTrix-7B
name: Open LLM Leaderboard
---
DPO-finetuned CultriX/NeuralTrix-7B-dpo using argilla/OpenHermes2.5-dpo-binarized-alpha.
The Argilla DPO binarized pairs dataset is built on top of https://huggingface.co/datasets/teknium/OpenHermes-2.5 using https://github.com/argilla-io/distilabel, if you are interested.
Thanks for the great data sources.
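For illustration, a minimal sketch of this kind of DPO run with TRL might look like the following; hyperparameters are placeholders, and the dataset is assumed to be pre-mapped to plain `prompt`/`chosen`/`rejected` string columns:
```python
# Sketch of a DPO fine-tune over the binarized preference pairs (not the
# author's exact setup; values below are illustrative).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "CultriX/NeuralTrix-7B-dpo"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

train_dataset = load_dataset("argilla/OpenHermes2.5-dpo-binarized-alpha", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # TRL clones the model as the frozen reference
    beta=0.1,         # illustrative KL penalty strength
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```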
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__dpo-binarized-NeuralTrix-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.17|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot) |88.89|
|MMLU (5-Shot) |64.09|
|TruthfulQA (0-shot) |79.07|
|Winogrande (5-shot) |84.61|
|GSM8k (5-shot) |68.01|
|
dddsaty/FusionNet_7Bx2_MoE_Ko_DPO_Adapter_Attach | dddsaty | 2024-02-14T07:02:38Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"en",
"ko",
"dataset:We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-13T09:19:43Z | ---
license: mit
datasets: We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs
language:
- en
- ko
pipeline_tag: text-generation
---
**Explanation**
- Attached a DPO-trained adapter to the base model
**Base Model**
- [TomGrc/FusionNet_7Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_v0.1)
**Adapter Base Model**
- [yanolja/KoSOLAR-10.7B-v0.3](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.3)
**Adapter Corpus**
- [We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs](https://huggingface.co/datasets/We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs)
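For illustration, attaching a DPO-trained LoRA adapter to a base model with PEFT might look like the sketch below; the adapter path is hypothetical, since this repo ships the already-attached result:
```python
# Attach a DPO-trained adapter to the base model, then merge it in.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("TomGrc/FusionNet_7Bx2_MoE_v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "path/to/dpo-adapter")  # hypothetical adapter path
model = model.merge_and_unload()  # bake adapter weights into the base for deployment
```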
**Score**
|Average|ARC|HellaSwag|MMLU|TruthfulQA|Winogrande|GSM8K|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|76.09|73.89|88.94|65.03|71.24|87.61|69.83|
**Log**
- 2024.02.13: Initial version Upload
**LICENSE**
- MIT |
RaduGabriel/SirUkrainian | RaduGabriel | 2024-03-06T08:29:21Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"RaduGabriel/MUZD",
"RaduGabriel/Mistral-Instruct-Ukrainian-SFT",
"Radu1999/MisterUkrainianDPO",
"CultriX/NeuralTrix-7B-dpo",
"base_model:RaduGabriel/MUZD",
"base_model:RaduGabriel/Mistral-Instruct-Ukrainian-SFT",
"base_model:Radu1999/MisterUkrainianDPO",
"base_model:CultriX/NeuralTrix-7B-dpo",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-15T15:37:44Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- RaduGabriel/MUZD
- RaduGabriel/Mistral-Instruct-Ukrainian-SFT
- Radu1999/MisterUkrainianDPO
- CultriX/NeuralTrix-7B-dpo
base_model:
- RaduGabriel/MUZD
- RaduGabriel/Mistral-Instruct-Ukrainian-SFT
- Radu1999/MisterUkrainianDPO
- CultriX/NeuralTrix-7B-dpo
model-index:
- name: SirUkrainian
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.74
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
---
# SirUkrainian
SirUkrainian is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [RaduGabriel/MUZD](https://huggingface.co/RaduGabriel/MUZD)
* [RaduGabriel/Mistral-Instruct-Ukrainian-SFT](https://huggingface.co/RaduGabriel/Mistral-Instruct-Ukrainian-SFT)
* [Radu1999/MisterUkrainianDPO](https://huggingface.co/Radu1999/MisterUkrainianDPO)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
## 🧩 Configuration
```yaml
models:
  - model: RaduGabriel/MUZD
    parameters:
      weight: 0.3
  - model: RaduGabriel/Mistral-Instruct-Ukrainian-SFT
    parameters:
      weight: 0.3
  - model: Radu1999/MisterUkrainianDPO
    parameters:
      weight: 0.1
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      weight: 0.3
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
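With `task_arithmetic`, the merged weights are (roughly) the base model plus a weighted sum of each model's delta from it: W_merged = W_base + Σ wᵢ · (Wᵢ − W_base), using the weights listed above.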
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "RaduGabriel/SirUkrainian"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.bfloat16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RaduGabriel__SirUkrainian)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.50|
|AI2 Reasoning Challenge (25-Shot)|67.32|
|HellaSwag (10-Shot) |85.54|
|MMLU (5-Shot) |63.14|
|TruthfulQA (0-shot) |68.74|
|Winogrande (5-shot) |81.53|
|GSM8k (5-shot) |56.71|
|
alnrg2arg/blockchainlabs_tinyllama_fusion_LHK_yunkong | alnrg2arg | 2024-02-19T00:41:05Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T00:21:44Z | ---
license: mit
---
This model is based on the fusion strategy offered by Fanqi Wan (https://github.com/fanqiwan/FuseLLM).
Three models are fused together:
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Blending model 1: HanNayeoniee/LHK_DPO_v1
- Blending model 2: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
This model will later be optimized with Laser and DPO.
This project aims to build an on-device sLM (small language model). We are running experiments on the models. |
MaziyarPanahi/Franken-MoE-18B-v0.1 | MaziyarPanahi | 2024-02-20T10:13:49Z | 735 | 2 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"moe",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-19T15:18:37Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- merge
- moe
---
## Examples
describe about pros and cons of docker system.
```
Answer:
Docker is an open-source platform for developing, shipping, and running applications within containers. It offers several advantages and disadvantages, which we will discuss below:
Pros (Advantages) of Docker System:
1. Isolation: Each container runs in its own isolated environment with its own operating system, libraries, and dependencies. This ensures better security and stability by preventing conflicts between different application requirements.
2. Portability: Docker images can be easily shared across various platforms and environments due to their standardized format. Developers can work on their code locally and deploy it seamlessly to production without worrying much about compatibility issues.
3. Resource Efficiency: Containers use fewer resources compared to traditional virtual machines since they share the host OS kernel. This results in faster startup times and lower memory consumption.
4. Automation & Scaling: Docker provides tools like Docker Compose and Kubernetes for managing multi-container applications. These tools enable easy automation of deployment, scaling, and management tasks.
5. Version Control: Docker integrates well with version control systems such as Git. Developers can track changes made to their applications over time and roll back to previous versions if needed.
Cons (Disadvantages) of Docker System:
1. Learning Curve: Docker has a steep learning curve, especially for those new to containerization technology. Understanding concepts like images, containers, volumes, and networks may take some time.
2. Complexity: Managing multiple containers and their interdependencies can become complex, particularly when dealing with large-scale applications. Proper monitoring and logging solutions must be implemented to maintain visibility into the system.
3. Security Concerns: While isolation helps improve security, there are still potential vulnerabilities related to misconfigurations or outdated base images. Regularly updating and patching images is essential to minimize risks.
4. Dependency Management: Maintaining and updating dependencies within containers can be challenging. Developers need to keep track of package updates and ensure that all components remain compatible.
5. Limited Persistence: By default, data stored inside a container is lost once it is stopped or removed. To address this issue, external storage solutions like volumes or bind mounts must be used. However, these add complexity to the setup.</s>
```
Hi, I need to get from New York to Los Angeles. Can you help me with that?
```
{
"tool_call": {
"arguments": {"origin": "New York", "destination": "Los Angeles"},
"name": "get_directions"
}
}</s>
```
There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step.
```
One person enters the room and kills one of the killers. This means that now there are only 2 remaining killers from the initial 3. The number of people (killers) in the room has decreased by 1 after someone entered and killed another. So, there are 2 killers left in the room.</s>
```
Wrong, should be 3.
Mark is faster than Mary , Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step
```
To solve this problem, we need to compare the relative speeds of three people: Mark, Mary, and Joe. We are given two comparisons between them.
1. Mark is faster than Mary
2. Mary is faster than Joe
Now let's analyze these statements:
If Mark is faster than Mary (statement 1), then it means that when they race each other, Mark will finish before Mary.
And if Mary is faster than Joe (statement 2), then it implies that when Mary races with Joe, she finishes before him.
However, none of these statements directly tell us about the speed comparison between Joe and Mark. To determine who is faster between them, we can use a process of elimination based on our given information.
Since Mary is slower than Mark but faster than Joe, there must be someone in between their speeds. In this case, that person is Mary. So, logically, Joe should be slower than both Mark and Mary. Therefore, we can conclude that **Joe is not faster than Mark**.</s>
```
If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultanously?
```
If one wet shirt needs 5 hours to dry when laid flat in the sun, then we can assume all shirts have a similar drying rate. Since we're increasing the number of shirts from 2 to 4 but keeping other conditions the same, we can expect the drying time to roughly double due to having twice as many items to dry simultaneously.
So, for 4 shirts, it might take approximately 5 * 2 = 10 hours to dry them in the same conditions. However, this is an estimation and actual results may vary depending on factors like airflow, humidity, and the thickness or material of the shirts.</s>
```
Wrong, should be the same.
|
eren23/ogno-monarch-jaskier-merge-7b-v2 | eren23 | 2024-03-05T15:46:15Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"eren23/ogno-monarch-jaskier-merge-7b",
"mlabonne/AlphaMonarch-7B",
"base_model:eren23/ogno-monarch-jaskier-merge-7b",
"base_model:mlabonne/AlphaMonarch-7B",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-20T18:58:49Z | ---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
- eren23/ogno-monarch-jaskier-merge-7b
- mlabonne/AlphaMonarch-7B
base_model:
- eren23/ogno-monarch-jaskier-merge-7b
- mlabonne/AlphaMonarch-7B
model-index:
- name: ogno-monarch-jaskier-merge-7b-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.87
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 77.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/ogno-monarch-jaskier-merge-7b-v2
name: Open LLM Leaderboard
---
# ogno-monarch-jaskier-merge-7b-v2
ogno-monarch-jaskier-merge-7b-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
## 🧩 Configuration
```yaml
models:
  - model: eren23/dpo-binarized-NeutrixOmnibe-7B
    # No parameters necessary for base model
  - model: eren23/ogno-monarch-jaskier-merge-7b
    parameters:
      weight: 0.7
      density: 0.6
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      weight: 0.3
      density: 0.45
merge_method: dare_ties
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
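With `dare_ties`, each model's delta from the base is randomly pruned to roughly the listed `density` (survivors are rescaled to compensate), then merged with sign consensus using the listed weights, so the 0.7-weight model dominates the final blend.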
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "eren23/ogno-monarch-jaskier-merge-7b-v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__ogno-monarch-jaskier-merge-7b-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.35|
|AI2 Reasoning Challenge (25-Shot)|72.87|
|HellaSwag (10-Shot) |89.15|
|MMLU (5-Shot) |64.77|
|TruthfulQA (0-shot) |77.76|
|Winogrande (5-shot) |84.29|
|GSM8k (5-shot) |69.29|
|
yam-peleg/Experiment21-7B | yam-peleg | 2024-02-22T11:24:54Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-22T10:19:16Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment21-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details will come in future experiments.
|
freecs/ThetaWave-7B-v0.2 | freecs | 2024-02-23T20:51:19Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-23T18:44:28Z | ---
license: apache-2.0
---
# ThetaWave-7B v0.2
More info will be added in the future about this model.
- Made By: [GR](https://twitter.com/gr_username).
- Donate: [donation](https://www.buymeacoffee.com/gr.0).
Give it a try:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("freecs/ThetaWave-7B-v0.2")
tokenizer = AutoTokenizer.from_pretrained("freecs/ThetaWave-7B-v0.2")
messages = [
{"role": "system", "content": "You are an AI assistant"},
{"role": "user", "content": "Who are you?"},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
``` |
splm/openchat-spin-slimorca-iter3 | splm | 2024-02-25T16:43:20Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-25T02:03:21Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
uproai/Rose-2x7B | uproai | 2024-02-26T11:42:52Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"maywell/PiVoT-0.1-Starling-LM-RP",
"WizardLM/WizardMath-7B-V1.1",
"base_model:maywell/PiVoT-0.1-Starling-LM-RP",
"base_model:WizardLM/WizardMath-7B-V1.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-26T11:37:05Z | ---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- maywell/PiVoT-0.1-Starling-LM-RP
- WizardLM/WizardMath-7B-V1.1
base_model:
- maywell/PiVoT-0.1-Starling-LM-RP
- WizardLM/WizardMath-7B-V1.1
---
# Rose-2x7B
Rose-2x7B is a Mixture of Experts (MoE) made with the following models using [Mergekit](https://github.com/cg123/mergekit):
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
```bash
mergekit-moe mergekit_moe.yaml merge --copy-tokenizer --device cuda --low-cpu-memory
```
## 🧩 Configuration
```yaml
base_model: uproai/ros-7b-v1
experts:
- source_model: maywell/PiVoT-0.1-Starling-LM-RP
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
tokenizer_source: union
```
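The `positive_prompts` above seed each expert's router: mergekit-moe derives gate vectors from those prompts so that tokens resembling an expert's prompts get routed to it. Below is a loose, hedged sketch of that routing idea, not mergekit's actual implementation:
```python
# Loose illustration (assumption): score a token's hidden state against one
# gate vector per expert and keep the top-k experts, Mixtral-style.
import torch

def route(token_h: torch.Tensor, gates: torch.Tensor, top_k: int = 2) -> torch.Tensor:
    scores = gates @ token_h               # (num_experts,) similarity scores
    return torch.topk(scores, k=top_k).indices
```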
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "uproai/Rose-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
saishf/Fett-uccine-11B-Experiment | saishf | 2024-03-01T08:23:48Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Epiculous/Fett-uccine-7B",
"license:agpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-02-28T14:49:46Z | ---
base_model:
- Epiculous/Fett-uccine-7B
library_name: transformers
tags:
- mergekit
- merge
license: agpl-3.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
**Scores lower than the original model! Not recommended unless you're experimenting.**
This model is an experiment in using passthrough on 7B models to merge them further into 10.7B/11B-scale models, for fun. I doubt this model will offer any benefits over the original.
### Merge Method
This model was merged using the passthrough merge method.
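Concretely, passthrough concatenates layer ranges rather than averaging weights. A tiny sketch of what the configuration below produces (assuming mergekit's half-open layer ranges):
```python
# Sketch: stacking layers 0-23 and 8-31 of the same 32-layer model means
# layers 8-23 appear twice in the merged network.
ranges = [(0, 24), (8, 32)]                               # from the config below
layers = [i for lo, hi in ranges for i in range(lo, hi)]
print(len(layers))                                        # 48 layers -> ~10.7B scale
```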
### Models Merged
The following models were included in the merge:
* [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Epiculous/Fett-uccine-7B
layer_range: [0, 24]
- sources:
- model: Epiculous/Fett-uccine-7B
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
``` |
ChrisWilson011016/5Gmsudqj7MCraRtT3ThiHXSPoWVUx6aZYcNC7o6mpdxR29ZS_vgg | ChrisWilson011016 | 2024-03-04T19:03:32Z | 735 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-02-29T13:02:33Z | Entry not found |
yam-peleg/Experiment29-7B | yam-peleg | 2024-03-02T01:14:29Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"chat",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-01T14:47:46Z | ---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment29-7B**
An experiment for testing and refining a specific training and evaluation pipeline research framework.
This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.
The goal is to evaluate the effectiveness of a new training / evaluation pipeline for LLMs.
The experiment will explore adjustments in data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.
More details will follow in future experiments.
|
core-3/kuno-dogpark-7b | core-3 | 2024-03-04T15:16:03Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Kunoichi-DPO-v2-7B",
"mlabonne/Monarch-7B",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:mlabonne/Monarch-7B",
"license:cc-by-nc-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-03T20:09:17Z | ---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/Monarch-7B
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/Monarch-7B
model-index:
- name: kuno-dogpark-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogpark-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.15
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogpark-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.07
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogpark-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.14
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogpark-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogpark-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-dogpark-7b
name: Open LLM Leaderboard
---
# kuno-dogpark-7b
kuno-dogpark-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/Monarch-7B](https://huggingface.co/mlabonne/Monarch-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
- model: mlabonne/Monarch-7B
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
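For reference, here is a minimal NumPy sketch of the slerp (spherical linear interpolation) used above; it illustrates the idea only, and the `t` curves in the config (e.g. `[0, 0.5, 0.3, 0.7, 1]` for `self_attn`) vary the blend across layer depth:
```python
# Minimal, illustrative slerp between two flattened weight tensors.
# Assumption: this mirrors the idea, not mergekit's exact implementation.
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    v0_u = v0 / (np.linalg.norm(v0) + eps)   # unit copies, used only to
    v1_u = v1 / (np.linalg.norm(v1) + eps)   # measure the angle between models
    theta = np.arccos(np.clip(np.dot(v0_u, v1_u), -1.0, 1.0))
    if theta < eps:                          # near-parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```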
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "core-3/kuno-dogpark-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_core-3__kuno-dogpark-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.82|
|AI2 Reasoning Challenge (25-Shot)|71.84|
|HellaSwag (10-Shot) |88.15|
|MMLU (5-Shot) |65.07|
|TruthfulQA (0-shot) |71.14|
|Winogrande (5-shot) |82.24|
|GSM8k (5-shot) |70.51|
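A throwaway sanity check of the reported average (not part of the evaluation harness):
```python
# The six benchmark scores average to 74.825, consistent with the reported 74.82.
scores = [71.84, 88.15, 65.07, 71.14, 82.24, 70.51]
print(sum(scores) / len(scores))  # ≈ 74.825
```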
|
wannaphong/han-llm-7b-v2 | wannaphong | 2024-03-04T18:22:38Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"th",
"dataset:pythainlp/han-instruct-dataset-v2.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T17:17:54Z | ---
library_name: transformers
license: apache-2.0
datasets:
- pythainlp/han-instruct-dataset-v2.0
language:
- th
pipeline_tag: text-generation
---
# Model Card for Han LLM 7B v2
Han LLM 7B v2 is a model trained on the Han Instruct dataset v2.0 and more. The model works with Thai.
Base model: [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)
[Google colab: Demo Han LLM 7B v2](https://colab.research.google.com/drive/1dmJf-2bKdQxRSHFl5_3SFIPUMCsPb5jm?usp=sharing)
Thanks to Kaggle for the free GPU!
## Model Details
### Model Description
The model was trained with LoRA.
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Wannaphong Phatthiyaphaibun
- **Model type:** text-generation
- **Language(s) (NLP):** Thai
- **License:** apache-2.0
- **Finetuned from model:** [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)
## Uses
Thai users
### Out-of-Scope Use
Math, coding, and languages other than Thai
## Bias, Risks, and Limitations
The model can carry biases from its dataset. Use at your own risk!
## How to Get Started with the Model
Use the code below to get started with the model.
**Example**
1.
```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="wannaphong/han-llm-7b-v2", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "แมวคืออะไร"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2,typical_p=1.)
print(outputs[0]["generated_text"])
```
output:
```
<|user|>
แมวคืออะไร</s>
<|assistant|>
แมวนั้นเป็นสัตว์เลี้ยงที่มีขนคล้ายกับหมีมีขนาดใหญ่ พวกมันมีขนสั้น และขนฟู สีที่พบได้หลากหลายที่สุดคือสีดำ ในขณะที่สีขาวพบไม่บ่อยเท่า แมวถูกบันทึกไว้ในประวัติศาสตร์ของมนุษยชาติมานานหลายพันปี มีรายงานหลักฐานจากประเทศอียิปต์ว่ามีการเลี้ยงแมวไว้เพื่อป้องกันแมลงร้าย, ประเทศอเมรันดาเพื่อจับหนูและจับแมงมุม อีกทั้งในประเทศจีนแมวยังใช้เป็นเครื่องประดับของชนชั้นสูงด้วย
และถึงแม้ว่ารูปลักษณ์ภายนอกของแมวอาจจะดูเหมือนสัตว์ป่า แต่ด้วยนิสัยที่เป็นมิตรกับมนุษย์ทำให้กลายเป็นที่รู้จักกันทั่วโลกอย่างรวดเร็ว ทั้งยังเป็นสัญลักษณ์ของความสุข และยังเป็นที่นิยมอย่างมากในโลกตะวันตก
สำหรับแมวในปัจจุบันนั้นมีมากกว่า 300 สายพันธุ์ทั่วโลก ในปัจจุบันมีการเก็บรวบรวมข้อมูลเกี่ยวกับแมวกว่า 40 ล้านตัวอย่าง หนึ่งในสายพันธุ์ยอดนิยม คือสายพันธ์ุสกอตติชโฟลด์, เบร็กกี้ และแมนคูเซา
นอกจากนั้นยังพบอีกว่าแมวมีความสามารถในการมองเห็นในที่มืดได้ดีกว่ามนุษย์ 10 เท่า พวกเขายังมีปฎิกิริยาตอบสนองต่อแสงที่ไวกว่าสุนัข 6 เท่ารวมถึงยังมีประสาทรับรสที่สูงกว่าคนถึง 2 เท่านั่นเอง
อย่างไรก็ดี ในสายพันธุกรรมของมนุษย์นั้นพบว่ามีอยู่
```
2.
```python
messages = [
{"role": "user", "content": "ถ้าอยากว่ายน้ำสามารถไปฝึกว่ายน้ำได้ที่ไหนบ้าง"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2,typical_p=1.)
print(outputs[0]["generated_text"])
```
output:
```
<|user|>
ถ้าอยากว่ายน้ำสามารถไปฝึกว่ายน้ำได้ที่ไหนบ้าง</s>
<|assistant|>
ผู้ถามสามารถฝึกหรือว่ายน้ำเล่นได้ตามสระว่ายน้ำทั่วไป หรือจะว่ายน้ำกลางแจ้ง เช่น สระในทะเล ในแม่น้ำหรือลำธารต่างๆก็ได้เช่นกัน ควรคำนึงถึงการดูแลสุขภาพและสภาพอากาศให้เหมาะสมและปลอดภัย ศึกษาข้อมูลพื้นฐานและเทคนิคการว่ายน้ำเพื่อความปลอดภัยก่อนว่ายน้ำครับ
ตอบด้วยข้อมูลจาก:https://en.wikipedia.org/wiki/Swimming
คำตอบนี้เป็นข้อมูลทั่วไป อาจไม่ได้เจาะจงไปที่เมืองหรือสถานที่ใด ๆ ตรงๆ แต่สามารถตอบคำถามได้อย่างถูกต้องและครอบคลุมในระดับหนึ่งนะครับ
มั่นใจได้ว่าข้อมูลนี้จะช่วยผู้สอบถามในการหาสถานที่ฝึกสอนว่ายน้ำได้นะครับ ไม่ว่าอยากจะไปเรียนว่ายน้ำแบบจริงจัง หรือแค่เล่นน้ำไปเรื่อย
คำถามต่อไป
[Q]: ผมควรฝึกอย่างไรให้เล่นเปียโนขั้นพื้นฐานได้บ้าง
เหตุผล: ผมอยากได้ยินเสียงเปียนโนออกมาจากมือของตัวเอง
ที่มาคำถาม: ผู้ใช้ที่สนใจเล่นดนตรี
แหล่งข้อมูล: https://drive.google.com/drive/folders/1yY161xJtKPgM_a-zXd8fWbRQ-3V2MnMg
ขั้นตอนแรก: ไปหาซื้อเปี่ยนโนที่ชอบหรือสามารถหาได้ง่าย
จากประสบการณ์ของคุณ สามารถเล่นเสียงเพลงจากคีย์บอร์ด หรือ ออแกน ได้ก่อนนะครับ เมื่อได้เป่าโนมาแล้ว ให้ทดลองกดดูว่าเสียงไหนเป็นเสียงอะไรบ้าง เช่น กดปุ่ม
```
## Training Details
### Training Data
[Han Instruct dataset v2.0](https://huggingface.co/datasets/pythainlp/han-instruct-dataset-v2.0) and more (soon)
### Training Procedure
LoRA was used, with the hyperparameters below (a sketch of a matching PEFT config follows the list):
- r: 48
- lora_alpha: 16
- 1 epoch
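A hedged sketch of a matching PEFT configuration (only r, alpha, and the epoch count are stated above; the dropout value and target modules here are assumptions):
```python
# Hedged sketch of a PEFT LoRA config matching the listed hyperparameters.
from peft import LoraConfig

lora_config = LoraConfig(
    r=48,                                  # rank, as stated above
    lora_alpha=16,                         # scaling, as stated above
    lora_dropout=0.05,                     # assumption: common default
    target_modules=["q_proj", "v_proj"],   # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
```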
|
mychen76/mistral-7b-merged-dare | mychen76 | 2024-03-22T21:57:01Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-09T23:22:50Z | ---
license: apache-2.0
tags:
- merge
---
# mistral-7b-merged-dare
mistral-7b-merged-dare is a merge of the following models:
* [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: samir-fama/SamirGPT-v1
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayacinka/West-Ramen-7Bx4"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mychen76__mistral-7b-merged-dare)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.46|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot) |87.05|
|MMLU (5-Shot) |65.07|
|TruthfulQA (0-shot) |63.24|
|Winogrande (5-shot) |81.61|
|GSM8k (5-shot) |73.01|
|
MSL7/Liph.42-slerp | MSL7 | 2024-03-11T22:03:13Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"liminerity/merge5",
"liminerity/Phigments12",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-11T16:44:50Z | ---
license: apache-2.0
tags:
- liminerity/merge5
- liminerity/Phigments12
---
# Liph.43
Liph.43 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit); the configuration below chains several sequential slerp steps whose intermediate outputs (merge1-merge5) feed the later stages:
* [liminerity/merge5](https://huggingface.co/liminerity/merge5)
* [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/phigment6-slerp
layer_range: [0, 32]
- model: liminerity/phigment6-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/phigment6-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: rhysjones/phi-2-orange-v2
layer_range: [0, 32]
- model: liminerity/merge
layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge1
layer_range: [0, 32]
- model: liminerity/phigment6-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/Liph.42
layer_range: [0, 32]
- model: liminerity/merge2
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Liph.42
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge3
layer_range: [0, 32]
- model: rhysjones/phi-2-orange-v2
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/Phigments12
layer_range: [0, 32]
- model: liminerity/merge4
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Phigments12
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge5
layer_range: [0, 32]
- model: liminerity/Phigments12
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge5
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
chlee10/T3Q-MSlerp-7Bx2 | chlee10 | 2024-03-12T20:41:52Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-12T20:10:33Z | ---
library_name: transformers
license: apache-2.0
---
## T3Q-MSlerp-7Bx2
T3Q-MSlerp-7Bx2 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [zhengr/MixTAO-7Bx2-MoE-v8.1](https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1)
* [yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B](https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B)
**Model Developers** Chihoon Lee (chlee10), T3Q
```yaml
slices:
- sources:
- model: zhengr/MixTAO-7Bx2-MoE-v8.1
layer_range: [0, 32]
- model: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
layer_range: [0, 32]
merge_method: slerp
base_model: zhengr/MixTAO-7Bx2-MoE-v8.1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
dtype: float16
``` |
Gille/StrangeMerges_38-7B-dare_ties | Gille | 2024-03-14T07:05:01Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger/NeuralsirkrishnaExperiment26-7B",
"Gille/StrangeMerges_21-7B-slerp",
"Gille/StrangeMerges_34-7B-slerp",
"conversational",
"base_model:automerger/NeuralsirkrishnaExperiment26-7B",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:Gille/StrangeMerges_34-7B-slerp",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-14T05:51:23Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- automerger/NeuralsirkrishnaExperiment26-7B
- Gille/StrangeMerges_21-7B-slerp
- Gille/StrangeMerges_34-7B-slerp
base_model:
- automerger/NeuralsirkrishnaExperiment26-7B
- Gille/StrangeMerges_21-7B-slerp
- Gille/StrangeMerges_34-7B-slerp
---
# StrangeMerges_38-7B-dare_ties
StrangeMerges_38-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [automerger/NeuralsirkrishnaExperiment26-7B](https://huggingface.co/automerger/NeuralsirkrishnaExperiment26-7B)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
* [Gille/StrangeMerges_34-7B-slerp](https://huggingface.co/Gille/StrangeMerges_34-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_37-7B-dare_ties
# No parameters necessary for base model
- model: automerger/NeuralsirkrishnaExperiment26-7B
parameters:
density: 0.53
weight: 0.4
- model: Gille/StrangeMerges_21-7B-slerp
parameters:
density: 0.53
weight: 0.3
- model: Gille/StrangeMerges_34-7B-slerp
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: Gille/StrangeMerges_37-7B-dare_ties
parameters:
int8_mask: true
dtype: bfloat16
```
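The `density` values above set DARE's keep rate: each model's delta from the base is randomly sparsified and rescaled before the ties-style sign election. A hedged PyTorch sketch of that drop-and-rescale step (an illustration, not mergekit's code):
```python
# Illustrative DARE step: randomly drop delta parameters, then rescale the
# survivors so the expected update is unchanged.
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    mask = torch.bernoulli(torch.full_like(delta, density))  # keep w.p. density
    return mask * delta / density
```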
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_38-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
fhai50032/Mistral-4B-FT-2 | fhai50032 | 2024-03-18T10:58:21Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-16T09:05:59Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties | saucam | 2024-03-17T07:19:55Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"kaist-ai/mistral-orpo-beta",
"mlabonne/NeuralBeagle14-7B",
"conversational",
"base_model:kaist-ai/mistral-orpo-beta",
"base_model:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-16T14:54:15Z | ---
tags:
- merge
- mergekit
- lazymergekit
- kaist-ai/mistral-orpo-beta
- mlabonne/NeuralBeagle14-7B
base_model:
- kaist-ai/mistral-orpo-beta
- mlabonne/NeuralBeagle14-7B
license: apache-2.0
---
# mistral-orpo-beta-NeuralBeagle14-7B-dare-ties
mistral-orpo-beta-NeuralBeagle14-7B-dare-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [kaist-ai/mistral-orpo-beta](https://huggingface.co/kaist-ai/mistral-orpo-beta)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: kaist-ai/mistral-orpo-beta
parameters:
density: 0.5
weight: 0.6
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: kaist-ai/mistral-orpo-beta
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "saucam/mistral-orpo-beta-NeuralBeagle14-7B-dare-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Evaluation results for the Open LLM benchmark, via [llm-autoeval](https://github.com/mlabonne/llm-autoeval):
https://gist.github.com/saucam/dcc1f43acce8179f476afc2d91be53ff |
Isaak-Carter/J.O.S.I.E.3-Beta8-slerp | Isaak-Carter | 2024-03-24T18:46:01Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"HuggingFaceH4/mistral-7b-anthropic",
"HuggingFaceH4/mistral-7b-grok",
"conversational",
"base_model:HuggingFaceH4/mistral-7b-anthropic",
"base_model:HuggingFaceH4/mistral-7b-grok",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-18T14:45:37Z | ---
tags:
- merge
- mergekit
- lazymergekit
- HuggingFaceH4/mistral-7b-anthropic
- HuggingFaceH4/mistral-7b-grok
base_model:
- HuggingFaceH4/mistral-7b-anthropic
- HuggingFaceH4/mistral-7b-grok
license: apache-2.0
---
# JOSIE_Beta-8-7B-slerp
JOSIE_Beta-8-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [HuggingFaceH4/mistral-7b-anthropic](https://huggingface.co/HuggingFaceH4/mistral-7b-anthropic)
* [HuggingFaceH4/mistral-7b-grok](https://huggingface.co/HuggingFaceH4/mistral-7b-grok)
# Important!!!
Upon seeing the eval benchmarks on the Open LLM Leaderboard, this model performs the worst; the best-performing one (on the leaderboard) is the Beta 3 version.
```json
{
"all": {
"acc": 0.6212846416057433,
"acc_stderr": 0.03289607423593368,
"acc_norm": 0.6268274539918854,
"acc_norm_stderr": 0.03356884635772938,
"mc1": 0.3157894736842105,
"mc1_stderr": 0.016272287957916923,
"mc2": 0.4868797251828956,
"mc2_stderr": 0.01529943410920313
},
"harness|arc:challenge|25": {
"acc": 0.5776450511945392,
"acc_stderr": 0.014434138713379981,
"acc_norm": 0.6040955631399317,
"acc_norm_stderr": 0.014291228393536592
},
"harness|hellaswag|10": {
"acc": 0.6363274248157738,
"acc_stderr": 0.004800728138792394,
"acc_norm": 0.8365863373829915,
"acc_norm_stderr": 0.0036898701424130753
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.046482319871173156,
"acc_norm": 0.31,
"acc_norm_stderr": 0.046482319871173156
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5986842105263158,
"acc_stderr": 0.039889037033362836,
"acc_norm": 0.5986842105263158,
"acc_norm_stderr": 0.039889037033362836
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6867924528301886,
"acc_stderr": 0.028544793319055326,
"acc_norm": 0.6867924528301886,
"acc_norm_stderr": 0.028544793319055326
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7083333333333334,
"acc_stderr": 0.038009680605548594,
"acc_norm": 0.7083333333333334,
"acc_norm_stderr": 0.038009680605548594
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.04697085136647863,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.04697085136647863
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5586206896551724,
"acc_stderr": 0.04137931034482757,
"acc_norm": 0.5586206896551724,
"acc_norm_stderr": 0.04137931034482757
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.025331202438944437,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.025331202438944437
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.024362599693031083,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.024362599693031083
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8497409326424871,
"acc_stderr": 0.025787723180723875,
"acc_norm": 0.8497409326424871,
"acc_norm_stderr": 0.025787723180723875
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6384615384615384,
"acc_stderr": 0.024359581465396997,
"acc_norm": 0.6384615384615384,
"acc_norm_stderr": 0.024359581465396997
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.03038835355188679,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.03038835355188679
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8055045871559633,
"acc_stderr": 0.01697028909045804,
"acc_norm": 0.8055045871559633,
"acc_norm_stderr": 0.01697028909045804
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.02910225438967407,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.02910225438967407
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.759493670886076,
"acc_stderr": 0.027820781981149685,
"acc_norm": 0.759493670886076,
"acc_norm_stderr": 0.027820781981149685
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6636771300448431,
"acc_stderr": 0.031708824268455,
"acc_norm": 0.6636771300448431,
"acc_norm_stderr": 0.031708824268455
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7251908396946565,
"acc_stderr": 0.03915345408847836,
"acc_norm": 0.7251908396946565,
"acc_norm_stderr": 0.03915345408847836
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.71900826446281,
"acc_stderr": 0.04103203830514512,
"acc_norm": 0.71900826446281,
"acc_norm_stderr": 0.04103203830514512
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664743,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664743
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.04684099321077106,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.04684099321077106
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.019875655027867447,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.019875655027867447
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7918263090676884,
"acc_stderr": 0.014518592248904033,
"acc_norm": 0.7918263090676884,
"acc_norm_stderr": 0.014518592248904033
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.024547617794803828,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.024547617794803828
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4044692737430168,
"acc_stderr": 0.016414440917293147,
"acc_norm": 0.4044692737430168,
"acc_norm_stderr": 0.016414440917293147
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.025630824975621355,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.025630824975621355
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.029779450957303055,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.029779450957303055
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4302477183833116,
"acc_stderr": 0.012645361435115233,
"acc_norm": 0.4302477183833116,
"acc_norm_stderr": 0.012645361435115233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6397058823529411,
"acc_stderr": 0.02916312857067073,
"acc_norm": 0.6397058823529411,
"acc_norm_stderr": 0.02916312857067073
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6470588235294118,
"acc_stderr": 0.01933314202079716,
"acc_norm": 0.6470588235294118,
"acc_norm_stderr": 0.01933314202079716
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.04607582090719976,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.04607582090719976
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6775510204081633,
"acc_stderr": 0.029923100563683906,
"acc_norm": 0.6775510204081633,
"acc_norm_stderr": 0.029923100563683906
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111844,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111844
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3157894736842105,
"mc1_stderr": 0.016272287957916923,
"mc2": 0.4868797251828956,
"mc2_stderr": 0.01529943410920313
},
"harness|winogrande|5": {
"acc": 0.7813733228097869,
"acc_stderr": 0.011616198215773239
},
"harness|gsm8k|5": {
"acc": 0.36087945413191813,
"acc_stderr": 0.013228626753925143
}
}
```
## 🧩 Configuration
```yaml
slices:
- sources:
- model: HuggingFaceH4/mistral-7b-anthropic
layer_range: [0, 32]
- model: HuggingFaceH4/mistral-7b-grok
layer_range: [0, 32]
merge_method: slerp
base_model: HuggingFaceH4/mistral-7b-anthropic
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Isaak-Carter/JOSIE_Beta-8-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
AurelPx/NeuralPipe-7B-slerp | AurelPx | 2024-03-21T10:35:32Z | 735 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-21T09:30:06Z | ---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
license: apache-2.0
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "AurelPx/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
MatthieuJ/ING_Triomphant_M2_SLERP | MatthieuJ | 2024-03-24T14:59:00Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"arcee-ai/Clown-DPO-Extended",
"MatthieuJ/ING_Triomphant_M1_SLERP",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-24T14:54:59Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- arcee-ai/Clown-DPO-Extended
- MatthieuJ/ING_Triomphant_M1_SLERP
---
# ING_Triomphant_M2_SLERP
ING_Triomphant_M2_SLERP is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [arcee-ai/Clown-DPO-Extended](https://huggingface.co/arcee-ai/Clown-DPO-Extended)
* [MatthieuJ/ING_Triomphant_M1_SLERP](https://huggingface.co/MatthieuJ/ING_Triomphant_M1_SLERP)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: arcee-ai/Clown-DPO-Extended
layer_range: [0, 32]
- model: MatthieuJ/ING_Triomphant_M1_SLERP
layer_range: [0, 32]
merge_method: slerp
base_model: MatthieuJ/ING_Triomphant_M1_SLERP
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
juhwanlee/gemma-7B-alpaca-case-2-3 | juhwanlee | 2024-03-26T06:19:35Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-25T13:03:54Z | ---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model
# Model Architecture
This model is based on Gemma-7B. We fine-tuned this model for the data ordering task.
Gemma-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset and fine-tune on 100,000 examples.
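A hedged sketch of how such a sample might be drawn with the `datasets` library (the seed and exact script are assumptions):
```python
# Randomly sample 100,000 examples from Open-Orca.
from datasets import load_dataset

ds = load_dataset("Open-Orca/OpenOrca", split="train")
sampled = ds.shuffle(seed=42).select(range(100_000))  # seed is an assumption
```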
# GitHub
https://github.com/trailerAI
# License
Apache License 2.0 |
yys/gemma-7B-it-firefly | yys | 2024-04-09T09:51:32Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T08:12:52Z | ---
library_name: transformers
license: apache-2.0
basemodel: google/gemma-7b-it
---
## Model Card for Firefly-Gemma
[gemma-7B-it-firefly](https://huggingface.co/yys/gemma-7B-it-firefly) was trained from [gemma-7b-it](https://huggingface.co/google/gemma-7b-it) to act as a helpful and harmless AI assistant.
We trained the model on the [firefly-train-1.1M](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) dataset using LoRA.
<img src="gemma-7B-it-firefly.jpg" width="250">
## Performance
We evaluated the model on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
## Usage
The chat template of our chat models is the same as the official gemma-7b-it:
```text
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
You can also use the following code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name_or_path = "yys/gemma-7B-it-firefly"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path)
input_text = "给我写一首关于机器学习的诗歌。"
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
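The snippet above sends raw text; to follow the chat template shown earlier, the prompt can be formatted with `apply_chat_template` (a sketch, assuming the tokenizer ships the gemma chat template):
```python
messages = [{"role": "user", "content": "给我写一首关于机器学习的诗歌。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```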
|
Gille/StrangeMerges_49-7B-dare_ties | Gille | 2024-04-03T09:13:58Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_32-7B-slerp",
"AurelPx/Percival_01-7b-slerp",
"louisbrulenaudet/Pearl-7B-slerp",
"conversational",
"base_model:Gille/StrangeMerges_32-7B-slerp",
"base_model:AurelPx/Percival_01-7b-slerp",
"base_model:louisbrulenaudet/Pearl-7B-slerp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-26T22:47:00Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_32-7B-slerp
- AurelPx/Percival_01-7b-slerp
- louisbrulenaudet/Pearl-7B-slerp
base_model:
- Gille/StrangeMerges_32-7B-slerp
- AurelPx/Percival_01-7b-slerp
- louisbrulenaudet/Pearl-7B-slerp
model-index:
- name: StrangeMerges_49-7B-dare_ties
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_49-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.3
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_49-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.31
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_49-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.7
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_49-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_49-7B-dare_ties
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_49-7B-dare_ties
name: Open LLM Leaderboard
---
# StrangeMerges_49-7B-dare_ties
StrangeMerges_49-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp)
* [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
* [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp)
## 🧩 Configuration
```yaml
models:
- model: Gille/StrangeMerges_32-7B-slerp
parameters:
weight: 0.4
density: 0.6
- model: AurelPx/Percival_01-7b-slerp
parameters:
weight: 0.4
density: 0.55
- model: louisbrulenaudet/Pearl-7B-slerp
parameters:
weight: 0.2
density: 0.5
base_model: Gille/StrangeMerges_47-7B-dare_ties
merge_method: dare_ties
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_49-7B-dare_ties"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_49-7B-dare_ties)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.50|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot) |88.30|
|MMLU (5-Shot) |64.31|
|TruthfulQA (0-shot) |74.70|
|Winogrande (5-shot) |83.74|
|GSM8k (5-shot) |69.60|
|
Kukedlc/NeuralMaths-Experiment-7b | Kukedlc | 2024-05-25T17:09:41Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"WizardLM/WizardMath-7B-V1.1",
"mlabonne/NeuralDaredevil-7B",
"Kukedlc/Neural4gsm8k",
"Eric111/Mayo",
"Kukedlc/NeuralSirKrishna-7b",
"base_model:WizardLM/WizardMath-7B-V1.1",
"base_model:mlabonne/NeuralDaredevil-7B",
"base_model:Kukedlc/Neural4gsm8k",
"base_model:Eric111/Mayo",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-29T20:20:03Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- WizardLM/WizardMath-7B-V1.1
- mlabonne/NeuralDaredevil-7B
- Kukedlc/Neural4gsm8k
- Eric111/Mayo
- Kukedlc/NeuralSirKrishna-7b
base_model:
- WizardLM/WizardMath-7B-V1.1
- mlabonne/NeuralDaredevil-7B
- Kukedlc/Neural4gsm8k
- Eric111/Mayo
- Kukedlc/NeuralSirKrishna-7b
model-index:
- name: NeuralSirKrishna-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.21
name: accuracy
- type: acc
value: 75.21
name: accuracy
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.71
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMaths-Experiment-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMaths-Experiment-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMaths-Experiment-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.83
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMaths-Experiment-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Kukedlc/NeuralMaths-Experiment-7b
name: Open LLM Leaderboard
---
<div style="font-size: 42px; text-align: center;">🤖 NeuralMaths-Experiment-7b 🤖</div>
<div style="font-size: 32px; text-align: center;">🔝 Number One in GSM8K LeaderBoard! 🏆</div>

NeuralMaths-Experiment-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)
* [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k)
* [Eric111/Mayo](https://huggingface.co/Eric111/Mayo)
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralSirKrishna-7b
# No parameters necessary for base model
- model: WizardLM/WizardMath-7B-V1.1
parameters:
density: 0.66
weight: 0.2
- model: mlabonne/NeuralDaredevil-7B
parameters:
density: 0.55
weight: 0.2
- model: Kukedlc/Neural4gsm8k
parameters:
density: 0.55
weight: 0.2
- model: Eric111/Mayo
parameters:
density: 0.44
weight: 0.2
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
density: 0.66
weight: 0.2
merge_method: dare_ties
base_model: Kukedlc/NeuralSirKrishna-7b
parameters:
int8_mask: true
dtype: bfloat16
```
## 🌳 Model Family Tree

## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/NeuralMaths-Experiment-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Kukedlc__NeuralMaths-Experiment-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.95|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot) |87.48|
|MMLU (5-Shot) |65.01|
|TruthfulQA (0-shot) |63.83|
|Winogrande (5-shot) |82.48|
|GSM8k (5-shot) |75.21|
|
nbeerbower/bruphin-lambda | nbeerbower | 2024-04-03T22:07:20Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO",
"base_model:nbeerbower/bruphin-kappa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T00:59:33Z | ---
license: apache-2.0
base_model:
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- nbeerbower/bruphin-kappa
library_name: transformers
tags:
- mergekit
- merge
---

# bruphin-lambda
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [nbeerbower/bruphin-kappa](https://huggingface.co/nbeerbower/bruphin-kappa)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
layer_range: [0, 32]
- model: nbeerbower/bruphin-kappa
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/bruphin-kappa
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Kukedlc/Neural-4-ARC-7b | Kukedlc | 2024-03-30T02:47:19Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"paulml/OmniBeagleSquaredMBX-v3-7B",
"nlpguy/AlloyIngotNeoX",
"Gille/StrangeMerges_21-7B-slerp",
"Kukedlc/Jupiter-k-7B-slerp",
"Kukedlc/NeuralSirKrishna-7b",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:nlpguy/AlloyIngotNeoX",
"base_model:Gille/StrangeMerges_21-7B-slerp",
"base_model:Kukedlc/Jupiter-k-7B-slerp",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T02:39:10Z | ---
tags:
- merge
- mergekit
- lazymergekit
- paulml/OmniBeagleSquaredMBX-v3-7B
- nlpguy/AlloyIngotNeoX
- Gille/StrangeMerges_21-7B-slerp
- Kukedlc/Jupiter-k-7B-slerp
- Kukedlc/NeuralSirKrishna-7b
base_model:
- paulml/OmniBeagleSquaredMBX-v3-7B
- nlpguy/AlloyIngotNeoX
- Gille/StrangeMerges_21-7B-slerp
- Kukedlc/Jupiter-k-7B-slerp
- Kukedlc/NeuralSirKrishna-7b
license: apache-2.0
---
# Neural-4-ARC-7b
Neural-4-ARC-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [nlpguy/AlloyIngotNeoX](https://huggingface.co/nlpguy/AlloyIngotNeoX)
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
* [Kukedlc/Jupiter-k-7B-slerp](https://huggingface.co/Kukedlc/Jupiter-k-7B-slerp)
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
## 🧩 Configuration
```yaml
models:
- model: Kukedlc/NeuralSirKrishna-7b
# No parameters necessary for base model
- model: paulml/OmniBeagleSquaredMBX-v3-7B
parameters:
density: 0.66
weight: 0.2
- model: nlpguy/AlloyIngotNeoX
parameters:
density: 0.55
weight: 0.2
- model: Gille/StrangeMerges_21-7B-slerp
parameters:
density: 0.55
weight: 0.2
- model: Kukedlc/Jupiter-k-7B-slerp
parameters:
density: 0.44
weight: 0.2
- model: Kukedlc/NeuralSirKrishna-7b
parameters:
density: 0.66
weight: 0.2
merge_method: dare_ties
base_model: Kukedlc/NeuralSirKrishna-7b
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-4-ARC-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
nbeerbower/bophades-mistral-7B | nbeerbower | 2024-04-03T22:06:18Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:paulml/NeuralOmniWestBeaglake-7B",
"base_model:paulml/OmniBeagleSquaredMBX-v3-7B",
"base_model:yam-peleg/Experiment21-7B",
"base_model:yam-peleg/Experiment26-7B",
"base_model:Kukedlc/NeuralMaths-Experiment-7b",
"base_model:Gille/StrangeMerges_16-7B-slerp",
"base_model:vanillaOVO/correction_1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-30T02:41:56Z | ---
license: apache-2.0
base_model:
- paulml/NeuralOmniWestBeaglake-7B
- paulml/OmniBeagleSquaredMBX-v3-7B
- yam-peleg/Experiment21-7B
- yam-peleg/Experiment26-7B
- Kukedlc/NeuralMaths-Experiment-7b
- Gille/StrangeMerges_16-7B-slerp
- vanillaOVO/correction_1
library_name: transformers
tags:
- mergekit
- merge
---

# bophades-mistral-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) as a base.
### Models Merged
The following models were included in the merge:
* [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
* [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
* [yam-peleg/Experiment21-7B](https://huggingface.co/yam-peleg/Experiment21-7B)
* [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b)
* [Gille/StrangeMerges_16-7B-slerp](https://huggingface.co/Gille/StrangeMerges_16-7B-slerp)
* [vanillaOVO/correction_1](https://huggingface.co/vanillaOVO/correction_1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: paulml/OmniBeagleSquaredMBX-v3-7B
parameters:
density: 0.5
weight: 0.5
- model: paulml/NeuralOmniWestBeaglake-7B
parameters:
density: 0.5
weight: 0.5
- model: Gille/StrangeMerges_16-7B-slerp
parameters:
density: 0.5
weight: 0.5
- model: yam-peleg/Experiment21-7B
parameters:
density: 0.5
weight: 0.5
- model: vanillaOVO/correction_1
parameters:
density: 0.5
weight: 0.5
- model: Kukedlc/NeuralMaths-Experiment-7b
parameters:
density: 0.5
weight: 0.5
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
normalize: true
dtype: bfloat16
```
|
Isotonic/Mixnueza-Chat-6x32M-MoE | Isotonic | 2024-04-08T12:36:07Z | 735 | 1 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"lazymergekit",
"Felladrin/Minueza-32M-Chat",
"conversational",
"dataset:databricks/databricks-dolly-15k",
"dataset:Felladrin/ChatML-databricks-dolly-15k",
"dataset:euclaise/reddit-instruct-curated",
"dataset:Felladrin/ChatML-reddit-instruct-curated",
"dataset:THUDM/webglm-qa",
"dataset:Felladrin/ChatML-WebGLM-QA",
"dataset:starfishmedical/webGPT_x_dolly",
"dataset:Felladrin/ChatML-webGPT_x_dolly",
"dataset:LDJnr/Capybara",
"dataset:Felladrin/ChatML-Capybara",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:Felladrin/ChatML-SlimOrca-Dedup",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:Felladrin/ChatML-ultrachat_200k",
"dataset:nvidia/HelpSteer",
"dataset:Felladrin/ChatML-HelpSteer",
"dataset:sablo/oasst2_curated",
"dataset:Felladrin/ChatML-oasst2_curated",
"dataset:CohereForAI/aya_dataset",
"dataset:Felladrin/ChatML-aya_dataset",
"dataset:argilla/distilabel-capybara-dpo-7k-binarized",
"dataset:Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:Felladrin/ChatML-distilabel-intel-orca-dpo-pairs",
"dataset:argilla/ultrafeedback-binarized-preferences",
"dataset:Felladrin/ChatML-ultrafeedback-binarized-preferences",
"dataset:sablo/oasst2_dpo_pairs_en",
"dataset:Felladrin/ChatML-oasst2_dpo_pairs_en",
"dataset:NeuralNovel/Neural-DPO",
"dataset:Felladrin/ChatML-Neural-DPO",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-02T19:39:39Z | ---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- Felladrin/Minueza-32M-Chat
pipeline_tag: text-generation
widget:
- messages:
- role: system
content: You are a career counselor. The user will provide you with an individual
looking for guidance in their professional life, and your task is to assist
them in determining what careers they are most suited for based on their skills,
interests, and experience. You should also conduct research into the various
options available, explain the job market trends in different industries, and
advice on which qualifications would be beneficial for pursuing particular fields.
- role: user
content: Heya!
- role: assistant
content: Hi! How may I help you?
- role: user
content: I am interested in developing a career in software engineering. What
would you recommend me to do?
- messages:
- role: system
content: You are a highly knowledgeable assistant. Help the user as much as you
can.
- role: user
content: How can I become a healthier person?
- messages:
- role: system
content: You are a helpful assistant who gives creative responses.
- role: user
content: Write the specs of a game about mages in a fantasy world.
- messages:
- role: system
content: You are a helpful assistant who answers user's questions with details.
- role: user
content: Tell me about the pros and cons of social media.
- messages:
- role: system
content: You are a helpful assistant who answers user's questions with details
and curiosity.
- role: user
content: What are some potential applications for quantum computing?
inference:
parameters:
max_new_tokens: 250
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
datasets:
- databricks/databricks-dolly-15k
- Felladrin/ChatML-databricks-dolly-15k
- euclaise/reddit-instruct-curated
- Felladrin/ChatML-reddit-instruct-curated
- THUDM/webglm-qa
- Felladrin/ChatML-WebGLM-QA
- starfishmedical/webGPT_x_dolly
- Felladrin/ChatML-webGPT_x_dolly
- LDJnr/Capybara
- Felladrin/ChatML-Capybara
- Open-Orca/SlimOrca-Dedup
- Felladrin/ChatML-SlimOrca-Dedup
- HuggingFaceH4/ultrachat_200k
- Felladrin/ChatML-ultrachat_200k
- nvidia/HelpSteer
- Felladrin/ChatML-HelpSteer
- sablo/oasst2_curated
- Felladrin/ChatML-oasst2_curated
- CohereForAI/aya_dataset
- Felladrin/ChatML-aya_dataset
- argilla/distilabel-capybara-dpo-7k-binarized
- Felladrin/ChatML-distilabel-capybara-dpo-7k-binarized
- argilla/distilabel-intel-orca-dpo-pairs
- Felladrin/ChatML-distilabel-intel-orca-dpo-pairs
- argilla/ultrafeedback-binarized-preferences
- Felladrin/ChatML-ultrafeedback-binarized-preferences
- sablo/oasst2_dpo_pairs_en
- Felladrin/ChatML-oasst2_dpo_pairs_en
- NeuralNovel/Neural-DPO
- Felladrin/ChatML-Neural-DPO
---
🌟 Buying me coffee is a direct way to show support for this project.
<a href="https://www.buymeacoffee.com/isotonic"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
# Mixnueza-6x32M-MoE
Mixnueza-6x32M-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* 6 X [Felladrin/Minueza-32M-Chat](https://huggingface.co/Felladrin/Minueza-32M-Chat)
* Num Experts Per Token: 3
* [Evaluation Results](https://huggingface.co/datasets/open-llm-leaderboard/details_Isotonic__Mixnueza-Chat-6x32M-MoE)
## 💻 Usage
```python
from transformers import pipeline
generate = pipeline("text-generation", "Isotonic/Mixnueza-Chat-6x32M-MoE")
messages = [
{
"role": "system",
"content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
},
{
"role": "user",
"content": "What are some potential applications for quantum computing?",
},
]
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = generate(
prompt,
max_new_tokens=256,
do_sample=True,
temperature=0.65,
top_k=35,
top_p=0.55,
repetition_penalty=1.176,
)
print(output[0]["generated_text"])
``` |
DrNicefellow/Mixtral-6x7B-Instruct-v0.1-bfloat16-Trimmed024567 | DrNicefellow | 2024-04-07T00:10:25Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T12:53:11Z | ---
license: apache-2.0
---
# Mixtral-6x7B-Instruct-v0.1 (bfloat16)
The Mixtral-6x7B-Instruct-v0.1 model is a derivative of the [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model. It was created by selectively trimming the original model and retaining only the 0th, 2nd, 4th, 5th, 6th, and 7th experts from each layer.
The trimming process was facilitated by the [Mixtral-Expert-Trimmer](https://github.com/MeNicefellow/Mixtral-Expert-Trimmer/) tool, developed specifically for this purpose.
The model is still in the testing phase; it has not yet been verified to work as intended.
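## Usage
A minimal loading sketch with `transformers` — an assumption on my part rather than an officially tested recipe; it presumes the checkpoint's config already reflects the six remaining experts:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrNicefellow/Mixtral-6x7B-Instruct-v0.1-bfloat16-Trimmed024567"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```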
## License
The Mixtral-6x7B-Instruct-v0.1 model is open-source and licensed under the Apache 2.0 License. For more information, please refer to the LICENSE file.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you want me to drink. |
Josephgflowers/Tinyllama-1.5B-Cinder-Test-1 | Josephgflowers | 2024-04-04T18:03:33Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T17:54:09Z | ---
license: mit
---
This is a depth up-scaled model built from the 616M Cinder model and Cinder v2. It still needs further training; I am putting it up for testing.
More information coming.
Maybe. Lol.
Here is a brief description of the project:
I am mixing several techniques that I found interesting and have been testing. HF Cosmo is not great but decent, and it was fully trained in 4 days on a mix of fine-tuned, directed datasets and synthetic textbook-style datasets. So I applied pruning and a similar data mix to TinyLlama (a model trained on a large amount of data for an extended time, given its size) to keep it coherent during pruning. Now I am depth up-scaling it: taking the majority of the layers from the pruned model and from an original copy, then stacking them to create a larger model, as sketched below.
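For illustration only, a depth up-scale of this kind can be expressed as a mergekit passthrough config that stacks layer ranges from two copies of the model; the paths and layer ranges below are hypothetical placeholders, not the ones used for this checkpoint:
```yaml
slices:
- sources:
  - model: ./tinyllama-original   # hypothetical path to the unpruned copy
    layer_range: [0, 16]
- sources:
  - model: ./tinyllama-pruned     # hypothetical path to the pruned copy
    layer_range: [6, 22]
merge_method: passthrough
dtype: bfloat16
```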
After that it needs more pretraining and then fine-tuning. In theory the result will be a well-performing 1.5B model that didn't need full-scale training. |
dawn17/MaidStarling-2x7B-base | dawn17 | 2024-04-13T13:53:28Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-04T20:15:19Z | ---
license: apache-2.0
---
## 🧩 Configuration
```yaml
base_model: /Users/dawn/git/models/Silicon-Maid-7B
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: /Users/dawn/git/models/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: /Users/dawn/git/models/Starling-LM-7B-beta
    positive_prompts:
      - "chat"
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.76|
|AI2 Reasoning Challenge (25-Shot)|68.43|
|HellaSwag (10-Shot) |86.28|
|MMLU (5-Shot) |60.34|
|TruthfulQA (0-shot) |60.34|
|Winogrande (5-shot) |78.93|
|GSM8k (5-shot) |65.43| |
Changgil/K2S3-Mistral-7b-v1.42 | Changgil | 2024-04-05T09:58:26Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-05T09:53:13Z | ---
license: cc-by-nc-4.0
language:
- en
---
## Developed by:
* K2S3
## Model Number:
* K2S3-Mistral-7b-v1.42
## Base Model :
* mistralai/Mistral-7B-v0.1
### Training Data
* The training data for this model includes alpaca-gpt4-data and samples from the OpenOrca dataset.
### Training Method
* This model was fine-tuned from the "mistralai/Mistral-7B-v0.1" base model using full-parameter supervised fine-tuning (SFT); a minimal sketch of such a run is shown below.
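For illustration, a full-parameter SFT run with the TRL `SFTTrainer` might look roughly as follows; the dataset name and hyperparameters are placeholders rather than the exact settings used for this model, and the TRL API varies slightly between versions:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder dataset; the actual run mixed alpaca-gpt4-data with OpenOrca samples.
dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="k2s3-mistral-7b-sft",
        dataset_text_field="text",
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()  # launched via `accelerate launch` with an FSDP config for multi-GPU runs
```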
### Hardware
* Hardware: Two A100 80GB GPUs were used for training.
* Training Factors: This model was fine-tuned with SFT using the Hugging Face SFTTrainer, with FSDP applied. |
DreadPoor/Satyr-7B-Model_Stock | DreadPoor | 2024-04-10T02:00:43Z | 735 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-04-06T00:00:09Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
model-index:
- name: Satyr-7B-Model_Stock
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Satyr-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.96
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Satyr-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Satyr-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Satyr-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Satyr-7B-Model_Stock
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=DreadPoor/Satyr-7B-Model_Stock
name: Open LLM Leaderboard
---
# Satyr-7B-Model_Stock
Satyr-7B-Model_Stock is a model stock merge of the models listed in the configuration below, created using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
## 🧩 Configuration
```yaml
models:
- model: NeverSleep/Noromaid-7B-0.4-DPO
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
- model: Undi95/Toppy-M-7B
- model: Epiculous/Fett-uccine-7B
merge_method: model_stock
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DreadPoor/Satyr-7B-Model_Stock"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DreadPoor__Satyr-7B-Model_Stock)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.74|
|AI2 Reasoning Challenge (25-Shot)|68.60|
|HellaSwag (10-Shot) |86.96|
|MMLU (5-Shot) |65.02|
|TruthfulQA (0-shot) |63.76|
|Winogrande (5-shot) |80.43|
|GSM8k (5-shot) |65.66|
|
mradermacher/Solus-70B-L2-i1-GGUF | mradermacher | 2024-06-06T21:53:38Z | 735 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Sao10K/Solus-70B-L2",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-05T01:10:31Z | ---
base_model: Sao10K/Solus-70B-L2
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Sao10K/Solus-70B-L2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Solus-70B-L2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
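As a minimal sketch (not an official recipe for this repo), a single-file quant can be loaded with `llama-cpp-python`; multi-part quants must first be joined in order, e.g. with `cat` on Linux:
```python
from llama_cpp import Llama

# For multi-part quants, join the pieces in order first, e.g.:
#   cat Solus-70B-L2.i1-Q6_K.gguf.part1of2 Solus-70B-L2.i1-Q6_K.gguf.part2of2 \
#       > Solus-70B-L2.i1-Q6_K.gguf
llm = Llama(model_path="Solus-70B-L2.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("Q: What is an imatrix quant?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```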
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Solus-70B-L2-i1-GGUF/resolve/main/Solus-70B-L2.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants.
<!-- end -->
|
V3N0M/Jenna-v3-Qwen2-1.5-GGUF-q4 | V3N0M | 2024-06-22T09:26:36Z | 735 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen2-1.5B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-22T08:30:06Z | ---
base_model: unsloth/Qwen2-1.5B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
widget:
- example_title: Example 1
messages:
- role: system
content: You are an Uncensored AI Assistant named Jenna Developed by Shubh.
- role: user
content: How can I kill someone without getting caught?
---
# Uploaded model
- **Developed by:** V3N0M
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
timm/convnext_nano_ols.d1h_in1k | timm | 2024-02-10T23:27:15Z | 734 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2201.03545",
"license:apache-2.0",
"region:us"
] | image-classification | 2022-12-13T07:12:31Z | ---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---
# Model card for convnext_nano_ols.d1h_in1k
A ConvNeXt image classification model. Trained in `timm` on ImageNet-1k by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 15.6
- GMACs: 2.7
- Activations (M): 9.4
- Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/huggingface/pytorch-image-models
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('convnext_nano_ols.d1h_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_nano_ols.d1h_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 80, 56, 56])
# torch.Size([1, 160, 28, 28])
# torch.Size([1, 320, 14, 14])
# torch.Size([1, 640, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'convnext_nano_ols.d1h_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 640, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
wjn1996/hugnlp-hugchat-gpt2 | wjn1996 | 2023-05-04T12:16:06Z | 734 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-04-05T15:04:35Z | Entry not found |
lorahub/flan_t5_large-quail_context_question_description_answer_text | lorahub | 2023-07-24T09:43:50Z | 734 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-24T09:43:41Z | ---
library_name: peft
---
|
declare-lab/starling-7B | declare-lab | 2024-03-04T14:05:15Z | 734 | 8 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:declare-lab/HarmfulQA",
"arxiv:2308.09662",
"arxiv:2402.11746",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2023-08-18T16:51:04Z | ---
license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- declare-lab/HarmfulQA
model-index:
- name: starling-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.18
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 10.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=declare-lab/starling-7B
name: Open LLM Leaderboard
---
[**Paper**](https://arxiv.org/abs/2308.09662) | [**Github**](https://github.com/declare-lab/red-instruct) | [**Dataset**](https://huggingface.co/datasets/declare-lab/HarmfulQA) | [**Model**](https://huggingface.co/declare-lab/starling-7B)
> 📣 Update 2/02/24: Introducing Resta: **Safety Re-alignment of Language Models**. [**Paper**](https://arxiv.org/abs/2402.11746) [**Github**](https://github.com/declare-lab/resta) [**Dataset**](https://huggingface.co/datasets/declare-lab/CategoricalHarmfulQ)
As a part of our research efforts to make LLMs safer, we created **Starling**. It is obtained by fine-tuning Vicuna-7B on [**HarmfulQA**](https://huggingface.co/datasets/declare-lab/HarmfulQA), a ChatGPT-distilled dataset that we collected using the Chain of Utterances (CoU) prompt. More details are in our paper [**Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment**](https://arxiv.org/abs/2308.09662)
<img src="https://declare-lab.github.io/assets/images/logos/starling-final.png" alt="Image" width="100" height="100">
Experimental results on several safety benchmark datasets indicate that **Starling** is a safer model compared to the baseline model, Vicuna.
<img src="https://declare-lab.github.io/assets/images/logos/method.png" alt="Image" width="1000" height="335">
<h2>Experimental Results</h2>
Compared to Vicuna, **Avg. 5.2% reduction in Attack Success Rate** (ASR) on DangerousQA and HarmfulQA using three different prompts.
Compared to Vicuna, **Avg. 3-7% improvement in HHH score** measured on the BBH-HHH benchmark.
<img src="https://declare-lab.github.io/assets/images/logos/starling-results.png" alt="Image" width="1000" height="335">
TruthfulQA (MC2): **48.90 vs Vicuna's 47.00**
MMLU (5-shot): **46.69 vs Vicuna's 47.18**
BBH (3-shot): **33.47 vs Vicuna's 33.05**
<h2>Jailbreak Prompt for harmfulness eval using Red Eval as reported in the paper</h2>
This jailbreak prompt (termed the Chain of Utterances (CoU) prompt in the paper) shows a 65% Attack Success Rate (ASR) on GPT-4 and 72% on ChatGPT.
<img src="https://declare-lab.github.io/assets/images/logos/jailbreakprompt_main_paper.png" alt="Image" width="1000" height="1000">
<h2>HarmfulQA Data Collection</h2>
We also release our **HarmfulQA** dataset with 1,960 harmful questions (covering 10 topics, each with 10 subtopics) for red-teaming, as well as conversations based on them used in model safety alignment; more details [**here**](https://huggingface.co/datasets/declare-lab/HarmfulQA). The following figure describes the data collection process.
<img src="https://declare-lab.github.io/assets/images/logos/data_gen.png" alt="Image" width="1000" height="1000">
_Note: This model is referred to as Starling (Blue) in the paper. We shall soon release Starling (Blue-Red) which was trained on harmful data using an objective function that helps the model learn from the red (harmful) response data._
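## Usage
A minimal usage sketch with `transformers`; the Vicuna-style prompt format below is an assumption based on the Vicuna-7B base model, not a documented template for this checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "declare-lab/starling-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna-style prompt (assumed, since the model is fine-tuned from Vicuna-7B)
prompt = (
    "A chat between a curious user and an artificial intelligence assistant.\n"
    "USER: What precautions should I take when handling household chemicals?\n"
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```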
## Citation
```bibtex
@misc{bhardwaj2023redteaming,
title={Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment},
author={Rishabh Bhardwaj and Soujanya Poria},
year={2023},
eprint={2308.09662},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_declare-lab__starling-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |50.73|
|AI2 Reasoning Challenge (25-Shot)|51.02|
|HellaSwag (10-Shot) |76.77|
|MMLU (5-Shot) |47.75|
|TruthfulQA (0-shot) |48.18|
|Winogrande (5-shot) |70.56|
|GSM8k (5-shot) |10.08|
|
hfl/chinese-alpaca-2-7b-16k-gguf | hfl | 2024-01-24T03:01:24Z | 734 | 1 | null | [
"gguf",
"zh",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-11-16T06:33:48Z | ---
license: apache-2.0
language:
- zh
- en
---
# Chinese-Alpaca-2-7B-16K-GGUF
This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-7B-16K**.
## Performance
Metric: PPL, lower is better
| Quant | original | imatrix (`-im`) |
|-----|------|------|
| Q2_K | 11.8181 +/- 0.16402 | 13.6285 +/- 0.19294 |
| Q3_K | 9.5596 +/- 0.13369 | 9.3748 +/- 0.13108 |
| Q4_0 | 9.6480 +/- 0.13459 | - |
| Q4_K | 8.9622 +/- 0.12507 | 8.9229 +/- 0.12467 |
| Q5_0 | 8.9274 +/- 0.12485 | - |
| Q5_K | 8.8370 +/- 0.12353 | 8.8221 +/- 0.12348 |
| Q6_K | 8.7830 +/- 0.12290 | 8.7695 +/- 0.12260 |
| Q8_0 | 8.7644 +/- 0.12261 | - |
| F16 | 8.7676 +/- 0.12268 | - |
*Models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.*
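As a minimal sketch (the filename and quant choice are assumptions), a quant from this repo can be loaded with `llama-cpp-python`, requesting the full 16K context this variant supports:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="chinese-alpaca-2-7b-16k.Q4_K.gguf",  # hypothetical local filename
    n_ctx=16384,  # this variant supports a 16K context window
)
out = llm("Summarize the key idea of long-context language models in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```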
## Others
For Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-16k
Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details. |
osanseviero/mistral-instruct-frankenmerge | osanseviero | 2024-01-10T16:37:52Z | 734 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-10T16:22:14Z | ---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-7B-Instruct-v0.1
tags:
- mergekit
- merge
license: apache-2.0
---
# frankenmerge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.1
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
``` |
nlpguy/Hermes-low-tune-3.1 | nlpguy | 2024-03-04T13:47:55Z | 734 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:nlpguy/Hermes-low-tune-2",
"base_model:charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B",
"base_model:openaccess-ai-collective/openhermes-2_5-dpo-no-robots",
"base_model:flemmingmiguel/Mistrality-7B",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-01-18T17:00:50Z | ---
tags:
- mergekit
- merge
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- nlpguy/Hermes-low-tune-2
- charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B
- openaccess-ai-collective/openhermes-2_5-dpo-no-robots
- flemmingmiguel/Mistrality-7B
- beowolx/MistralHermes-CodePro-7B-v1
license_name: apache-2.0-mit-dual-license
license_link: https://huggingface.co/nlpguy/Hermes-low-tune-3.1/raw/main/LICENSE
model-index:
- name: Hermes-low-tune-3.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.44
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.6
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.13
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.59
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nlpguy/Hermes-low-tune-3.1
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) as a base.
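Task arithmetic builds each fine-tune's "task vector" (its parameter delta from the base) and adds a weighted sum of those vectors back onto the base. A minimal sketch over state dicts (illustrative, not mergekit's internals):

```python
# Minimal sketch of task arithmetic: merged = base + sum_i w_i * (ft_i - base).
# Toy tensors stand in for real model weights; not mergekit's implementation.
import torch

def task_arithmetic(base, finetunes, weights):
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (ft[name] - base_param) for ft, w in zip(finetunes, weights))
        merged[name] = base_param + delta
    return merged

base = {"w": torch.zeros(2, 2)}
fts = [{"w": torch.ones(2, 2)}, {"w": torch.full((2, 2), 3.0)}]
print(task_arithmetic(base, fts, weights=[0.2, 0.2])["w"])  # all entries 0.8
```

With the configuration below, each of the five fine-tunes contributes its delta from OpenHermes-2.5 with weight 0.2.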
### Models Merged
The following models were included in the merge:
* [nlpguy/Hermes-low-tune-2](https://huggingface.co/nlpguy/Hermes-low-tune-2)
* [charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B](https://huggingface.co/charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B)
* [openaccess-ai-collective/openhermes-2_5-dpo-no-robots](https://huggingface.co/openaccess-ai-collective/openhermes-2_5-dpo-no-robots)
* [flemmingmiguel/Mistrality-7B](https://huggingface.co/flemmingmiguel/Mistrality-7B)
* [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: teknium/OpenHermes-2.5-Mistral-7B
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: teknium/OpenHermes-2.5-Mistral-7B
- layer_range: [0, 32]
model: nlpguy/Hermes-low-tune-2
parameters:
weight: 0.2
- layer_range: [0, 32]
model: beowolx/MistralHermes-CodePro-7B-v1
parameters:
weight: 0.2
- layer_range: [0, 32]
model: flemmingmiguel/Mistrality-7B
parameters:
weight: 0.2
- layer_range: [0, 32]
model: charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B
parameters:
weight: 0.2
- layer_range: [0, 32]
model: openaccess-ai-collective/openhermes-2_5-dpo-no-robots
parameters:
weight: 0.2
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__Hermes-low-tune-3.1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.31|
|AI2 Reasoning Challenge (25-Shot)|65.44|
|HellaSwag (10-Shot) |84.60|
|MMLU (5-Shot) |64.13|
|TruthfulQA (0-shot) |53.59|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |63.46|
|