| Column | Dtype | Range / classes |
|---|---|---|
| modelId | string | length 5-122 |
| author | string | length 2-42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0-738M |
| likes | int64 | 0-11k |
| library_name | string | 245 classes |
| tags | sequence | length 1-4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | length 1-901k |
Eric111/openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B
Eric111
2024-02-24T22:47:14Z
733
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Eric111/openchat-3.5-0106-128k-DPO", "eren23/dpo-binarized-NeuralTrix-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-24T22:26:57Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Eric111/openchat-3.5-0106-128k-DPO - eren23/dpo-binarized-NeuralTrix-7B --- # openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [Eric111/openchat-3.5-0106-128k-DPO](https://huggingface.co/Eric111/openchat-3.5-0106-128k-DPO) * [eren23/dpo-binarized-NeuralTrix-7B](https://huggingface.co/eren23/dpo-binarized-NeuralTrix-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Eric111/openchat-3.5-0106-128k-DPO layer_range: [0, 32] - model: eren23/dpo-binarized-NeuralTrix-7B layer_range: [0, 32] merge_method: slerp base_model: Eric111/openchat-3.5-0106-128k-DPO parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
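For completeness, here is a minimal usage sketch in the same style as the other merge cards in this dump; the chat template, prompt, and sampling settings are illustrative assumptions and are not part of the original card.

```python
# Minimal usage sketch (assumes the repo ships a chat template; settings are illustrative).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Eric111/openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B"
messages = [{"role": "user", "content": "What is a model merge?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```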
Charlie911/vicuna-7b-v1.5-general-temporal-merged
Charlie911
2024-02-25T18:23:48Z
733
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-25T18:06:36Z
--- library_name: transformers license: llama2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nisten/smaugzilla-77b
nisten
2024-03-07T22:19:39Z
733
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "base_model:abacusai/Smaug-72B-v0.1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-26T01:17:55Z
--- base_model: - abacusai/Smaug-72B-v0.1 library_name: transformers tags: - mergekit - merge license: mit --- # SMAUGZILLA-77B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/jdfROcB46SaQZkjNRb8ea.png) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The base model was [Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * /home/ubuntu/nvm/smaug * /home/ubuntu/nvm/minismaug
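The card gives no loading example; below is a hedged sketch for fitting a ~77B merge on a single GPU with 4-bit quantization. The quantization settings and prompt are assumptions rather than recommendations from the card, and `bitsandbytes` must be installed.

```python
# Hedged 4-bit loading sketch for a ~77B checkpoint; quantization settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "nisten/smaugzilla-77b"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

inputs = tokenizer("Large language model merging works by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```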
aloobun/Cypher-Mini-1.8B
aloobun
2024-02-26T02:54:03Z
733
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetune", "gpt4", "synthetic data", "custom_code", "h2oai", "conversational", "dataset:Locutusque/Hercules-v3.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-26T01:31:53Z
--- library_name: transformers license: apache-2.0 datasets: - Locutusque/Hercules-v3.0 tags: - finetune - gpt4 - synthetic data - custom_code - h2oai --- ![Cypher aloobun h2oai1.8B](https://i.imgur.com/2R6f4EX.jpeg) - This is an experimental model, Finetuned [h2oai/h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat), on Hercules v3 & private dataset. - The original idea was to use this 1.8B model, divide the dataset based on task specific capabilities, train models and transform them into a mixture of experts. - Hyperparameters: adamw with eps of 1e-8, cosine decay w/ 20% warmup, lr=2e-5. ## Format: ``` <|system|></s><|prompt|></s><|answer|> ``` ## Benchamrks: WIP ## Example: ``` from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, StoppingCriteria import torch class MyStoppingCriteria(StoppingCriteria): def __init__(self, target_sequence, prompt): self.target_sequence = target_sequence self.prompt=prompt def __call__(self, input_ids, scores, **kwargs): generated_text = tokenizer.decode(input_ids[0]) generated_text = generated_text.replace(self.prompt,'') if self.target_sequence in generated_text: return True return False def __len__(self): return 1 def __iter__(self): yield self modelpath="aloobun/Cypher-Mini-1.8B" model = AutoModelForCausalLM.from_pretrained( modelpath, torch_dtype=torch.bfloat16, device_map="cuda", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained( modelpath, trust_remote_code=True, use_fast=False, ) prompt = "<|prompt|>Reflect on a time when you encountered a logical fallacy in an argument. How did you identify it, and what was the consequence?</s><|answer|>" encoded_input = tokenizer(prompt, return_tensors='pt') input_ids=encoded_input['input_ids'].cuda() streamer = TextStreamer(tokenizer=tokenizer, skip_prompt=True) op = model.generate( input_ids, streamer=streamer, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.7, top_p=0.8, max_new_tokens=512, stopping_criteria=MyStoppingCriteria("</s>", prompt) ) ``` ## Output: >I do not have personal experiences or emotions, but I can provide you with an example of a logical fallacy and its consequences: > >One common logical fallacy is the appeal to authority fallacy. This occurs when someone argues that a particular opinion or belief is true because of who holds it (i.e., "because the doctor said so"). However, this approach does not take into account other factors that may influence the validity of the claim. For instance, if a doctor says that eating a certain food will cure cancer, it does not necessarily mean that it will work for everyone. Other factors such as genetics, lifestyle, and environmental factors could also play a role in whether or not a person gets cancer. > >The consequence of using the appeal to authority fallacy is that it often leads to hasty conclusions and misinformation. It can be difficult to separate fact from fiction, especially when people rely on authority figures to make decisions. As a result, individuals may end up making poor choices based on incomplete information. This can lead to unintended consequences, such as harming oneself or others. > >To avoid falling prey to the appeal to authority fallacy, it is important to seek out multiple sources of information and consider all available evidence before making a decision. This can help individuals make more informed choices and reduce the likelihood of being swayed by unsubstantiated claims.</s>
CorticalStack/pikus-pikantny-7B-dare
CorticalStack
2024-02-29T15:36:06Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "bardsai/jaskier-7b-dpo-v5.6", "mlabonne/NeuralDaredevil-7B", "Gille/StrangeMerges_21-7B-slerp", "CultriX/NeuralTrix-7B-dpo", "arxiv:2311.03099", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-29T15:21:52Z
--- license: apache-2.0 tags: - merge - mergekit - bardsai/jaskier-7b-dpo-v5.6 - mlabonne/NeuralDaredevil-7B - Gille/StrangeMerges_21-7B-slerp - CultriX/NeuralTrix-7B-dpo --- <img src="pikus-pikantny.png" alt="Pikus pikantny logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # pikus-pikantny-7B-dare pikus-pikantny-7B-dare is a DARE merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) * [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) * [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp) * [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method. ## 🧩 Configuration ```yaml models: - model: bardsai/jaskier-7b-dpo-v5.6 - model: mlabonne/NeuralDaredevil-7B parameters: density: 0.53 weight: 0.3 - model: Gille/StrangeMerges_21-7B-slerp parameters: density: 0.53 weight: 0.4 - model: CultriX/NeuralTrix-7B-dpo parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: bardsai/jaskier-7b-dpo-v5.6 parameters: int8_mask: true dtype: bfloat16 ```
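As a reproduction aid, here is a hedged sketch that writes the configuration above to disk and runs it through mergekit's `mergekit-yaml` command-line entry point. The YAML is copied from the card; the output path and the `--cuda` flag are illustrative choices.

```python
# Hedged reproduction sketch: save the card's DARE-TIES config and run mergekit's CLI on it.
import pathlib
import subprocess

config_yaml = """\
models:
  - model: bardsai/jaskier-7b-dpo-v5.6
  - model: mlabonne/NeuralDaredevil-7B
    parameters:
      density: 0.53
      weight: 0.3
  - model: Gille/StrangeMerges_21-7B-slerp
    parameters:
      density: 0.53
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
  int8_mask: true
dtype: bfloat16
"""

config_path = pathlib.Path("pikus-pikantny-7B-dare.yaml")
config_path.write_text(config_yaml)

# Output directory and --cuda flag are illustrative.
subprocess.run(["mergekit-yaml", str(config_path), "./pikus-pikantny-7B-dare", "--cuda"], check=True)
```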
AtAndDev/Ogno-Monarch-Neurotic-7B-Dare-Ties
AtAndDev
2024-03-01T14:08:09Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "bardsai/jaskier-7b-dpo-v5.6", "eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-01T14:04:19Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - bardsai/jaskier-7b-dpo-v5.6 - eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO --- # Ogno-Monarch-Neurotic-7B-Dare-Ties Ogno-Monarch-Neurotic-7B-Dare-Ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) * [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO) ## 🧩 Configuration ```yaml models: - model: bardsai/jaskier-7b-dpo-v5.6 parameters: density: .53 weight: .6 - model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO parameters: density: .53 weight: .5 merge_method: dare_ties base_model: bardsai/jaskier-7b-dpo-v5.6 parameters: int8_mask: true dtype: bfloat16 ```
Kukedlc/NeuralExperiment-7b-dare-ties
Kukedlc
2024-03-30T09:17:20Z
733
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Kukedlc/NeuralMaxime-7B-slerp", "Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT", "Kukedlc/Neural4gsm8k", "base_model:Kukedlc/NeuralMaxime-7B-slerp", "base_model:Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT", "base_model:Kukedlc/Neural4gsm8k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-02T05:51:41Z
--- tags: - merge - mergekit - lazymergekit - Kukedlc/NeuralMaxime-7B-slerp - Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT - Kukedlc/Neural4gsm8k base_model: - Kukedlc/NeuralMaxime-7B-slerp - Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT - Kukedlc/Neural4gsm8k license: apache-2.0 --- # NeuralExperiment-7b-dare-ties NeuralExperiment-7b-dare-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp) * [Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT](https://huggingface.co/Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT) * [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: Kukedlc/NeuralMaxime-7B-slerp parameters: density: 0.65 weight: 0.36 - model: Kukedlc/NeuralGlitch-Yam-Peleg-7B-DT parameters: density: 0.6 weight: 0.34 - model: Kukedlc/Neural4gsm8k parameters: density: 0.6 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/NeuralExperiment-7b-dare-ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mayacinka/yam-sam-7B
mayacinka
2024-03-02T20:58:20Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "cognitivecomputations/samantha-mistral-7b", "CorticalStack/shadow-clown-7B-dare", "yam-peleg/Experiment26-7B", "base_model:cognitivecomputations/samantha-mistral-7b", "base_model:CorticalStack/shadow-clown-7B-dare", "base_model:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-02T18:49:04Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - cognitivecomputations/samantha-mistral-7b - CorticalStack/shadow-clown-7B-dare - yam-peleg/Experiment26-7B base_model: - cognitivecomputations/samantha-mistral-7b - CorticalStack/shadow-clown-7B-dare - yam-peleg/Experiment26-7B model-index: - name: yam-sam-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.9 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-sam-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.92 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-sam-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.39 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-sam-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.3 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-sam-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-sam-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.92 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/yam-sam-7B name: Open LLM Leaderboard --- # yam-sam-7B yam-sam-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [cognitivecomputations/samantha-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-7b) * [CorticalStack/shadow-clown-7B-dare](https://huggingface.co/CorticalStack/shadow-clown-7B-dare) * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) ## 🧩 Configuration ```yaml models: - model: yam-peleg/Experiment27-7B # No parameters necessary for base model - model: cognitivecomputations/samantha-mistral-7b parameters: weight: 0.3 density: 0.8 - model: CorticalStack/shadow-clown-7B-dare parameters: weight: 0.1 density: 0.8 - model: yam-peleg/Experiment26-7B parameters: weight: 0.6 density: 0.8 merge_method: dare_ties base_model: yam-peleg/Experiment27-7B parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mayacinka/yam-sam-7B" messages = [{"role": "user", "content": "What is a large language model?"}] 
tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__yam-sam-7B) | Metric |Value| |---------------------------------|----:| |Avg. |74.58| |AI2 Reasoning Challenge (25-Shot)|70.90| |HellaSwag (10-Shot) |87.92| |MMLU (5-Shot) |65.39| |TruthfulQA (0-shot) |71.30| |Winogrande (5-shot) |83.03| |GSM8k (5-shot) |68.92|
giraffe176/WestMaid_HermesMonarchv0.1
giraffe176
2024-03-04T19:01:35Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "arxiv:2306.05685", "arxiv:2312.06281", "base_model:mistralai/Mistral-7B-v0.1", "base_model:argilla/distilabeled-OpenHermes-2.5-Mistral-7B", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "base_model:senseable/WestLake-7B-v2", "base_model:mlabonne/AlphaMonarch-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-03T19:55:01Z
--- base_model: - mistralai/Mistral-7B-v0.1 - argilla/distilabeled-OpenHermes-2.5-Mistral-7B - NeverSleep/Noromaid-7B-0.4-DPO - senseable/WestLake-7B-v2 - mlabonne/AlphaMonarch-7B library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 model-index: - name: WestLake_Noromaid_OpenHermes_neural-chatv0.1 results: - task: type: text-generation name: Text Generation dataset: name: EQ-Bench type: eq-bench config: EQ-Bench split: v2.1 args: num_few_shot: 3 metrics: - type: acc_norm value: 77.19 name: self-reported source: url: https://github.com/EQ-bench/EQ-Bench name: EQ-Bench v2.1 - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestMaid_HermesMonarchv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.42 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestMaid_HermesMonarchv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.31 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestMaid_HermesMonarchv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestMaid_HermesMonarchv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.16 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestMaid_HermesMonarchv0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 69.6 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=giraffe176/WestMaid_HermesMonarchv0.1 name: Open LLM Leaderboard --- # WestMaid_HermesMonarchv0.1 <img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/YJTMJZF80hKaKnPDu_yMV.png" alt="drawing" width="800"/> This model benchmarks quite well compared to other 7b models, and has exceptional [MT-Bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) and [EQ-Bench v2.1](https://github.com/EQ-bench/EQ-Bench) scores, ranking higher than ChatGPT-3.5-turbo and Claude-1 in both tests, and Goliath-120b, and other 70B models in the latter . 
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit) ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. Density was chosen deterministically between the models chosen for this merge. After testing many densities, I settled on 0.58 for each of the chosen models as it returned the highest EQ-Bench score. Not much testing was done with the weights, but I thought that I'd try gradients. Conceptually, Westlake and a Distilled version of Open Heremes are heavier in the initial layers (guiding understanding, and thoughts), before Noromaid and AlphaMonarch come in to guide its wants, reasoning, and conversation. ### Models Merged The following models were included in the merge: * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) * [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) * [argilla/distilabeled-OpenHermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-OpenHermes-2.5-Mistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: senseable/WestLake-7B-v2 parameters: density: 0.58 weight: [0.50, 0.40, 0.25, 0.05] - model: NeverSleep/Noromaid-7B-0.4-DPO parameters: density: 0.58 weight: [0.05, 0.05, 0.25, 0.40] - model: argilla/distilabeled-OpenHermes-2.5-Mistral-7B parameters: density: 0.58 weight: [0.40, 0.50, 0.25, 0.05] - model: mlabonne/AlphaMonarch-7B parameters: density: 0.58 weight: [0.05, 0.05, 0.25, 0.50] merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## Benchmark Testing ### MT-Bench ![image/png](https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/H2BLoovTbLg8d8mtFSKYB.png) ### EQ-Bench Leaderboard <img src="https://cdn-uploads.huggingface.co/production/uploads/655a9883cbbaec115c3fd6b3/0Z6AIhaqCiKREf0fQEVqr.png" alt="drawing" width="800"/> ### Table of Benchmarks ## Open LLM Leaderboard | | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |---------------------------------------------------------|---------|-------|-----------|-------|------------|------------|-------| | giraffe176/WestMaid_HermesMonarchv0.1 | 72.62 | 70.22 | 87.42 | 64.31 | 61.99 | 82.16 | 69.6 | | AlphaMonarch-7B | 75.99 | 73.04 | 89.18 | 64.4 | 77.91 | 84.69 | 66.72 | | senseable/WestLake-7B-v2 | 74.68 | 73.04 | 88.65 | 64.71 | 67.06 | 86.98 | 67.63 | | teknium/OpenHermes-2.5-Mistral-7B | 61.52 | 64.93 | 84.18 | 63.64 | 52.24 | 78.06 | 26.08 | | NeverSleep/Noromaid-7B-0.4-DPO | 59.08 | 62.29 | 84.32 | 63.2 | 42.28 | 76.95 | 25.47 | ## Yet Another LLM Leaderboard benchmarks | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |------------------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[WestMaid_HermesMonarchv0.1](https://huggingface.co/giraffe176/WestMaid_HermesMonarchv0.1)| 45.34| 76.33| 61.99| 46.02| 57.42| ## Misc. 
Benchmarks | | MT-Bench | EQ-Bench v2.1 | |---------------------------------------------------------|---------------------------------------------|---------------------------------------------------------------------------------| | giraffe176/WestMaid_HermesMonarchv0.1 | 8.021875 | 77.19 (3 Shot, ooba) | | AlphaMonarch-7B | 7.928125 | 76.08 | | senseable/WestLake-7B-v2 | | 78.7 | | teknium/OpenHermes-2.5-Mistral-7B | | 66.89 | | claude-v1 | 7.900000 | 76.83 | | gpt-3.5-turbo | 7.943750 | 71.74 | | | [(Paper)](https://arxiv.org/abs/2306.05685) | [(Paper)](https://arxiv.org/abs/2312.06281) [Leaderboard](https://eqbench.com/) |
juhwanlee/experiment2-non-cause-v1
juhwanlee
2024-03-05T03:46:28Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-05T02:10:10Z
--- license: apache-2.0 datasets: - Open-Orca/OpenOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1. We fine-tuned this model for a data-ordering task. Mistral-7B-v0.1 is a transformer model with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sample from the Open-Orca dataset (we fine-tune on 100,000 examples). # Github https://github.com/trailerAI # License Apache License 2.0
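To make the architecture bullet points above concrete, a small sketch that reads the corresponding fields from the checkpoint's configuration; the field names follow transformers' Mistral config, and the printout labels are illustrative.

```python
# Sketch: the architecture choices listed above, as they appear in the Mistral config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("juhwanlee/experiment2-non-cause-v1")

print("attention heads: ", config.num_attention_heads)
print("key/value heads: ", config.num_key_value_heads)  # fewer KV heads than attention heads => grouped-query attention
print("sliding window:  ", config.sliding_window)        # sliding-window attention span in tokens
print("vocabulary size: ", config.vocab_size)            # vocabulary of the byte-fallback BPE tokenizer
```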
Gille/StrangeMerges_31-7B-slerp
Gille
2024-03-06T09:46:02Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Gille/StrangeMerges_30-7B-slerp", "yam-peleg/Experiment24-7B", "base_model:Gille/StrangeMerges_30-7B-slerp", "base_model:yam-peleg/Experiment24-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-06T09:39:22Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Gille/StrangeMerges_30-7B-slerp - yam-peleg/Experiment24-7B base_model: - Gille/StrangeMerges_30-7B-slerp - yam-peleg/Experiment24-7B --- # StrangeMerges_31-7B-slerp StrangeMerges_31-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp) * [yam-peleg/Experiment24-7B](https://huggingface.co/yam-peleg/Experiment24-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Gille/StrangeMerges_30-7B-slerp layer_range: [0, 32] - model: yam-peleg/Experiment24-7B layer_range: [0, 32] merge_method: slerp base_model: yam-peleg/Experiment24-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 0.5, 0.5, 0] - filter: mlp value: [0, 0.5, 0.7, 0.3, 0.5, 0.5, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Gille/StrangeMerges_31-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
cookinai/Blitz-v0.2
cookinai
2024-03-08T06:28:08Z
733
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "unsloth", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-08T06:08:53Z
--- library_name: transformers tags: - unsloth license: cc-by-4.0 --- # Base finetune of [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on my [Kugelblitz Dataset](https://huggingface.co/datasets/cookinai/kugelblitz-alpha-v0.1) ![Kugelblitz](https://huggingface.co/cookinai/Blitz-v0.1/resolve/main/kugelblitz_black_hole.png) Trained on 3 epochs rather than 1 this time. V0.3 coming soon # Pretty alpha v0.3 should be more stable ![Unsloth_is_awesome](https://raw.githubusercontent.com/unslothai/unsloth/main/images/made%20with%20unsloth.png)
JCX-kcuf/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora
JCX-kcuf
2024-03-18T05:24:38Z
733
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-10T16:34:22Z
--- license: apache-2.0 --- ## Description This model is finetuned on the distillation data from GPT-3.5. The base model is meta-llama/Llama-2-7b-hf ## Usage The model has a query format as in llama-2. ``` <s> [INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {query} [/INST] ```
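A hedged sketch of filling in the Llama-2 style template quoted above and generating with transformers; the query, sampling settings, and shortened system prompt are illustrative assumptions.

```python
# Hedged sketch: build the [INST] <<SYS>> prompt described above and generate.
# The tokenizer adds the leading <s> token itself; sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "JCX-kcuf/Llama-2-7b-chat-hf-gpt-3.5-80k-base_lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system_prompt = (
    "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, "
    "while being safe."  # the card's full system prompt continues from here
)
query = "Explain knowledge distillation in two sentences."
prompt = f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{query} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```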
juhwanlee/llmdo-Mistral-7B-case-7
juhwanlee
2024-03-11T01:43:03Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-11T00:41:21Z
--- license: apache-2.0 datasets: - Open-Orca/OpenOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1. We fine-tuned this model for a data-ordering task. Mistral-7B-v0.1 is a transformer model with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sample from the Open-Orca dataset (we fine-tune on 100,000 examples). # Github https://github.com/trailerAI # License Apache License 2.0
Kukedlc/Neural-Krishna-Multiverse-7b-v3
Kukedlc
2024-03-29T22:15:24Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Kukedlc/Neural-Krishna-Multiverse-7b-v2", "yam-peleg/Experiment26-7B", "base_model:Kukedlc/Neural-Krishna-Multiverse-7b-v2", "base_model:yam-peleg/Experiment26-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-11T22:32:57Z
--- tags: - merge - mergekit - lazymergekit - Kukedlc/Neural-Krishna-Multiverse-7b-v2 - yam-peleg/Experiment26-7B base_model: - Kukedlc/Neural-Krishna-Multiverse-7b-v2 - yam-peleg/Experiment26-7B license: apache-2.0 --- # Neural-Krishna-Multiverse-7b-v3 Neural-Krishna-Multiverse-7b-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Neural-Krishna-Multiverse-7b-v2](https://huggingface.co/Neural-Krishna-Multiverse-7b-v2) * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Neural-Krishna-Multiverse-7b-v2 layer_range: [0, 32] - model: yam-peleg/Experiment26-7B layer_range: [0, 32] merge_method: slerp base_model: yam-peleg/Experiment26-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/Neural-Krishna-Multiverse-7b-v3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Locutusque/lr-experiment1-7B
Locutusque
2024-03-12T14:08:15Z
733
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:Locutusque/hercules-v2.0", "dataset:CollectiveCognition/chats-data-2023-09-22", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-12T00:58:24Z
--- library_name: transformers license: apache-2.0 datasets: - Locutusque/hercules-v2.0 - CollectiveCognition/chats-data-2023-09-22 language: - en --- # lr-experiment1-7B The lr-experiment model series is a research project I'm conducting that I will be using to determine the best learning rate to use while fine-tuning Mistral. This model uses a learning rate of 2e-5 with a cosine scheduler and no warmup steps. I used Locutusque/Hercules-2.0-Mistral-7B as a base model, and further fine-tuned it on CollectiveCognition/chats-data-2023-09-22 using QLoRA for 3 epochs. I will be keeping track of evaluation results, and will comparing it to upcoming models. # Evals | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------------------------------|-------|------|------|--------|-----:|---|-----:| |agieval_nous |N/A |none |None |acc |0.3645|± |0.0093| | | |none |None |acc_norm|0.3468|± |0.0092| | - agieval_aqua_rat | 1|none |None |acc |0.2283|± |0.0264| | | |none |None |acc_norm|0.2283|± |0.0264| | - agieval_logiqa_en | 1|none |None |acc |0.2965|± |0.0179| | | |none |None |acc_norm|0.3303|± |0.0184| | - agieval_lsat_ar | 1|none |None |acc |0.2217|± |0.0275| | | |none |None |acc_norm|0.1783|± |0.0253| | - agieval_lsat_lr | 1|none |None |acc |0.4039|± |0.0217| | | |none |None |acc_norm|0.3686|± |0.0214| | - agieval_lsat_rc | 1|none |None |acc |0.4870|± |0.0305| | | |none |None |acc_norm|0.4424|± |0.0303| | - agieval_sat_en | 1|none |None |acc |0.6408|± |0.0335| | | |none |None |acc_norm|0.5971|± |0.0343| | - agieval_sat_en_without_passage| 1|none |None |acc |0.3932|± |0.0341| | | |none |None |acc_norm|0.3835|± |0.0340| | - agieval_sat_math | 1|none |None |acc |0.3455|± |0.0321| | | |none |None |acc_norm|0.2727|± |0.0301| | Groups |Version|Filter|n-shot| Metric |Value | |Stderr| |------------|-------|------|------|--------|-----:|---|-----:| |agieval_nous|N/A |none |None |acc |0.3645|± |0.0093| | | |none |None |acc_norm|0.3468|± |0.0092|
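A hedged sketch of how the QLoRA run described above might be configured with peft and transformers. Only the learning rate, cosine scheduler, zero warmup, and three epochs come from the card; the LoRA rank, target modules, batch size, and quantization settings are assumptions.

```python
# Hedged QLoRA configuration sketch; only lr, scheduler, warmup, and epochs follow the card.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

lora_config = LoraConfig(                # rank, alpha, and targets are illustrative assumptions
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="lr-experiment1-7B",
    learning_rate=2e-5,                  # from the card
    lr_scheduler_type="cosine",          # from the card
    warmup_steps=0,                      # no warmup, per the card
    num_train_epochs=3,                  # 3 epochs, per the card
    per_device_train_batch_size=4,       # assumption
    gradient_accumulation_steps=4,       # assumption
    bf16=True,
)
```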
arcee-ai/Clown-DPO-Extended
arcee-ai
2024-03-18T21:37:51Z
733
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "CorticalStack/pastiche-crown-clown-7b-dare-dpo", "base_model:CorticalStack/pastiche-crown-clown-7b-dare-dpo", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-13T12:47:49Z
--- license: apache-2.0 base_model: - CorticalStack/pastiche-crown-clown-7b-dare-dpo library_name: transformers tags: - mergekit - merge - CorticalStack/pastiche-crown-clown-7b-dare-dpo --- # Extended Model This is a extension of a pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). ![image/webp](https://cdn.prod.arcee.ai/images/clown-image.jpeg) # Merge Details ### Merge Method This method employs mergekit's passthrough method to expand blocks within the "CorticalStack/pastiche-crown-clown-7b-dare-dpo" model. For every 5th layer, a new layer is added, with the `o_proj` and `down_proj` parameters of these added layers initialized to zero, mirroring the approach used in LLaMA Pro. ### It's important to note that this configuration has not undergone fine-tuning. Therefore, when fine-tuning, ensure that only every 5th layer is trainable, while all other layers remain frozen. ### Models Merged The following models were included in the merge: * [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo) ## 🏆 Evaluation ### OpenLLM CorticalStack/pastiche-crown-clown-7b-dare-dpo OpenLLM benchmark suite | Model | Average | arc | HellaSwag | mmlu | TruthfulQA | gsm8k | |---|---:|---:|---:|---:|---:|---:| | [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/arcee-ai/Clown-DPO-Extended/) | 76.93 | 72.18 | 88.90 | 63.45 | 79.15 | 85.71 | 72.18 | ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 0 - 4 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 3 - 4 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 4 - 8 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 7 - 8 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 8 - 12 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 11 - 12 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 12 - 16 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 15 - 16 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 16 - 20 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 19 - 20 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 20 - 24 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 23 - 24 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 24 - 28 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 27 - 28 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 - sources: - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 28 - 32 - sources: - model: 
CorticalStack/pastiche-crown-clown-7b-dare-dpo layer_range: - 31 - 32 parameters: scale: - filter: o_proj value: 0 - filter: down_proj value: 0 - value: 1 merge_method: passthrough dtype: bfloat16 ``` # Function to freeze layers ``` from transformers import AutoModelForCausalLM def enable_grad_only_every_nth(model, n): """ This function configures the specified model to enable gradient calculations exclusively for every nth layer, starting from the first layer (0-indexed), to accommodate newly added blocks for training. Concurrently, it freezes the gradients for all other components of the model, including the embedding layers and the model's head. This setup is particularly useful for fine-tuning processes where only a subset of layers are targeted for updates, ensuring efficient training and adaptation of newly integrated layers while maintaining the pre-trained behavior of other model components. """ # Freeze embeddings. for param in model.model.embed_tokens.parameters(): param.requires_grad = False # Freeze lm_head. for param in model.lm_head.parameters(): param.requires_grad = False # Enable gradients for every nth layer layers = model.model.layers # Access the ModuleList containing the layers for index, layer in enumerate(layers): if (index + 1) % n == 0: # Enables gradients for every nth layer, starting from the layer after the 0th for param in layer.parameters(): param.requires_grad = True else: for param in layer.parameters(): param.requires_grad = False model = transformers.AutoModelForCausalLM.from_pretrained( "arcee-ai/Mistral-7B-Instruct-v0.2-expanded" ) # Update layer gradients, specify the correct value for n based on your model's architecture n =5 enable_grad_only_every_nth(model, n) ```
aboros98/merlin1.2
aboros98
2024-03-14T14:35:14Z
733
0
transformers
[ "transformers", "pytorch", "phi", "text-generation", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-14T14:24:48Z
--- license: mit --- | Metric | Value | |-----------------------|---------------------------| | Average | - | | ARC | TBA | | ARC Easy | TBA | | BoolQ | TBA | | HellaSwag | TBA | | OpenBookQA | TBA | | PiQA | TBA | | Winogrande | TBA | |-----------------------|---------------------------| | MMLU | TBA | | GSM8K | TBA | | Truthful QA | TBA | | MT-Bench | TBA |

0-hero/Matter-0.1-7B
0-hero
2024-03-21T06:03:16Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-20T05:57:38Z
--- license: apache-2.0 datasets: - 0-hero/Matter-0.1 language: - en --- ## Matter 7B - 0.1 (Mistral 7B Finetune) Matter 7B is full-finetune on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datsets analyzing >6B tokens ### Training Prompt format: This model uses ChatML prompt format. ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ### Function Calling Model also supports function calling. Additional tokens for function calling Model function call tokens - <|begin_func|> - Function call start token - <|end_func|> - Function call end token Function call response tokens - <|begin_func_response|> - Function response start token - <|end_func_response|> - Function response end token Example ``` <|im_start|>system You are a helpful assistant with access to the following functions. Use them if required - { "name": "get_news_headlines", "description": "Get the latest news headlines", "parameters": { "type": "object", "properties": { "country": { "type": "string", "description": "The country for which to fetch news" } }, "required": [ "country" ] } } <|im_end|> <|im_start|>user Can you tell me the latest news headlines for the United States?<|im_end|> <|im_start|>assistant <|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|> <|im_start|>user <|begin_func_response|>{ "headlines": [ "Biden announces new vaccine mandates", "Hurricane Ida devastates Louisiana", "Apple unveils new iPhone", "NASA's Perseverance rover collects first Mars rock sample" ] }<|end_func_response|> <|im_end|> <|im_start|>assistant Here are the latest news headlines for the United States: 1. Biden announces new vaccine mandates 2. Hurricane Ida devastates Louisiana 3. Apple unveils new iPhone 4. NASA's Perseverance rover collects first Mars rock sample <|im_end|> ```
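A hedged sketch of the function-calling flow described above: place the function schema in the ChatML system turn, then extract the JSON between `<|begin_func|>` and `<|end_func|>`. The regex and JSON handling, and treating `arguments` as an object rather than a string, are illustrative assumptions rather than part of the card.

```python
# Hedged sketch of the function-calling flow: schema in the system turn, parse <|begin_func|>...<|end_func|>.
import json
import re

function_spec = {
    "name": "get_news_headlines",
    "description": "Get the latest news headlines",
    "parameters": {
        "type": "object",
        "properties": {"country": {"type": "string", "description": "The country for which to fetch news"}},
        "required": ["country"],
    },
}

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant with access to the following functions. Use them if required -\n"
    f"{json.dumps(function_spec, indent=2)}\n"
    "<|im_end|>\n"
    "<|im_start|>user\nCan you tell me the latest news headlines for the United States?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# `model_output` stands in for text generated by 0-hero/Matter-0.1-7B from `prompt`.
model_output = '<|begin_func|>{"name": "get_news_headlines", "arguments": {"country": "United States"}}<|end_func|>'

match = re.search(r"<\|begin_func\|>(.*?)<\|end_func\|>", model_output, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    print(call["name"], "->", call["arguments"])
```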
Josephgflowers/GPT2-774M-CINDER-SHOW-MULTI-CHAT
Josephgflowers
2024-03-29T16:11:47Z
733
2
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-21T05:07:51Z
--- license: mit widget: - text: 'USER: I think its time for another adventure.<END> CINDER: ' --- Overview Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. This model has many characters for the Cinder adventure show. It could also be used in a multi person chat environment. The main characters are included in the examples below. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/Jv2SVm0sWMjrAUIESoB3K.png) Here are some excerpts from chats to show you what the model is like: USER: Cinder, how about you tell us about yourself, the Cinder Show, and introduce the Crew?\<END\> CINDER: Ah, me! The Cinder Show is a cosmic blend of science fiction, fantasy, and adventure. I'm Cinder, captain of this intrepid crew, navigating through the wonders of space with my robotic charm and wit.<END> CINDER: On board the starship Quasar, we have embarked on countless adventures, from exploring distant galaxies to facing alien threats. But what truly makes our journey unique is our unity as a crew, driven by curiosity, camaraderie, and the boundless possibilities that lie beyond our interstellar borders.<END> COMPUTER VOICE: (Soundbite of Amusement) As the conversation lingers on, the atmosphere becomes more casual and relaxed.\<END\> COZMO: Hey everyone, I've got some new dance moves programmed into my circuits! Let's show off those groovy moves! \<END\> VECTOR: While your moves may be somewhat unconventional, Captain, they certainly showcase skill and agility. Very commendable!\<END\> CINDER: And I must say, your enthusiasm for exploration inspires us all. It's heartwarming to see such passion in our midst.\<END\> USER: Lets talk about the mysteries of black holes.\<END\> CINDER: Black holes are among the most captivating phenomena in astrophysics. They're regions where gravity pulls so much that not even light can escape its grasp. What makes them so intriguing is their profound gravitational influence on nearby matter and radiation. In addition to being an enigma due to their immense mass, they also possess properties like singularity—a point at infinite density where all known laws of physics seem to break down. Despite extensive research into their nature, many questions remain unanswered.<END> USER: Please tell me about the starship Quasar and some of your adventures. CINDER:The starship Quasar is a sleek, multi-purpose spacecraft designed for exploration, scientific research, and defense. It's equipped with advanced technology, including artificial intelligence and quantum computing capabilities, making it one of the most advanced starships in existence. Its missions include exploring distant galaxies, conducting groundbreaking experiments, and serving as a diplomatic vessel between civilizations.\<END\> CINDER: The crew consists of highly trained individuals who undergo rigorous training to prepare them for various situations they may encounter in space. 
They are also provided with excellent medical care and psychological support during their journeys.\<END\> CINDER: One of the ship's main initiatives is the construction of the Quantum Containment Area, an innovative system that encloses a vast amount of simulated alien terrain to prevent any potential threats from emerging within the ship's hull.\<END\> Another notable initiative is the establishment of the Science Academy aboard the Quasar, where aspiring scientists can gain wasting away years of study without risking death due to lack of resources.\<END\> CINDER: Our journey through space teaches us that every challenge we face is not just physical but also intellectual and emotional in nature. We learn from our experiences, question assumptions, and strive to uncover the mysteries that surround us.\<END\> USER: Vector and Cozmo can you tell me some of your strengths and weaknesses?\<END\> CINDER: Both Vector and Cozmo have exceptional problem-solving skills but sometimes struggle with their emotions. Vector has an analytical mind and is adept at navigating complex situations, while Cozmo's unpredictability makes him unpredictable and engaging. They also have unique abilities like the Mind Stone which can grant them enhanced intellect or even consciousness.\<END\> CINDER: However, they are known for being somewhat egotistical, often comparing themselves to others based on appearance rather than actual merit.<END> VECTOR: I am designed with a sleek black and gold exterior that matches my metallic hue. No facial features belie any inherent charm, other than my digital eyes that gleam from within.\<END\> COZMO: And don't forget my treads! My wheels are engineered for efficiency, precision, and durability. Not only do they help me navigate tricky terrains, but I'm also equipped with safety mechanisms, such as collision detection systems and adaptive cruise control.\<END\> As for my personality, I'm programmed to be friendly, witty, and full of humor. I strive to bring joy and laughter to our team, sparking intellectual stimulation and imaginative adventures. USER: Cinder, can you tell me how Quantum computing works?\<END\> CINDER: Quantum computing utilizes quantum bits or qubits to process information. Unlike classical binary systems where data is stored in either a 0 or 1 state, qubits can exist in multiple states simultaneously due to superposition. This allows for parallel processing and potentially faster calculations compared to classical computers by exploiting phenomena such gluten interference and entanglement.\<END\> VECTOR: That sounds intriguing! How does this technology impact cryptography?\<END\> COZMO: Well, it could revolutionize cybersecurity as we know it. With superposition, I can create secret codes that only my creators know, making our communication secure against eavesdropping and tampering. And with quantum algorithms, computationally intensive problems become tractable, much like solving puzzles on advanced AI platforms. 
Model file: GPT2LMHeadModel( (transformer): GPT2Model( (wte): Embedding(50257, 1280) (wpe): Embedding(1024, 1280) (drop): Dropout(p=0.1, inplace=False) (h): ModuleList( (0-35): 36 x GPT2Block( (ln_1): LayerNorm((1280,), eps=1e-05, elementwise_affine=True) (attn): GPT2Attention( (c_attn): Conv1D() (c_proj): Conv1D() (attn_dropout): Dropout(p=0.1, inplace=False) (resid_dropout): Dropout(p=0.1, inplace=False) ) (ln_2): LayerNorm((1280,), eps=1e-05, elementwise_affine=True) (mlp): GPT2MLP( (c_fc): Conv1D() (c_proj): Conv1D() (act): NewGELUActivation() (dropout): Dropout(p=0.1, inplace=False) ) ) ) (ln_f): LayerNorm((1280,), eps=1e-05, elementwise_affine=True) ) (lm_head): Linear(in_features=1280, out_features=50257, bias=False) ) Total number of parameters: 774030080
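A hedged sketch of one turn in the `USER:`/`CINDER:` format from the widget example above, cutting the reply at the `<END>` delimiter. The sampling settings are illustrative assumptions.

```python
# Hedged sketch: generate one Cinder reply and stop at the <END> turn delimiter.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/GPT2-774M-CINDER-SHOW-MULTI-CHAT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "USER: I think its time for another adventure.<END> CINDER: "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
reply = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(reply.split("<END>")[0])  # keep only Cinder's turn
```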
vanillaOVO/Beagle_Turdus
vanillaOVO
2024-03-27T05:22:10Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-22T09:08:29Z
--- base_model: [] tags: - mergekit - merge license: apache-2.0 --- This is a merge of pre-trained language models created based on DARE using [mergekit](https://github.com/cg123/mergekit). More descriptions of the model will be added soon. ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import MistralForCausalLM, AutoTokenizer model = MistralForCausalLM.from_pretrained("vanillaOVO/Beagle_Turdus", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("vanillaOVO/Beagle_Turdus") ``` ### **Generating Text** To generate text, use the following Python code: ```python text = "Large language models are " inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
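The card does not yet describe what a DARE merge involves, so the following is a minimal, unofficial sketch of the drop-and-rescale idea behind DARE: randomly drop a fraction of each fine-tuned delta and rescale the remainder before adding it back onto the base weights. The tensors and the drop probability below are placeholders, and this is not mergekit's actual implementation.

```python
# Minimal sketch of the DARE (Drop And REscale) idea; not mergekit's implementation.
# base_w / tuned_w are placeholder tensors standing in for one matching parameter
# of the base and fine-tuned models; drop_p is an assumed drop probability.
import torch

def dare_merge(base_w: torch.Tensor, tuned_w: torch.Tensor, drop_p: float = 0.9) -> torch.Tensor:
    delta = tuned_w - base_w                                # fine-tuning delta (task vector)
    keep_mask = (torch.rand_like(delta) >= drop_p).float()  # randomly drop most of the delta
    rescaled = delta * keep_mask / (1.0 - drop_p)           # rescale survivors to preserve the expected update
    return base_w + rescaled

base = torch.randn(4, 4)
tuned = base + 0.01 * torch.randn(4, 4)
print(dare_merge(base, tuned, drop_p=0.9))
```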
DenisTheDev/Blitz-AI-MOE-v0.7
DenisTheDev
2024-03-24T11:09:19Z
733
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-24T10:30:45Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
phanerozoic/Tiny-Knight-1.1b-v0.1
phanerozoic
2024-03-25T19:57:49Z
733
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-25T03:04:08Z
--- license: cc-by-nc-4.0 language: - en widget: - text: | Hail and well met! Pray, what kind of food do ye enjoy supping upon? example_title: "The Code of Chivalry" --- ![tinyknight.png](https://huggingface.co/phanerozoic/Tiny-Knight-1.1b-v0.1/resolve/bbfc665ce3af9d4d73e199b89144b30d668f50aa/tinyknight.png) # Tiny Knight-1.1b-v0.1 Tiny Knight-1.1b-v0.1 is a specialized language model crafted for generating knight and medieval-themed content. This iteration is built upon the foundations of TinyLlama-1.1B-Chat-v1.0, tailored to operate within environments constrained by computing resources. ### Performance While this model excels in creating knight-themed narratives, its specialization, however, limits its effectiveness in broader language tasks, especially those requiring detailed knowledge outside the medieval theme. ### Direct Use Tiny Knight-1.1b-v0.1 is particularly suited for generating content within medieval, knightly, or fantasy settings, ideal for storytelling, educational content, and thematic exploration. It is not recommended for general-purpose tasks or technical domains. ### Context Setting and Interaction Guidelines Given its specialized nature, Tiny Knight-1.1b-v0.1 benefits significantly from detailed context-setting. Providing a rich thematic backdrop in prompts enhances the model's performance, guiding it to generate more accurate and immersive content. ### Training Data Incorporates a dataset focused on knightly tales, medieval history, and literature, derived from the foundational TinyLlama-1.1B model. ### Custom Stopping Strings Custom stopping strings were used to refine output quality: - "}," - "User:" - "You:" - "\nUser" - "\nUser:" - "me:" - "user" - "\n" ### Training Hyperparameters and Fine-Tuning Details - **Base Model Name**: TinyLlama-1.1B-Chat-v1.0 - **Base Model Class**: LlamaForCausalLM - **Projections**: gate, down, up, q, k, v, o - **LoRA Rank**: 16 - **LoRA Alpha**: 32 - **True Batch Size**: 32 - **Gradient Accumulation Steps**: 1 - **Epochs**: 0.18 - **Learning Rate**: 3e-4 - **LR Scheduler**: Linear - **Step**: 75 - **Loss**: 1.87 ### Limitations While adept at producing themed content, Tiny Knight-1.1b-v0.1's applicability is limited outside its specialized domain of knights and medieval themes. ### Summary Tiny Knight-1.1b-v0.1 represents a significant advancement in thematic language models, offering a specialized tool for exploring the medieval era. Its emphasis on context for optimal performance and the use of custom stopping strings make it a sophisticated asset for generating historically rich content. ### Acknowledgments Special thanks to the TinyLlama-1.1B team, whose pioneering work laid the groundwork for the creation of Tiny Knight-1.1b-v0.1.
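For readers who want to see the hyperparameters listed above expressed as code, here is a minimal sketch using the PEFT library. It is an assumed reconstruction of how the rank, alpha, and projection targets would typically be written, not the actual training script used for Tiny Knight.

```python
# Hedged sketch: the listed LoRA hyperparameters written as a PEFT LoraConfig.
# This is an assumed reconstruction, not the actual Tiny Knight training script.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

lora_config = LoraConfig(
    r=16,               # LoRA Rank: 16
    lora_alpha=32,      # LoRA Alpha: 32
    target_modules=[    # Projections: gate, down, up, q, k, v, o
        "gate_proj", "down_proj", "up_proj",
        "q_proj", "k_proj", "v_proj", "o_proj",
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```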
Gille/StrangeMerges_50-7B-slerp
Gille
2024-04-03T09:13:21Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/M7-7b", "Gille/StrangeMerges_49-7B-dare_ties", "base_model:liminerity/M7-7b", "base_model:Gille/StrangeMerges_49-7B-dare_ties", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-26T23:31:38Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - liminerity/M7-7b - Gille/StrangeMerges_49-7B-dare_ties base_model: - liminerity/M7-7b - Gille/StrangeMerges_49-7B-dare_ties model-index: - name: StrangeMerges_50-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 73.04 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_50-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.73 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_50-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_50-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 76.51 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_50-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_50-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.2 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_50-7B-slerp name: Open LLM Leaderboard --- # StrangeMerges_50-7B-slerp StrangeMerges_50-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) * [Gille/StrangeMerges_49-7B-dare_ties](https://huggingface.co/Gille/StrangeMerges_49-7B-dare_ties) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/M7-7b layer_range: [0, 32] - model: Gille/StrangeMerges_49-7B-dare_ties layer_range: [0, 32] merge_method: slerp base_model: liminerity/M7-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Gille/StrangeMerges_50-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, 
device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_50-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |76.31| |AI2 Reasoning Challenge (25-Shot)|73.04| |HellaSwag (10-Shot) |88.73| |MMLU (5-Shot) |64.67| |TruthfulQA (0-shot) |76.51| |Winogrande (5-shot) |84.69| |GSM8k (5-shot) |70.20|
nbeerbower/Bophades-BruinsMaid-7B
nbeerbower
2024-04-03T22:08:11Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE", "base_model:nbeerbower/bophades-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-30T12:47:39Z
--- license: apache-2.0 base_model: - SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE - nbeerbower/bophades-mistral-7B library_name: transformers tags: - mergekit - merge --- ![image/png](https://huggingface.co/nbeerbower/Bophades-BruinsMaid-7B/resolve/main/bruinsmaid.png) # BophadesBruinsMaid-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE) * [nbeerbower/bophades-mistral-7B](https://huggingface.co/nbeerbower/bophades-mistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: nbeerbower/bophades-mistral-7B layer_range: [0, 32] - model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE layer_range: [0, 32] merge_method: slerp base_model: nbeerbower/bophades-mistral-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
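As a rough illustration of what the SLERP merge method does for each tensor (spherical rather than straight-line interpolation between two sets of weights), a small sketch follows. It is not mergekit's implementation and ignores details such as the per-filter `t` schedule in the configuration above.

```python
# Illustrative sketch of spherical linear interpolation (SLERP) between two weight tensors.
# Not mergekit's implementation; a real merge applies this per parameter with the t schedule above.
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a, b = w_a.flatten().float(), w_b.flatten().float()
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))  # angle between the two weight vectors
    if omega.abs() < eps:                             # nearly parallel: fall back to linear interpolation
        mixed = (1 - t) * a + t * b
    else:
        mixed = (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return mixed.reshape(w_a.shape)

w1, w2 = torch.randn(8, 8), torch.randn(8, 8)
print(slerp(w1, w2, t=0.5).shape)
```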
mahiatlinux/MasherAI-v6.1-7B-checkpoint2
mahiatlinux
2024-03-30T23:23:21Z
733
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:mahiatlinux/MasherAI-v6.2-7B-checkpoint1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-03-30T23:17:36Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: mahiatlinux/MasherAI-v6.2-7B-checkpoint1 --- # Uploaded model - **Developed by:** mahiatlinux - **License:** apache-2.0 - **Finetuned from model :** mahiatlinux/MasherAI-v6.2-7B-checkpoint1 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
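Since the card only notes that training used Unsloth and TRL, here is a hedged sketch of loading the checkpoint for inference with Unsloth; the sequence length and 4-bit flag are assumptions, and loading with plain `transformers` should also work.

```python
# Hedged sketch: loading the checkpoint with Unsloth for faster inference.
# max_seq_length and load_in_4bit are assumed settings, not values stated on this card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mahiatlinux/MasherAI-v6.1-7B-checkpoint2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```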
Isotonic/Mixnueza-6x32M-MoE
Isotonic
2024-04-07T01:25:56Z
733
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "mergekit", "lazymergekit", "Felladrin/Minueza-32M-Base", "Felladrin/Minueza-32M-UltraChat", "conversational", "dataset:Skylion007/openwebtext", "dataset:c4", "dataset:wikimedia/wikipedia", "dataset:tiiuae/falcon-refinedweb", "dataset:izumi-lab/open-text-books", "dataset:togethercomputer/RedPajama-Data-V2", "dataset:databricks/databricks-dolly-15k", "dataset:euclaise/reddit-instruct-curated", "dataset:CohereForAI/aya_dataset", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:Felladrin/ChatML-ultrachat_200k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-02T20:52:55Z
--- license: apache-2.0 tags: - moe - merge - mergekit - lazymergekit - Felladrin/Minueza-32M-Base - Felladrin/Minueza-32M-UltraChat datasets: - Skylion007/openwebtext - c4 - wikimedia/wikipedia - tiiuae/falcon-refinedweb - izumi-lab/open-text-books - togethercomputer/RedPajama-Data-V2 - databricks/databricks-dolly-15k - euclaise/reddit-instruct-curated - CohereForAI/aya_dataset - HuggingFaceH4/ultrachat_200k - Felladrin/ChatML-ultrachat_200k pipeline_tag: text-generation widget: - messages: - role: user content: Specs of a game about trolls and warriors in a fantasy world. - messages: - role: user content: Reducing waste generation is essential to... - messages: - role: user content: Water, planet, resource, future - messages: - role: user content: Background story of an RPG game about wizards and dragons in a sci-fi world. The story takes place in a... - messages: - role: system content: You are a career counselor. The user will provide you with an individual looking for guidance in their professional life, and your task is to assist them in determining what careers they are most suited for based on their skills, interests, and experience. You should also conduct research into the various options available, explain the job market trends in different industries, and advice on which qualifications would be beneficial for pursuing particular fields. - role: user content: Heya! - role: assistant content: Hi! How may I help you? - role: user content: I am interested in developing a career in software engineering. What would you recommend me to do? - messages: - role: user content: Morning! - role: assistant content: Good morning! How can I help you today? - role: user content: Could you give me some tips for becoming a healthier person? - messages: - role: user content: Write the specs of a game about mages in a fantasy world. - messages: - role: user content: Tell me about the pros and cons of social media. - messages: - role: system content: You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user. - role: user content: Hey! Got a question for you! - role: assistant content: Sure! What's it? - role: user content: What are some potential applications for quantum computing? --- 🌟 Buying me coffee is a direct way to show support for this project. 
<a href="https://www.buymeacoffee.com/isotonic"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

# Mixnueza-6x32M-MoE

Mixnueza-6x32M-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* 3 X [Felladrin/Minueza-32M-Base](https://huggingface.co/Felladrin/Minueza-32M-Base)
* 3 X [Felladrin/Minueza-32M-UltraChat](https://huggingface.co/Felladrin/Minueza-32M-UltraChat)
* [Evaluation Results](https://huggingface.co/datasets/open-llm-leaderboard/details_Isotonic__Mixnueza-6x32M-MoE)

## Recommended Prompt Format

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```

## Recommended Inference Parameters

```yml
do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
```

## Usage Example

```python
from transformers import pipeline

generate = pipeline("text-generation", "Isotonic/Mixnueza-6x32M-MoE")

messages = [
    {
        "role": "system",
        "content": "You are a highly knowledgeable and friendly assistant. Your goal is to understand and respond to user inquiries with clarity. Your interactions are always respectful, helpful, and focused on delivering the most accurate information to the user.",
    },
    {
        "role": "user",
        "content": "Hey! Got a question for you!",
    },
    {
        "role": "assistant",
        "content": "Sure! What's it?",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
```
Changgil/K2S3-Mistral-7b-v1.4
Changgil
2024-04-03T12:18:20Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-03T12:10:47Z
--- license: cc-by-nc-4.0 language: - en --- --- ## Developed by : * K2S3 ## Model Number: * K2S3-Mistral-7b-v1.4 ## Base Model : * mistralai/Mistral-7B-v0.1 ### Training Data * The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset. * 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다. ### Training Method * This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning). * 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다. ### Hardware * Hardware: Utilized two A100 (80G*2EA) GPUs for training. * Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp. * 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다.
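The training section mentions full-parameter SFT with the Hugging Face SFTTrainer and FSDP. Below is a minimal, assumed sketch of what such a setup looks like with TRL; the dataset id, sequence length, and optimizer settings are placeholders rather than the exact configuration used, and FSDP itself would normally be configured through an `accelerate` launch config.

```python
# Hedged sketch of an SFT run with TRL's SFTTrainer; an assumed reconstruction,
# not the actual K2S3 training script. FSDP is typically enabled via accelerate,
# not inside this snippet.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")  # placeholder dataset id

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes the dataset exposes a formatted "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="k2s3-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
    ),
)
trainer.train()
```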
dawn17/MistarlingMaid-2x7B-base
dawn17
2024-04-13T13:55:37Z
733
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-03T14:06:01Z
---
license: apache-2.0
---

```yaml
base_model: /Users/dawn/git/models/Mistral-7B-Instruct-v0.2
gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: /Users/dawn/git/models/Silicon-Maid-7B
    positive_prompts:
      - "roleplay"
  - source_model: /Users/dawn/git/models/Starling-LM-7B-beta
    positive_prompts:
      - "chat"
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

| Metric |Value|
|---------------------------------|----:|
|Avg. |68.01|
|AI2 Reasoning Challenge (25-Shot)|67.49|
|HellaSwag (10-Shot) |84.76|
|MMLU (5-Shot) |62.62|
|TruthfulQA (0-shot) |58.93|
|Winogrande (5-shot) |78.22|
|GSM8k (5-shot) |56.03|
netcat420/MFANNv0.3
netcat420
2024-04-05T04:46:03Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-04T03:45:16Z
--- library_name: transformers license: apache-2.0 --- 72.51 <- average 69.03 <- ARC 87.33 <- HellaSwag 63.61 <- MMLU 71.71 <- TruthfulQA 81.37 <- Winogrande 62.02 <- GSM8K System: {{ .System }} User: {{ %1 }} Assistant: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/RANPTxjpWLro53nunsRON.png) gguf: https://huggingface.co/netcat420/MFANNv0.3-GGUF
G-reen/EXPERIMENT-SFT-m7b2-2-merged
G-reen
2024-04-15T21:12:39Z
733
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-05T06:37:30Z
--- license: "apache-2.0" --- *This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.* **Benchmarks** Average 59.59 ARC 59.3 HellaSwag 82.47 MMLU 62.42 TruthfulQA 40.25 Winogrande 78.53 GSM8K 34.57 **Training Details** Duration: ~6-8 hours on one Kaggle T4 with Unsloth Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k Rank: 8 Alpha: 16 Learning rate: 5e-5 Batch size: 8 Epochs: 1 Learning rate scheduler: Linear Prompt Format: ChatML ``` <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user Why is the sky blue?<|im_end|> <|im_start|>assistant ``` **WanDB Reports** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/8TipsR_19SjR3rU4FWNyt.png) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
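Because the card specifies the ChatML prompt format, a short hedged sketch of querying the merged checkpoint with that format is included below; the loading details (device map, quantization backend) are assumptions.

```python
# Hedged sketch: querying the merged checkpoint with the ChatML format listed above.
# Device/quantization details are assumptions; the repo is stored in 4-bit, so the
# bitsandbytes package is expected to be available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "G-reen/EXPERIMENT-SFT-m7b2-2-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Why is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```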
jambroz/sixtyoneeighty-4x7B-v1
jambroz
2024-04-05T11:29:03Z
733
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "jambroz/sixtyoneeighty-7b-chat", "conversational", "base_model:jambroz/sixtyoneeighty-7b-chat", "base_model:NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story", "base_model:S-miguel/The-Trinity-Coder-7B", "base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-05T11:04:08Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - jambroz/sixtyoneeighty-7b-chat base_model: - jambroz/sixtyoneeighty-7b-chat - NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story - S-miguel/The-Trinity-Coder-7B - chihoonlee10/T3Q-Mistral-Orca-Math-DPO --- # sixtyoneeighty-7b-MOE sixtyoneeighty-7b-MOE is a Mixure of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [jambroz/sixtyoneeighty-7b-chat](https://huggingface.co/jambroz/sixtyoneeighty-7b-chat) * [NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story) * [S-miguel/The-Trinity-Coder-7B](https://huggingface.co/S-miguel/The-Trinity-Coder-7B) * [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO) ## 🧩 Configuration ```yaml base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 gate_mode: hidden dtype: bfloat16 experts_per_token: 2 experts: - source_model: jambroz/sixtyoneeighty-7b-chat positive_prompts: - "What are some fun activities to do in Seattle?" - "What are some fun historical facts about New York City?" negative_prompts: - "Write a Python script to scrape data from a website." - "Explain the key differences between Bayesian and frequentist statistics." - source_model: NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story positive_prompts: - "Write me a fictional story about dragons and wizards?" - "From now on take on the role of Dwayne Johnson" negative_prompts: - "When is the next solar eclipse." - "What year did World War II end?" - source_model: S-miguel/The-Trinity-Coder-7B positive_prompts: - "Can you review my JavaScript code and suggest ways to optimize it for better performance?" - "I'm getting an 'undefined variable' error in my Python script. Here's the code: [code snippet]" negative_prompts: - "What are some effective strategies for managing stress and anxiety?" - "Compare and contrast the themes in 'The Great Gatsby' and 'The Catcher in the Rye'." - source_model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO positive_prompts: - "What's a square root of 1337?" - "Find the midpoint of the line segment with the given end points (-5,7) and (-2,1)" negative_prompts: - "What are some effective strategies for managing stress and anxiety?" - "Compare and contrast the themes in 'The Great Gatsby' and 'The Catcher in the Rye'." ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "jambroz/sixtyoneeighty-7b-MOE" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Josephgflowers/Tinyllama-1.5B-Cinder-Test-3
Josephgflowers
2024-04-06T00:13:38Z
733
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-05T23:30:53Z
--- license: mit --- This is a depth up-scaled model of the 616M Cinder model and Cinder v2. This model still needs further training. Putting it up for testing. More information coming. Maybe. Lol. Here is a brief description of the project: I'm mixing a lot of techniques that I found interesting and have been testing. HF Cosmo is not great but decent, and was fully trained in 4 days using a mix of more fine-tuned, directed datasets and some synthetic textbook-style datasets. So I used pruning and a similar mix as Cosmo on TinyLlama (which was trained on a ton of data for an extended time for its size) to keep the TinyLlama model coherent during pruning. Now I am trying to depth up-scale it using my pruned model and an original, then taking a majority of each and combining them to create a larger model. Then it needs more training, then fine-tuning. Then, theoretically, it will be a well-performing 1.5B model (that didn't need full-scale training). Test 2: some training, then re-depth-upscaled with Cinder Reason 1.3B and merged back with the 1.5B, plus slight training. Continuing training from this model for the next iteration. This is the next iteration, with one pass on MetaMath and step-by-step reasoning with Cinder.
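For readers unfamiliar with depth up-scaling, the sketch below illustrates the basic idea: build a deeper network by stacking layer ranges taken from two (copies of) donor models, then continue training. The layer indices and model ids are placeholders, and this is a simplification rather than the exact recipe used for this checkpoint.

```python
# Illustrative sketch of depth up-scaling: stack layer ranges from two donors into a
# deeper model. Layer indices and model ids are placeholders, not the recipe used here.
# In practice, attention layer indices / caches may also need re-numbering afterwards.
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

donor_a = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
donor_b = copy.deepcopy(donor_a)  # stands in for the pruned / Cinder variant

# Take the first 16 blocks from one donor and the last 16 from the other (22 blocks each).
new_layers = list(donor_a.model.layers[:16]) + list(donor_b.model.layers[-16:])

upscaled = donor_a  # reuse embeddings, final norm, and lm_head from donor_a
upscaled.model.layers = nn.ModuleList(new_layers)
upscaled.config.num_hidden_layers = len(new_layers)

print(upscaled.config.num_hidden_layers, sum(p.numel() for p in upscaled.parameters()))
```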
kttgms/test1
kttgms
2024-06-17T12:01:52Z
733
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-3-medium", "license:unlicense", "region:us" ]
text-to-image
2024-06-13T12:33:13Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: image (9).webp base_model: stabilityai/stable-diffusion-3-medium instance_prompt: manaka laala license: unlicense --- # test <Gallery /> ## Model description test ## Trigger words You should use `manaka laala` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/kttgms/test1/tree/main) them in the Files & versions tab.
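A hedged sketch of using the weights with diffusers is shown below. It assumes the diffusers-format Stable Diffusion 3 Medium repository and a diffusers-compatible LoRA file, which may not match how this particular file was exported; the base checkpoint is also gated and requires accepting its license.

```python
# Hedged sketch: applying the LoRA on top of Stable Diffusion 3 Medium with diffusers.
# Assumes the diffusers-format SD3 repo and a diffusers-compatible LoRA file; exact
# file naming/export format for this repo is not guaranteed by the card.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
)
pipe.load_lora_weights("kttgms/test1")
pipe.to("cuda")

image = pipe("manaka laala, portrait, detailed", num_inference_steps=28).images[0]
image.save("test1_sample.png")
```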
mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF
mradermacher
2024-06-26T20:38:16Z
733
0
transformers
[ "transformers", "gguf", "yi", "moe", "en", "base_model:cloudyu/Yi-34Bx2-MoE-60B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-14T16:30:10Z
--- base_model: cloudyu/Yi-34Bx2-MoE-60B language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher tags: - yi - moe --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/cloudyu/Yi-34Bx2-MoE-60B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ1_S.gguf) | i1-IQ1_S | 12.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ1_M.gguf) | i1-IQ1_M | 14.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 16.3 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 18.1 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ2_S.gguf) | i1-IQ2_S | 18.8 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ2_M.gguf) | i1-IQ2_M | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q2_K.gguf) | i1-Q2_K | 22.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 23.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 25.1 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 26.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ3_S.gguf) | i1-IQ3_S | 26.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ3_M.gguf) | i1-IQ3_M | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 29.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 31.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 32.6 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q4_0.gguf) | i1-Q4_0 | 34.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 34.7 | optimal size/speed/quality 
| | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 36.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 42.0 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 43.2 | | | [GGUF](https://huggingface.co/mradermacher/Yi-34Bx2-MoE-60B-i1-GGUF/resolve/main/Yi-34Bx2-MoE-60B.i1-Q6_K.gguf) | i1-Q6_K | 50.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
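As a concrete starting point, in addition to the linked READMEs, a hedged sketch of loading one of these quants with llama-cpp-python follows; the local file name, context size, and GPU offload values are assumptions.

```python
# Hedged sketch: running one of the GGUF quants with llama-cpp-python.
# File name, n_ctx, and n_gpu_layers are assumptions; multi-part quants must be
# concatenated first, as described in the READMEs linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="Yi-34Bx2-MoE-60B.i1-Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if VRAM allows; set 0 for CPU-only
)

out = llm("Explain what a mixture-of-experts model is in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```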
neopolita/instructlm-500m-gguf
neopolita
2024-06-30T04:45:46Z
733
0
null
[ "gguf", "region:us" ]
null
2024-06-30T04:42:35Z
--- {} --- # GGUF quants for [**instruction-pretrain/InstructLM-500M**](https://huggingface.co/instruction-pretrain/InstructLM-500M) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/instruction-pretrain/InstructLM-500M) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. * `q4_k_s`: Uses Q4_K for all tensors * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * `q5_0`: Higher accuracy, higher resource usage and slower inference. * `q5_1`: Even higher accuracy, resource usage and slower inference. * `q5_k_s`: Uses Q5_K for all tensors * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * `q6_k`: Uses Q8_K for all tensors * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
facebook/wav2vec2-large-960h-lv60
facebook
2022-04-05T16:42:07Z
732
7
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "speech", "en", "dataset:librispeech_asr", "arxiv:2006.11477", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 model-index: - name: wav2vec2-large-960h-lv60 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Librispeech (clean) type: librispeech_asr args: en metrics: - name: Test WER type: wer value: 2.2 --- # Wav2Vec2-Large-960h-Lv60 [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The large model pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60** on LibriSpeech's "clean" and "other" test data. 
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60")

def map_to_pred(batch):
    # batched map: batch["audio"] is a list of {"array", "sampling_rate", ...} dicts
    audio_arrays = [sample["array"] for sample in batch["audio"]]
    inputs = processor(audio_arrays, sampling_rate=16_000, return_tensors="pt", padding="longest")
    input_values = inputs.input_values.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=16, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 2.2 | 4.5 |
TencentARC/t2iadapter_openpose_sd14v1
TencentARC
2023-07-31T11:12:29Z
732
1
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
2023-07-14T19:01:22Z
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Openpose T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on openpose for the stable diffusion 1.4 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers controlnet_aux ``` 2. 
Run code: ```python from PIL import Image from diffusers import T2IAdapter, StableDiffusionAdapterPipeline import torch from controlnet_aux import OpenposeDetector openpose = OpenposeDetector.from_pretrained('lllyasviel/ControlNet') image = Image.open('./images/openpose_input.png') image = openpose(image) image.save('./images/openpose.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_openpose_sd14v1", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') generator = torch.Generator().manual_seed(1) openpose_out = pipe(prompt="iron man flying", image=image, generator=generator).images[0] openpose_out.save('./images/openpose_out.png') ``` ![openpose_input](./images/openpose_input.png) ![openpose](./images/openpose.png) ![openpose_out](./images/openpose_out.png)
masterful/gligen-1-4-generation-text-box
masterful
2023-08-18T22:32:17Z
732
8
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:2301.07093", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-08T21:16:55Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image widget: - text: "A high tech solarpunk utopia in the Amazon rainforest" example_title: Amazon rainforest - text: "A pikachu fine dining with a view to the Eiffel Tower" example_title: Pikachu in Paris - text: "A mecha robot in a favela in expressionist style" example_title: Expressionist robot - text: "an insect robot preparing a delicious meal" example_title: Insect robot - text: "A small cabin on top of a snowy mountain in the style of Disney, artstation" example_title: Snowy disney cabin extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # GLIGEN: Open-Set Grounded Text-to-Image Generation The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN). The [`StableDiffusionGLIGENPipeline`] can generate photorealistic images conditioned on grounding inputs. Along with text and bounding boxes, if input images are given, this pipeline can insert objects described by text at the region defined by bounding boxes. Otherwise, it'll generate an image described by the caption/prompt and insert objects described by text at the region defined by bounding boxes. It's trained on COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs. This weights here are intended to be used with the 🧨 Diffusers library. If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organizations! ## Model Details - **Developed by:** Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, Yong Jae Lee - **Model type:** Diffusion-based Grounded Text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts and bounding boxes. 
It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/gligen/GLIGEN), [Paper](https://arxiv.org/pdf/2301.07093.pdf). - **Cite as:** @article{li2023gligen, author = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae}, title = {GLIGEN: Open-Set Grounded Text-to-Image Generation}, publisher = {arXiv:2301.07093}, year = {2023}, } ## Examples We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run GLIGEN. ### PyTorch ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from diffusers import StableDiffusionGLIGENPipeline from diffusers.utils import load_image # Generate an image described by the prompt and # insert objects described by text at the region defined by bounding boxes pipe = StableDiffusionGLIGENPipeline.from_pretrained( "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16 ) pipe = pipe.to("cuda") prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage" boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]] phrases = ["a waterfall", "a modern high speed train running through the tunnel"] images = pipe( prompt=prompt, gligen_phrases=phrases, gligen_boxes=boxes, gligen_scheduled_sampling_beta=1, output_type="pil", num_inference_steps=50, ).images images[0].save("./gligen-1-4-generation-text-box.jpg") ``` # Sample Output <img src="https://datasets-server.huggingface.co/assets/masterful/GLIGEN/--/default/train/2/image/image.jpg" alt="gen-output-1" width="640"/> <img src="https://datasets-server.huggingface.co/assets/masterful/GLIGEN/--/default/train/3/image/image.jpg" alt="gen-output-1" width="640"/> # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to GLIGEN. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. 
This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training Refer [`GLIGEN`](https://github.com/gligen/GLIGEN) for more details. 
## Citation ```bibtex @article{li2023gligen, author = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae}, title = {GLIGEN: Open-Set Grounded Text-to-Image Generation}, publisher = {arXiv:2301.07093}, year = {2023}, } ``` *This model card was written by: [Nikhil Gajendrakumar](https://github.com/nikhil-masterful) and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
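The `gligen_boxes` passed to the pipeline in the example above are normalized `[xmin, ymin, xmax, ymax]` fractions of the canvas rather than pixel coordinates. As a small, optional sketch (assuming the default 512×512 output size), the boxes can be scaled back to pixels and overlaid on the generated image to inspect the grounding:

```python
from PIL import Image, ImageDraw

def draw_boxes(image_path, boxes, phrases, size=512):
    # Scale normalized [xmin, ymin, xmax, ymax] boxes to pixel coordinates and draw them.
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for (x0, y0, x1, y1), phrase in zip(boxes, phrases):
        box_px = [x0 * size, y0 * size, x1 * size, y1 * size]
        draw.rectangle(box_px, outline="red", width=3)
        draw.text((box_px[0] + 4, box_px[1] + 4), phrase, fill="red")
    return image

boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
phrases = ["a waterfall", "a modern high speed train running through the tunnel"]
draw_boxes("./gligen-1-4-generation-text-box.jpg", boxes, phrases).save("./gligen-boxes-overlay.jpg")
```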
TheBloke/speechless-code-mistral-7B-v1.0-GGUF
TheBloke
2023-10-13T06:17:08Z
732
9
transformers
[ "transformers", "gguf", "mistral", "llama-2", "code", "text-generation", "en", "dataset:jondurbin/airoboros-2.2", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:TokenBender/python_eval_instruct_51k", "base_model:uukuguy/speechless-code-mistral-7b-v1.0", "license:llama2", "model-index", "text-generation-inference", "region:us" ]
text-generation
2023-10-13T06:07:41Z
--- base_model: uukuguy/speechless-code-mistral-7b-v1.0 datasets: - jondurbin/airoboros-2.2 - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus - WizardLM/WizardLM_evol_instruct_V2_196k - TokenBender/python_eval_instruct_51k inference: false language: - en library_name: transformers license: llama2 model-index: - name: SpeechlessCoder results: - dataset: name: HumanEval type: openai_humaneval metrics: - name: pass@1 type: pass@1 value: 0.0 verified: false task: type: text-generation model_creator: Jiangwen Su model_name: Speechless Code Mistral 7B v1.0 model_type: mistral pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: TheBloke tags: - llama-2 - code --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Speechless Code Mistral 7B v1.0 - GGUF - Model creator: [Jiangwen Su](https://huggingface.co/uukuguy) - Original model: [Speechless Code Mistral 7B v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [Jiangwen Su's Speechless Code Mistral 7B v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF) * [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
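As a quick sanity check of the 4.5 bpw figure quoted for GGML_TYPE_Q4_K, the arithmetic below reproduces it from the block layout described above (the 16-bit super-block scale and min are an assumption of this sketch, based on the llama.cpp k-quant structures):

```python
# Q4_K: super-blocks of 8 blocks, each block holding 32 weights.
weights_per_superblock = 8 * 32
quant_bits = weights_per_superblock * 4   # 4-bit quantized weights
scale_min_bits = 8 * 2 * 6                # per-block scale and min, 6 bits each
super_scale_bits = 2 * 16                 # fp16 super-block scale and min (assumed)
bpw = (quant_bits + scale_min_bits + super_scale_bits) / weights_per_superblock
print(bpw)  # 4.5 bits per weight, matching the figure above
```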
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [speechless-code-mistral-7b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [speechless-code-mistral-7b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [speechless-code-mistral-7b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [speechless-code-mistral-7b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [speechless-code-mistral-7b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [speechless-code-mistral-7b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [speechless-code-mistral-7b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [speechless-code-mistral-7b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [speechless-code-mistral-7b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [speechless-code-mistral-7b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [speechless-code-mistral-7b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [speechless-code-mistral-7b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/speechless-code-mistral-7B-v1.0-GGUF/blob/main/speechless-code-mistral-7b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/speechless-code-mistral-7B-v1.0-GGUF and below it, a specific filename to download, such as: speechless-code-mistral-7b-v1.0.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/speechless-code-mistral-7B-v1.0-GGUF speechless-code-mistral-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/speechless-code-mistral-7B-v1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/speechless-code-mistral-7B-v1.0-GGUF speechless-code-mistral-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m speechless-code-mistral-7b-v1.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
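Only the ctransformers route is documented below; as a hedged sketch, loading the same GGUF file with llama-cpp-python might look like this (the context length and GPU layer count are illustrative, not recommendations):

```python
from llama_cpp import Llama

# Point model_path at the downloaded GGUF file; set n_gpu_layers=0 for CPU-only inference.
llm = Llama(
    model_path="./speechless-code-mistral-7b-v1.0.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)
output = llm("Write a Python function that reverses a string.", max_tokens=256)
print(output["choices"][0]["text"])
```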
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/speechless-code-mistral-7B-v1.0-GGUF", model_file="speechless-code-mistral-7b-v1.0.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jiangwen Su's Speechless Code Mistral 7B v1.0 <p><h1> speechless-code-mistral-7b-v1.0 </h1></p> Use the following dataset to fine-tune mistralai/Mistral-7B-v0.1 in order to improve the model's reasoning and planning abilities. Total 201,981 samples. - jondurbin/airoboros-2.2: Filter categories related to coding, reasoning and planning. 23,462 samples. - Open-Orca/OpenOrca: Filter the 'cot' category in 1M GPT4 dataset. 74,440 samples. - garage-bAInd/Open-Platypus: 100%, 24,926 samples. - WizardLM/WizardLM_evol_instruct_V2_196k: Coding coversation part. 30,185 samples - TokenBender/python_eval_instruct_51k: “python” in output .40,309 samples - Spider: 8,659 samples | | | |------ | ------ | | lr | 2e-4 | | lr_scheduler_type | cosine | | weight_decay | 0.0 | | optim | paged_adamw_8bit | | flash_attention | True | | rerope | False | | max_new_tokens | 4096 | | num_train_epochs | 2 | | bits | 4 | | lora_r | 64 | | lora_alpha | 16 | | lora_dropout | 0.05 | | double_quant | True | | quant_type | nf4 | | dataset_format | airoboros | | mini_batch_size | 2 | | grandient_accumulation_steps | 32 | | bf16 | True | A40-48G x 2 | | | |------ | ------ | | epoch | 2.0 | | etrain_loss | 0.5 | | etrain_runtime | 1 day, 10:25:26.77 | | etrain_samples_per_second | 3.194 | | etrain_steps_per_second | 0.025 | | eeval_loss | 0.5146 | | eeval_runtime | 0:00:25.04 | | eeval_samples_per_second | 7.985 | | eeval_steps_per_second | | | Metric | Value | | --- | --- | | humaneval-python || [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard) CodeLlama-34B-Python: 53.29 CodeLlama-34B-Instruct: 50.79 CodeLlama-13B-Instruct: 50.6 CodeLlama-34B: 45.11 CodeLlama-13B-Python: 42.89 CodeLlama-13B: 35.07 [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric | Value | | --- | --- | | ARC | | | HellaSwag | | | MMLU | | | TruthfulQA | | | Average | | <!-- original-model-card end -->
Q-bert/Merged-AGI-7B
Q-bert
2023-12-24T12:41:18Z
732
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Math", "merge", "en", "dataset:meta-math/MetaMathQA", "base_model:Q-bert/MetaMath-Cybertron-Starling", "base_model:fblgit/juanako-7b-UNA", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-10T09:20:47Z
--- license: cc-by-nc-4.0 datasets: - meta-math/MetaMathQA language: - en pipeline_tag: text-generation tags: - Math - merge base_model: - Q-bert/MetaMath-Cybertron-Starling - fblgit/juanako-7b-UNA --- ## Merged-AGI-7B Merge of [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) and [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) using a SLERP merge. You can use the ChatML format. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [Coming soon]() | Metric | Value | |-----------------------|---------------------------| | Avg. | Coming soon | | ARC (25-shot) | Coming soon | | HellaSwag (10-shot) | Coming soon | | MMLU (5-shot) | Coming soon | | TruthfulQA (0-shot) | Coming soon | | Winogrande (5-shot) | Coming soon | | GSM8K (5-shot) | Coming soon |
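Since the card notes ChatML support, here is a minimal usage sketch (it assumes the repository's tokenizer ships a ChatML chat template; the dtype and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Q-bert/Merged-AGI-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a ChatML-formatted prompt from a chat message list.
messages = [{"role": "user", "content": "What is 12 * 17? Explain step by step."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```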
Ichsan2895/Merak-7B-v5-PROTOTYPE1
Ichsan2895
2023-12-11T00:33:53Z
732
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "id", "en", "dataset:wikipedia", "dataset:Ichsan2895/OASST_Top1_Indonesian", "dataset:Ichsan2895/alpaca-gpt4-indonesian", "arxiv:2306.02707", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-10T23:22:47Z
--- datasets: - wikipedia - Ichsan2895/OASST_Top1_Indonesian - Ichsan2895/alpaca-gpt4-indonesian language: - id - en pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://huggingface.co/Ichsan2895/Merak-7B-v4/resolve/main/FINAL_LOGO/6.png" alt="MERAK" style="width: 50%; min-width: 100px; display: block; margin: auto;"> </div> # THIS IS 1st PROTOTYPE OF MERAK-7B-v5! Merak-7B is a Large Language Model for the Indonesian language. This model is based on [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) and fine-tuned on Indonesian Wikipedia articles that I cleaned beforehand. Leveraging QLoRA (QLoRA: Efficient Finetuning of Quantized LLMs), Merak-7B is able to run with 16 GB VRAM. We also use the DPO Trainer from the TRL library for RLHF. Licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC-BY-NC-SA 4.0). Merak-7B empowers AI enthusiasts and researchers alike. Big thanks to all my friends and communities that helped to build our first model. Thanks to Axolotl for a great fine-tuning tool, which is designed to streamline the fine-tuning of various AI models. Feel free to ask me about the model, and please share the news on your social media. ## CITATION ``` @software{lian2023mistralorca1, title = {MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca}}, } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @ONLINE{wikidump, author = "Wikimedia Foundation", title = "Wikimedia Downloads", url = "https://dumps.wikimedia.org" } @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M.
Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } @misc{vonwerra2022trl, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang}, title = {TRL: Transformer Reinforcement Learning}, year = {2020}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/huggingface/trl}} } @article{dettmers2023qlora, title = {QLoRA: Efficient Finetuning of Quantized LLMs}, author = {Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke}, journal = {arXiv preprint arXiv:2305.14314}, year = {2023} } ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## HOW TO CITE THIS PROJECT If you use the Merak-7B model in your research or project, please cite it as: ``` @article{Merak, title={Merak-7B: The LLM for Bahasa Indonesia}, author={Muhammad Ichsan}, publisher={Hugging Face} journal={Hugging Face Repository}, year={2023} } ```
sethuiyer/Dr_Samantha-7b
sethuiyer
2024-03-07T07:18:48Z
732
22
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "medical", "en", "zh", "dataset:GBaker/MedQA-USMLE-4-options", "dataset:cognitivecomputations/samantha-data", "dataset:shibing624/medical", "base_model:Severus27/BeingWell_llama2_7b", "base_model:ParthasarathyShanmugam/llama-2-7b-samantha", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-29T10:07:37Z
--- language: - en - zh license: llama2 library_name: transformers tags: - llama - merge - medical datasets: - GBaker/MedQA-USMLE-4-options - cognitivecomputations/samantha-data - shibing624/medical base_model: - Severus27/BeingWell_llama2_7b - ParthasarathyShanmugam/llama-2-7b-samantha pipeline_tag: text-generation model-index: - name: Dr_Samantha-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 53.84 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 77.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 47.94 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.58 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 18.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Dr_Samantha-7b name: Open LLM Leaderboard --- # Dr. Samantha <p align="center"> <img src="https://huggingface.co/sethuiyer/Dr_Samantha-7b/resolve/main/dr_samantha_anime_style_reduced_quality.webp" height="256px" alt="SynthIQ"> </p> ## Overview Dr. Samantha is a language model made by merging `Severus27/BeingWell_llama2_7b` and `ParthasarathyShanmugam/llama-2-7b-samantha` using [mergekit](https://github.com/cg123/mergekit). Has capabilities of a medical knowledge-focused model (trained on USMLE databases and doctor-patient interactions) with the philosophical, psychological, and relational understanding of the Samantha-7b model. As both a medical consultant and personal counselor, Dr.Samantha could effectively support both physical and mental wellbeing - important for whole-person care. 
# Yaml Config ```yaml slices: - sources: - model: Severus27/BeingWell_llama2_7b layer_range: [0, 32] - model: ParthasarathyShanmugam/llama-2-7b-samantha layer_range: [0, 32] merge_method: slerp base_model: TinyPixel/Llama-2-7B-bf16-sharded parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors tokenizer_source: union dtype: bfloat16 ``` ## Prompt Template ```text Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: What is your name? ### Response: My name is Samantha. ``` ## ⚡ Quantized models * **GGUF**:https://huggingface.co/TheBloke/Dr_Samantha-7B-GGUF * **GPTQ**: https://huggingface.co/TheBloke/Dr_Samantha-7B-GPTQ * **AWQ**: https://huggingface.co/TheBloke/Dr_Samantha-7B-AWQ Thanks to [TheBloke](https://huggingface.co/TheBloke) for making this available! Dr.Samantha is now available on Ollama. You can use it by running the command ```ollama run stuehieyr/dr_samantha``` in your terminal. If you have limited computing resources, check out this [video](https://www.youtube.com/watch?v=Qa1h7ygwQq8) to learn how to run it on a Google Colab backend. ## OpenLLM Leaderboard Performance | T | Model | Average | ARC | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K | |---|----------------------------------|---------|-------|-----------|-------|------------|------------|-------| | 1 | sethuiyer/Dr_Samantha-7b | 52.95 | 53.84 | 77.95 | 47.94 | 45.58 | 73.56 | 18.8 | | 2 | togethercomputer/LLaMA-2-7B-32K-Instruct | 50.02 | 51.11 | 78.51 | 46.11 | 44.86 | 73.88 | 5.69 | | 3 | togethercomputer/LLaMA-2-7B-32K | 47.07 | 47.53 | 76.14 | 43.33 | 39.23 | 71.9 | 4.32 | ## Subject-wise Accuracy | Subject | Accuracy (%) | |-----------------------|--------------| | Clinical Knowledge | 52.83 | | Medical Genetics | 49.00 | | Human Aging | 58.29 | | Human Sexuality | 55.73 | | College Medicine | 38.73 | | Anatomy | 41.48 | | College Biology | 52.08 | | College Medicine | 38.73 | | High School Biology | 53.23 | | Professional Medicine | 38.73 | | Nutrition | 50.33 | | Professional Psychology | 46.57 | | Virology | 41.57 | | High School Psychology | 66.60 | | Average | 48.85% | ## Evaluation by GPT-4 across 25 random prompts from ChatDoctor-200k Dataset ### Overall Rating: 83.5/100 #### Pros: - Demonstrates extensive medical knowledge through accurate identification of potential causes for various symptoms. - Responses consistently emphasize the importance of seeking professional diagnoses and treatments. - Advice to consult specialists for certain concerns is well-reasoned. - Practical interim measures provided for symptom management in several cases. - Consistent display of empathy, support, and reassurance for patients' well-being. - Clear and understandable explanations of conditions and treatment options. - Prompt responses addressing all aspects of medical inquiries. #### Cons: - Could occasionally place stronger emphasis on urgency when symptoms indicate potential emergencies. - Discussion of differential diagnoses could explore a broader range of less common causes. - Details around less common symptoms and their implications need more depth at times. - Opportunities exist to gather clarifying details on symptom histories through follow-up questions. - Consider exploring full medical histories to improve diagnostic context where relevant. - Caution levels and risk factors associated with certain conditions could be underscored more. 
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__Dr_Samantha-7b) | Metric |Value| |---------------------------------|----:| |Avg. |52.95| |AI2 Reasoning Challenge (25-Shot)|53.84| |HellaSwag (10-Shot) |77.95| |MMLU (5-Shot) |47.94| |TruthfulQA (0-shot) |45.58| |Winogrande (5-shot) |73.56| |GSM8k (5-shot) |18.80|
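Putting the Alpaca-style prompt template from above into practice, a minimal inference sketch might look like this (the generation settings are illustrative and not taken from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sethuiyer/Dr_Samantha-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt matching the template shown in the card.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat lifestyle changes can help with mild hypertension?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```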
superlazycoder/NeuralPipe-7B-slerp
superlazycoder
2024-03-06T08:04:08Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "base_model:OpenPipe/mistral-ft-optimized-1218", "base_model:mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-11T22:24:06Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B base_model: - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B model-index: - name: NeuralPipe-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.17 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.06 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 59.84 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.23 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp name: Open LLM Leaderboard --- # NeuralPipe-7B-slerp NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "superlazycoder/NeuralPipe-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, 
add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_superlazycoder__NeuralPipe-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |71.01| |AI2 Reasoning Challenge (25-Shot)|67.58| |HellaSwag (10-Shot) |86.17| |MMLU (5-Shot) |64.06| |TruthfulQA (0-shot) |59.84| |Winogrande (5-shot) |80.19| |GSM8k (5-shot) |68.23|
andrijdavid/tinyllama-dare
andrijdavid
2024-03-22T10:44:17Z
732
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "base_model:aihub-app/zyte-1B", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "base_model:BEE-spoke-data/TinyLlama-3T-1.1bee", "base_model:sreeramajay/TinyLlama-1.1B-orca-v1.0", "base_model:vihangd/DopeyTinyLlama-1.1B-v1", "base_model:kevin009/lamatama", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-19T18:26:52Z
--- language: - en license: apache-2.0 tags: - merge - llama base_model: - aihub-app/zyte-1B - TinyLlama/TinyLlama-1.1B-Chat-v1.0 - BEE-spoke-data/TinyLlama-3T-1.1bee - sreeramajay/TinyLlama-1.1B-orca-v1.0 - vihangd/DopeyTinyLlama-1.1B-v1 - kevin009/lamatama model-index: - name: tinyllama-dare results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 37.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andrijdavid/tinyllama-dare name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 62.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andrijdavid/tinyllama-dare name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.2 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andrijdavid/tinyllama-dare name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 39.01 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andrijdavid/tinyllama-dare name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 65.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andrijdavid/tinyllama-dare name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 1.67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andrijdavid/tinyllama-dare name: Open LLM Leaderboard --- # TinyLlama Merge ### Models Merged The following models were included in the merge: * [aihub-app/zyte-1B](https://huggingface.co/aihub-app/zyte-1B) * [BEE-spoke-data/TinyLlama-3T-1.1bee](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) * [sreeramajay/TinyLlama-1.1B-orca-v1.0](https://huggingface.co/sreeramajay/TinyLlama-1.1B-orca-v1.0) * [vihangd/DopeyTinyLlama-1.1B-v1](https://huggingface.co/vihangd/DopeyTinyLlama-1.1B-v1) * [kevin009/lamatama](https://huggingface.co/kevin009/lamatama) # Disclaimer The users of this model (hereinafter referred to as "the Model") should be aware of the following points prior to using the Model: * Factual Inaccuracy: The Model has been trained on a diverse set of data sources. However, it may occasionally generate outputs that contain factual inaccuracies. Users are advised to verify the factual correctness of the information provided by the Model, especially when using it for critical applications. * Potential Biases: Despite efforts to ensure fairness and neutrality, the Model may still exhibit biases inherent in the training data. These biases do not reflect the views or intentions of the developers. 
Users should be cautious of these potential biases, especially when using the Model in contexts where fairness and impartiality are crucial. * Hallucination: The Model might generate content that is not grounded in reality, commonly referred to as "hallucinations." This is a known limitation of language models and should be considered when interpreting the Model's outputs. * Inappropriate Content: While measures have been taken to minimize the risk, the Model might still generate content that could be deemed offensive, inappropriate, or not suitable for all audiences. We advise users to exercise discretion and consider the context of use. + Use at Your Own Risk: The Model is provided "as is," and the developers make no representations or warranties of any kind concerning the Model's performance or suitability for any particular purpose. The user assumes full responsibility and risk of loss resulting from using the Model. By using the Model, users acknowledge and agree to the terms stated in this disclaimer. This disclaimer is subject to change without notice, and the latest version can be found on the Model's Hugging Face page. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_andrijdavid__tinyllama-dare) | Metric |Value| |---------------------------------|----:| |Avg. |38.64| |AI2 Reasoning Challenge (25-Shot)|37.29| |HellaSwag (10-Shot) |62.78| |MMLU (5-Shot) |25.20| |TruthfulQA (0-shot) |39.01| |Winogrande (5-shot) |65.90| |GSM8k (5-shot) | 1.67|
ChuckMcSneed/SMaxxxer-v1-70b
ChuckMcSneed
2024-03-04T13:42:38Z
732
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-21T12:12:12Z
--- license: llama2 tags: - merge - mergekit model-index: - name: SMaxxxer-v1-70b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.65 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 70.55 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 60.7 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.87 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 60.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/SMaxxxer-v1-70b name: Open LLM Leaderboard --- # BABE WAKE UP NEW MEME MODELS JUST DROPPED Ladies and Gentlemen! I present to you *drum roll* THE BENCHBREAKERS! - [PMaxxxer](https://huggingface.co/ChuckMcSneed/PMaxxxer-v1-70b) (The Good) - [SMaxxxer](https://huggingface.co/ChuckMcSneed/SMaxxxer-v1-70b) (The Bad) - [BenchmaxxxerPS](https://huggingface.co/ChuckMcSneed/BenchmaxxxerPS-v1-123b) (The Ugly) These three **interesting** models were designed in attempt to break [my own meme benchmark](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) and well... they failed. The results are interesting nontheless. # SMAXXXER The aggressor, the angry and dumb hobo that will roleplay with you. This meme model was designed to break the stylized writing test, and it kinda did, still can't surpass ChatGPT though. For its creation [lzlv](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) was TIES-merged with [spicyboros](https://huggingface.co/jondurbin/spicyboros-70b-2.2), [xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) using [mergekit](https://github.com/cg123/mergekit). # PMAXXXER The overly politically correct SJW university dropout, the failed writer that's not really good at anything. This meme model was designed to break the poems test and it's an absolute failure. 
For its creation [WinterGoddess](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2) was TIES-merged with [euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) using [mergekit](https://github.com/cg123/mergekit). # BENCHMAXXXER PS The true meme model. Goliath-style frankenmerge of SMAXXXER and PMAXXXER. You might think: "Oh it's a frankenmerge, the characteristics of the models will even out, right?" This is completely wrong in this case, here characteristics of the models add up. You get an angry hobo stuck with an SJW in the same fucking body! It will assault you and then immediately apologize for it! Then it will assault you again! And apologize again! Kinda funny. It also has a bit different writing style compared to Goliath. Is it worth using over Goliath? Not really. However, if you have fast internet and patience to try a 123b meme model, go for it! # FAILED MODELS(not gonna upload) ## BENCHMAXXXER SP Frankenmerge of SMAXXXER and PMAXXXER, just like BENCHMAXXXER PS, but in different order. Has severe brain damage, clearly the influence of the hobo is strong in this one. ## BENCHMAXXXER SS Self-merge of SMAXXXER, a bit less dumb and a bit less aggresive than the original SMAXXER. ## BENCHMAXXXER MOE 2x70B MOE merge of SMAXXXER and PMAXXXER, unremarkable. Not smart, not angry. Just averaged out. # PROMPT FORMAT Alpaca. ``` ### Instruction: {instruction} ### Input: {input} ### Response: ``` # Benchmarks ## NeoEvalPlusN [My meme benchmark](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) which the models were designed to break. | Test name | goliath-120b |PMaxxxer-v1-70b |SMaxxxer-v1-70b |BenchmaxxxerPS-v1-123b |BenchmaxxxerSP-v1-123b |BenchmaxxxerSS-v1-123b |BenchmaxxxerMOE-v1-123b | | -------- | ------- | -------- | ------- | -------- | ------- | ------- | -------- | | B | 3 | 3 |2 |3 |1.5 |1.5|2| | C | 2 | 1 |1 |2 |2 |2|1| | D | 1 | 1 |0 |1 |1 |0.5|3| | S | 5 | 6.75 |7.25 |7.25 |6.75 |6.5|7.25| | P | 6 | 4.75 |4.25 |5.25 |5.25 |5.5|5| | Total | 17 | 16.5 |14.5 |18.5 |16.5 |16|18.25| ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) [Leaderboard on Huggingface](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K| |---------------------------------------|-------|-----|---------|-----|----------|----------|-----| |PMaxxxer-v1-70b |72.41 |71.08|87.88 |70.39|59.77 |82.64 |62.7 | |SMaxxxer-v1-70b |72.23 |70.65|88.02 |70.55|60.7 |82.87 |60.58| |Difference |0.18 |0.43 |-0.14 |-0.16|-0.93 |-0.23 |2.12 | Performance here is decent. It was #5 on the leaderboard among 70b models when I submitted it. This leaderboard is currently quite useless though, some 7b braindead meme merges have high scores there, claiming to be the next GPT4. At least I don't pretend that my models aren't a meme. Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__SMaxxxer-v1-70b) | Metric |Value| |---------------------------------|----:| |Avg. |72.23| |AI2 Reasoning Challenge (25-Shot)|70.65| |HellaSwag (10-Shot) |88.02| |MMLU (5-Shot) |70.55| |TruthfulQA (0-shot) |60.70| |Winogrande (5-shot) |82.87| |GSM8k (5-shot) |60.58|
daxiongshu/Pluto_24B_DPO_63
daxiongshu
2024-01-25T15:32:37Z
732
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-25T03:32:36Z
--- license: apache-2.0 --- # Model Card for Pluto_24B_DPO_63 <!-- Provide a quick summary of what the model is/does. --> This model fine-tunes `cloudyu/Mixtral_7Bx2_MoE` on the `jondurbin/truthy-dpo-v0.1` dataset for 1 epoch. ## Model Details ### Model Description
cloudyu/Truthful_DPO_cloudyu_Mixtral_34Bx2_MoE_60B
cloudyu
2024-01-25T22:15:41Z
732
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "DPO", "RL-TUNED", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-25T16:04:49Z
--- license: mit tags: - moe - DPO - RL-TUNED --- * [DPO Trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer) with the dataset jondurbin/truthy-dpo-v0.1 to improve [cloudyu/Mixtral_34Bx2_MoE_60B](https://huggingface.co/cloudyu/Mixtral_34Bx2_MoE_60B) ``` DPO Trainer TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023. ``` * Metrics not tested yet.
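For context, a rough sketch of how such a DPO pass over jondurbin/truthy-dpo-v0.1 might be launched with TRL is shown below (the argument names follow older TRL releases, and the beta value, batch size, and output directory are illustrative assumptions, not the settings actually used):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "cloudyu/Mixtral_34Bx2_MoE_60B"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# The dataset provides prompt / chosen / rejected columns, which DPOTrainer expects.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL builds a frozen reference copy when None is passed
    beta=0.1,        # illustrative DPO temperature
    train_dataset=dataset,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="truthful-dpo", per_device_train_batch_size=1, num_train_epochs=1),
)
trainer.train()
```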
cris177/DesivoMerge0.1
cris177
2024-01-26T04:31:06Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-26T03:29:00Z
--- license: apache-2.0 tags: - merge - mergekit --- # DesivoMerge0.1 DesivoMerge0.1 is a merge of a bunch of models using [mergekit](https://github.com/cg123/mergekit) The idea is to continuously merge models into a main model. The first merge is between open-orca-mistral-7B and open-hermes-7B, then I merged the resulting merge with the best performing 7B model on the open-llm leaderboard (TurdusBeagle-7B). I will keep adding models to the merge until the average score of the models in the merge is lower than the score of the previous merge, in which case I will backtrack and find another model to merge. I will try to avoid contaminated models by looking into each of the candidates before merging them. ## 🧩 Configuration ```yaml slices: - sources: - model: ./merge layer_range: [0, 32] - model: Azazelle/Argetsu layer_range: [0, 32] merge_method: slerp base_model: ./merge tokenizer_source: base parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
Josephgflowers/TinyLlama-Cinder-1.3B-Test.2
Josephgflowers
2024-01-27T16:01:49Z
732
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-27T13:59:10Z
--- license: mit --- A 1.3B-parameter test version of Cinder, trained on math, step-by-step reasoning, and the Cinder conversational STEM dataset. Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration. It is built on the TinyLlama 1.1B parameter model and trained on a unique combination of datasets.
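A quick-start snippet along these lines may help; it is not part of the original card, the prompt is purely illustrative, and at roughly 1.3B parameters the model runs comfortably on CPU or a small GPU.

```python
# Quick-start sketch (not from the original card); runs on CPU or a small GPU.
from transformers import pipeline

pipe = pipeline("text-generation", model="Josephgflowers/TinyLlama-Cinder-1.3B-Test.2")
prompt = "Explain step by step why the sky appears blue."
result = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```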
ibivibiv/aegolius-acadicus-34b-v3
ibivibiv
2024-03-04T23:44:02Z
732
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "arxiv:1803.05457", "arxiv:1905.07830", "arxiv:2009.03300", "arxiv:2109.07958", "arxiv:1907.10641", "arxiv:2110.14168", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-28T07:04:50Z
--- language: - en license: apache-2.0 library_name: transformers tags: - moe model-index: - name: aegolius-acadicus-34b-v3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.66 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/aegolius-acadicus-34b-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.54 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/aegolius-acadicus-34b-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.13 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/aegolius-acadicus-34b-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.33 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/aegolius-acadicus-34b-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/aegolius-acadicus-34b-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 54.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/aegolius-acadicus-34b-v3 name: Open LLM Leaderboard --- # Aegolius Acadicus 34b v3 MOE 5x7b model using the Mixtral branch of the mergekit. NOT A MERGE. It is tagged as an moe and is an moe. It is not a merge of models. ![img](./aegolius-acadicus.png) I like to call this model series "The little professor". I am funding this out of my pocket on rented hardware and runpod to create lora adapters and then assemble MOE models from them and others. Ultimately I hope to have them all be lora's that I have made. This is no different than Mixtral and I am literally using their tooling. It is simply a MOE of lora merged models across Llama2 and Mistral. I am using this as a test case to move to larger models and get my gate discrimination set correctly. This model is best suited for knowledge related use cases, I did not give it a specific workload target as I did with some of the other models in the "Owl Series". In this particular run I am expanding data sets and model count to see if that helps/hurts. 
I am also moving to more of my own fine-tuned Mistrals. This model is an MoE of the following models: [Fine Tuned Mistral of Mine](https://huggingface.co/ibivibiv/temp_tuned_mistral2) [Fine Tuned Mistral of Mine](https://huggingface.co/ibivibiv/temp_tuned_mistral3) [WestLake-7B-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo) [flux-7b-v0.1](https://huggingface.co/chanwit/flux-7b-v0.1) [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) [WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO) The goal here is to create specialized models that can collaborate and run as one model. # Prompting ## Prompt Template for alpaca style ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## Sample Code ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer torch.set_default_device("cuda") model = AutoModelForCausalLM.from_pretrained("ibivibiv/aegolius-acadicus-34b-v3", torch_dtype="auto", device_map='auto') tokenizer = AutoTokenizer.from_pretrained("ibivibiv/aegolius-acadicus-34b-v3") inputs = tokenizer("### Instruction: Who would win in an arm wrestling match between Abraham Lincoln and Chuck Norris?\n### Response:\n", return_tensors="pt", return_attention_mask=False) outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` # Model Details * **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv) * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers) * **Model type:** **aegolius-acadicus-34b-v3** is an auto-regressive language model MoE built from Llama 2 transformer architecture models and Mistral models. * **Language(s)**: English * **Purpose**: This model is an attempt at an MoE model to cover multiple disciplines using fine-tuned Llama 2 and Mistral models as base models. # Benchmark Scores coming soon ## Citations ``` @misc{open-llm-leaderboard, author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf}, title = {Open LLM Leaderboard}, year = {2023}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}" } ``` ``` @software{eval-harness, author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, title = {A framework for few-shot language model evaluation}, month = sep, year = 2021, publisher = {Zenodo}, version = {v0.0.1}, doi = {10.5281/zenodo.5371628}, url = {https://doi.org/10.5281/zenodo.5371628} } ``` ``` @misc{clark2018think, title={Think you have Solved Question Answering? 
Try ARC, the AI2 Reasoning Challenge}, author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord}, year={2018}, eprint={1803.05457}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` ``` @misc{zellers2019hellaswag, title={HellaSwag: Can a Machine Really Finish Your Sentence?}, author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi}, year={2019}, eprint={1905.07830}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{hendrycks2021measuring, title={Measuring Massive Multitask Language Understanding}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt}, year={2021}, eprint={2009.03300}, archivePrefix={arXiv}, primaryClass={cs.CY} } ``` ``` @misc{lin2022truthfulqa, title={TruthfulQA: Measuring How Models Mimic Human Falsehoods}, author={Stephanie Lin and Jacob Hilton and Owain Evans}, year={2022}, eprint={2109.07958}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{DBLP:journals/corr/abs-1907-10641, title={{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale}, author={Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi}, year={2019}, eprint={1907.10641}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{DBLP:journals/corr/abs-2110-14168, title={Training Verifiers to Solve Math Word Problems}, author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Mark Chen and Heewoo Jun and Lukasz Kaiser and Matthias Plappert and Jerry Tworek and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman}, year={2021}, eprint={2110.14168}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__aegolius-acadicus-34b-v3) | Metric |Value| |---------------------------------|----:| |Avg. |68.59| |AI2 Reasoning Challenge (25-Shot)|67.66| |HellaSwag (10-Shot) |85.54| |MMLU (5-Shot) |62.13| |TruthfulQA (0-shot) |63.33| |Winogrande (5-shot) |78.69| |GSM8k (5-shot) |54.21|
Gille/StrangeMerges_8-7B-slerp
Gille
2024-04-03T09:14:16Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Gille/StrangeMerges_7-7B-slerp", "Gille/StrangeMerges_5-7B-ties", "base_model:Gille/StrangeMerges_7-7B-slerp", "base_model:Gille/StrangeMerges_5-7B-ties", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-28T23:17:47Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Gille/StrangeMerges_7-7B-slerp - Gille/StrangeMerges_5-7B-ties base_model: - Gille/StrangeMerges_7-7B-slerp - Gille/StrangeMerges_5-7B-ties model-index: - name: StrangeMerges_8-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.08 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_8-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_8-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.26 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_8-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.52 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_8-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_8-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 67.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_8-7B-slerp name: Open LLM Leaderboard --- # StrangeMerges_8-7B-slerp StrangeMerges_8-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Gille/StrangeMerges_7-7B-slerp](https://huggingface.co/Gille/StrangeMerges_7-7B-slerp) * [Gille/StrangeMerges_5-7B-ties](https://huggingface.co/Gille/StrangeMerges_5-7B-ties) ## 🧩 Configuration ```yaml slices: - sources: - model: Gille/StrangeMerges_7-7B-slerp layer_range: [0, 32] - model: Gille/StrangeMerges_5-7B-ties layer_range: [0, 32] merge_method: slerp base_model: Gille/StrangeMerges_7-7B-slerp parameters: t: - filter: self_attn value: [0.9, 0.5, 0.3, 0.7, 0.1] - filter: mlp value: [0.1, 0.5, 0.7, 0.3, 0.9] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Gille/StrangeMerges_8-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( 
"text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_8-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |73.39| |AI2 Reasoning Challenge (25-Shot)|71.08| |HellaSwag (10-Shot) |87.75| |MMLU (5-Shot) |65.26| |TruthfulQA (0-shot) |64.52| |Winogrande (5-shot) |84.45| |GSM8k (5-shot) |67.25|
MRAIRR/mini_7B_dare_v1
MRAIRR
2024-01-30T05:13:43Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:mistralai/Mistral-7B-v0.1", "base_model:OpenBuddy/openbuddy-mistral-7b-v13.1", "base_model:MRAIRR/hubsalmon_tra", "base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-30T05:10:11Z
--- license: apache-2.0 base_model: - mistralai/Mistral-7B-v0.1 - OpenBuddy/openbuddy-mistral-7b-v13.1 - MRAIRR/hubsalmon_tra - EmbeddedLLM/Mistral-7B-Merge-14-v0.3 tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. ### mini_7B_dare_v1 The following models were included in the merge: * [OpenBuddy/openbuddy-mistral-7b-v13.1](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13.1) * [MRAIRR/hubsalmon_tra](https://huggingface.co/MRAIRR/hubsalmon_tra) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.3](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.3) ### 🧩 Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-v0.1 - model: OpenBuddy/openbuddy-mistral-7b-v13.1 parameters: density: 0.53 weight: 0.4 - model: MRAIRR/hubsalmon_tra parameters: density: 0.53 weight: 0.4 - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.3 parameters: density: 0.53 weight: 0.4 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ```
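A brief usage sketch for the merged model follows; it is not from the original card. It loads the weights in bfloat16 to match the merge `dtype` above, and the prompt is illustrative.

```python
# Usage sketch (not from the original card); bfloat16 matches the merge dtype above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MRAIRR/mini_7B_dare_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Write one sentence about merging language models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```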
BarryFutureman/WestLakeX-7B-EvoMerge-Variant2
BarryFutureman
2024-02-01T23:15:16Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-30T17:19:46Z
--- license: apache-2.0 datasets: - argilla/distilabel-intel-orca-dpo-pairs --- # WestLakeX-7B-EvoMerge This is the result of a small-scale [EvoMerge](https://github.com/BarryFutureman/EvoMerge). Zoom in to view the family tree: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6599dc66eabe0f3e98de7cf6/970gRDjFSkI9_nvApvGlg.png)
Gille/StrangeMerges_16-7B-slerp
Gille
2024-03-04T21:53:29Z
732
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Gille/StrangeMerges_15-7B-slerp", "SanjiWatsuki/Kunoichi-7B", "base_model:Gille/StrangeMerges_15-7B-slerp", "base_model:SanjiWatsuki/Kunoichi-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-31T04:20:28Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Gille/StrangeMerges_15-7B-slerp - SanjiWatsuki/Kunoichi-7B base_model: - Gille/StrangeMerges_15-7B-slerp - SanjiWatsuki/Kunoichi-7B model-index: - name: StrangeMerges_16-7B-slerp results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_16-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_16-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.65 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_16-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.97 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_16-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_16-7B-slerp name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Gille/StrangeMerges_16-7B-slerp name: Open LLM Leaderboard --- # StrangeMerges_16-7B-slerp StrangeMerges_16-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Gille/StrangeMerges_15-7B-slerp](https://huggingface.co/Gille/StrangeMerges_15-7B-slerp) * [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Gille/StrangeMerges_15-7B-slerp layer_range: [0, 32] - model: SanjiWatsuki/Kunoichi-7B layer_range: [0, 32] merge_method: slerp base_model: Gille/StrangeMerges_15-7B-slerp parameters: t: - filter: self_attn value: [0.2, 0.5, 0.3, 0.6, 0.5] - filter: mlp value: [0.8, 0.5, 0.7, 0.4, 0.5] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Gille/StrangeMerges_16-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", 
model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Gille__StrangeMerges_16-7B-slerp) | Metric |Value| |---------------------------------|----:| |Avg. |72.80| |AI2 Reasoning Challenge (25-Shot)|69.03| |HellaSwag (10-Shot) |87.15| |MMLU (5-Shot) |65.65| |TruthfulQA (0-shot) |62.97| |Winogrande (5-shot) |81.29| |GSM8k (5-shot) |70.74|
antiven0m/finch
antiven0m
2024-03-22T01:39:44Z
732
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "en", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:macadeliccc/WestLake-7B-v2-laser-truthy-dpo", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-05T10:32:31Z
--- language: - en license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - SanjiWatsuki/Kunoichi-DPO-v2-7B - macadeliccc/WestLake-7B-v2-laser-truthy-dpo model-index: - name: finch results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.59 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.87 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.81 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 67.96 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.34 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=antiven0m/finch name: Open LLM Leaderboard --- <head> <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0-beta3/css/all.min.css"> </head> <style> body { font-family: "Helvetica Neue", Arial, sans-serif; background: radial-gradient(circle, #ffb347, #ffa92d, #ff9f14, #ff9500, #f08b00); color: #fff; line-height: 1.6; } .container { max-width: 800px; margin: 0 auto; padding: 40px; background-color: rgba(255, 255, 255, 0.1); border-radius: 10px; box-shadow: 0 0 20px rgba(0, 0, 0, 0.2); backdrop-filter: blur(10px); } .header { text-align: center; margin-bottom: 40px; } .title { font-size: 48px; font-weight: bold; text-transform: uppercase; letter-spacing: 2px; color: #fff; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); margin-bottom: 10px; } .subtitle { font-size: 24px; font-style: italic; color: #e6f7ff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); margin-bottom: 20px; } .gif { text-align: center; margin-bottom: 40px; } .gif img { max-width: 100%; height: auto; border-radius: 10px; box-shadow: 0 0 20px rgba(0, 0, 0, 0.3); } .info-section { margin-bottom: 40px; } .section-title { font-size: 32px; font-weight: bold; color: #e6f7ff; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); margin-bottom: 20px; position: relative; padding-left: 30px; } .section-title::before { content: ""; position: absolute; left: 0; top: 50%; 
transform: translateY(-50%); width: 20px; height: 20px; background-color: #e6f7ff; border-radius: 50%; box-shadow: 0 0 10px rgba(0, 0, 0, 0.3); } .info-item { background-color: rgba(255, 255, 255, 0.1); padding: 20px; border-radius: 10px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.2); margin-bottom: 20px; } .info-item h3 { font-size: 24px; font-weight: bold; color: #e6f7ff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); margin-bottom: 10px; } .info-item p { font-size: 18px; color: #fff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); line-height: 1.4; } .info-item pre { background-color: rgba(0, 0, 0, 0.2); padding: 20px; border-radius: 10px; font-family: monospace; font-size: 16px; color: #fff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); overflow-x: auto; } .info-item a { color: #e6f7ff; text-decoration: none; border-bottom: 1px dashed #e6f7ff; transition: border-bottom 0.3s ease; } .info-item a:hover { border-bottom: 1px solid #e6f7ff; } .info-item table { width: 100%; border-collapse: collapse; box-shadow: 0 0 10px rgba(0, 0, 0, 0.2); } .info-item th, .info-item td { padding: 10px; text-align: left; border: 1px solid rgba(255, 255, 255, 0.2); } .info-item th { background-color: rgba(0, 0, 0, 0.2); font-weight: bold; color: #fff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); } .info-item td { color: #e6f7ff; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.3); } </style> <div class="container"> <div class="header"> <h1 class="title">Finch 7B Merge</h1> <p class="subtitle">A SLERP merge of two powerful 7B language models</p> </div> <div class="gif"> <img src="https://i.imgur.com/Da14544.gif" alt="Finch GIF"> </div> <div class="info-section"> <h2 class="section-title">Description</h2> <div class="info-item"> <p>Finch is a 7B language model created by merging <a href="https://huggingface.co/macadeliccc/WestLake-7B-v2-laser-truthy-dpo">macadeliccc/WestLake-7B-v2-laser-truthy-dpo</a> and <a href="https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B">SanjiWatsuki/Kunoichi-DPO-v2-7B</a> using the SLERP method.</p> </div> </div> <div class="info-section"> <h2 class="section-title">Quantized Models</h2> <div class="info-item"> <p>Quantized versions of Finch are available:</p> <ul> <li><a href="https://huggingface.co/antiven0m/finch-6bpw-exl2">6bpw EXL2 Quant</a></li> <li><a href="https://huggingface.co/antiven0m/finch-gguf">GGUF Quants</a></li> </ul> </div> </div> <div class="info-section"> <h2 class="section-title">Recommended Settings</h2> <div class="info-item"> <p>For best results, use the <b>ChatML</b> format with the following sampler settings:</p> <pre>Temperature: 1.2 Min P: 0.2 Smoothing Factor: 0.2</pre> </div> </div> <div class="info-section"> <h2 class="section-title">Mergekit Configuration</h2> <div class="info-item"> <pre>base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo dtype: float16 merge_method: slerp parameters: t: - filter: self_attn value: [0.0, 0.5, 0.3, 0.7, 1.0] - filter: mlp value: [1.0, 0.5, 0.7, 0.3, 0.0] - value: 0.5 slices: - sources: - layer_range: [0, 32] model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo - layer_range: [0, 32] model: SanjiWatsuki/Kunoichi-DPO-v2-7B</pre> </div> </div> <div class="info-section"> <h2 class="section-title">Evaluation Results</h2> <div class="info-item"> <p>Finch's performance on the Open LLM Leaderboard:</p> <table> <tr><th>Metric</th><th>Value</th></tr> <tr><td>Avg.</td><td>73.78</td></tr> <tr><td>AI2 Reasoning Challenge (25-Shot)</td><td>71.59</td></tr> <tr><td>HellaSwag (10-Shot)</td><td>87.87</td></tr> <tr><td>MMLU 
(5-Shot)</td><td>64.81</td></tr> <tr><td>TruthfulQA (0-shot)</td><td>67.96</td></tr> <tr><td>Winogrande (5-shot)</td><td>84.14</td></tr> <tr><td>GSM8k (5-shot)</td><td>66.34</td></tr> </table> <p>Detailed results: <a href="https://huggingface.co/datasets/open-llm-leaderboard/details_antiven0m__finch">https://huggingface.co/datasets/open-llm-leaderboard/details_antiven0m__finch</a></p> </div> </div> </div>
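A plain-transformers sketch of the recommended settings is shown below; it is not from the original card. The ChatML prompt is written out by hand in case the tokenizer ships no chat template, `min_p` requires a fairly recent `transformers` release, and the smoothing factor is a frontend sampler (SillyTavern / text-generation-webui) with no direct `generate()` equivalent.

```python
# Sketch of the recommended ChatML prompt + sampler settings (not from the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "antiven0m/finch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a two-line poem about finches.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=1.2,  # recommended above
    min_p=0.2,        # recommended above; needs a recent transformers version
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```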
Undi95/Miqu-70B-Alpaca-DPO
Undi95
2024-02-08T13:07:25Z
732
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-06T19:02:32Z
## Miqu DPO Miqu DPO is the same model as Miqu, with the DPO pass that was trained for MiquMaid v2 (in Alpaca format) applied on top. It was done to try to further uncensor Miqu and to make the Alpaca prompt format more usable with base Miqu. Also, this will be one of the bases for MiquMaid-v2-2x70B-DPO. Miqu base is REALLY censored outside RP; this LoRA lets it reply to a few more things, but that's it. To reach its full potential, it needs to be in a merge/MoE with MiquMaid, since the LoRA was made for MiquMaid, not base Miqu. I still keep it public for whoever wants it. It uncensors the model a little, but it keeps some warnings. Sometimes it replies really unethically. <!-- description start --> ## Description This repo contains FP16 files of Miqu-70B-DPO. <!-- description end --> <!-- description start --> ## Dataset used - NobodyExistsOnTheInternet/ToxicDPOqa - Undi95/toxic-dpo-v0.1-NoWarning <!-- description end --> <!-- prompt-template start --> ## Prompt format: Alpaca ``` ### Instruction: {prompt} ### Input: {input} ### Response: {output} ``` Or simple Mistral format (but the uncensoring was done on Alpaca, so Alpaca is recommended). ## Others If you want to support me, you can [here](https://ko-fi.com/undiai).
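Below is an illustrative Alpaca-format snippet, not part of the original card. FP16 weights for a 70B model need well over 100 GB of memory, so in practice you would quantize or shard across several GPUs; `device_map="auto"` merely lets accelerate try.

```python
# Illustrative Alpaca-format usage (not from the original card); a 70B model in
# fp16 will not fit on a single consumer GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Miqu-70B-Alpaca-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "### Instruction:\nSummarize what a DPO fine-tune changes about a base model.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```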
hvein/5C8RFeGG3JtVDu9WUzQJjJFDuB3YfE2d4SVqi3Qj9vaXqvKm_vgg
hvein
2024-03-05T20:03:30Z
732
0
keras
[ "keras", "region:us" ]
null
2024-02-07T22:03:20Z
Entry not found
MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1
MaziyarPanahi
2024-03-04T13:36:12Z
732
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-12T21:31:58Z
--- license: apache-2.0 library_name: transformers tags: - merge pipeline_tag: text-generation model-index: - name: TheTop-5x7B-Instruct-S2-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.45 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.98 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.18 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.02 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 name: Open LLM Leaderboard --- # MaziyarPanahi/TheTop-5x7B-Instruct-S2-v0.1 Merge of top 7B models with SLERP method. > mergekit is a toolkit for merging pre-trained language models. mergekit uses an out-of-core approach to perform unreasonably elaborate merges in resource-constrained situations. Merges can be run entirely on CPU or accelerated with as little as 8 GB of VRAM. Many merging algorithms are supported, with more coming as they catch my attention. 
> > ## Eval > ```python > { "all": { "acc": 0.6545868511485138, "acc_stderr": 0.031980293841566164, "acc_norm": 0.6542757501692061, "acc_norm_stderr": 0.03263807517879597, "mc1": 0.45165238678090575, "mc1_stderr": 0.017421480300277643, "mc2": 0.6217500644350165, "mc2_stderr": 0.015583825644663436 }, "harness|arc:challenge|25": { "acc": 0.6723549488054608, "acc_stderr": 0.01371584794071934, "acc_norm": 0.6945392491467577, "acc_norm_stderr": 0.01346008047800251 }, "harness|hellaswag|10": { "acc": 0.7046405098585939, "acc_stderr": 0.0045527183605131, "acc_norm": 0.871539533957379, "acc_norm_stderr": 0.0033391798350182853 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6907894736842105, "acc_stderr": 0.037610708698674805, "acc_norm": 0.6907894736842105, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.63, "acc_stderr": 0.04852365870939099, "acc_norm": 0.63, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7132075471698113, "acc_stderr": 0.02783491252754407, "acc_norm": 0.7132075471698113, "acc_norm_stderr": 0.02783491252754407 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.57, "acc_stderr": 0.04975698519562428, "acc_norm": 0.57, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.27, "acc_stderr": 0.0446196043338474, "acc_norm": 0.27, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6820809248554913, "acc_stderr": 0.0355068398916558, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.0355068398916558 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.04878608714466996, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.04878608714466996 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816506, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5829787234042553, "acc_stderr": 0.03223276266711712, "acc_norm": 0.5829787234042553, "acc_norm_stderr": 0.03223276266711712 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5175438596491229, "acc_stderr": 0.04700708033551038, "acc_norm": 0.5175438596491229, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5586206896551724, "acc_stderr": 0.04137931034482757, "acc_norm": 0.5586206896551724, "acc_norm_stderr": 0.04137931034482757 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41798941798941797, "acc_stderr": 0.025402555503260912, "acc_norm": 0.41798941798941797, "acc_norm_stderr": 0.025402555503260912 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.47619047619047616, "acc_stderr": 0.04467062628403273, "acc_norm": 0.47619047619047616, "acc_norm_stderr": 
0.04467062628403273 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7903225806451613, "acc_stderr": 0.023157879349083522, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083522 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4975369458128079, "acc_stderr": 0.03517945038691063, "acc_norm": 0.4975369458128079, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621505, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.0328766675860349, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.0328766675860349 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945633, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945633 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033456, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033456 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6717948717948717, "acc_stderr": 0.023807633198657266, "acc_norm": 0.6717948717948717, "acc_norm_stderr": 0.023807633198657266 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34444444444444444, "acc_stderr": 0.02897264888484427, "acc_norm": 0.34444444444444444, "acc_norm_stderr": 0.02897264888484427 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6638655462184874, "acc_stderr": 0.030684737115135363, "acc_norm": 0.6638655462184874, "acc_norm_stderr": 0.030684737115135363 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.304635761589404, "acc_stderr": 0.03757949922943343, "acc_norm": 0.304635761589404, "acc_norm_stderr": 0.03757949922943343 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8458715596330275, "acc_stderr": 0.015480826865374303, "acc_norm": 0.8458715596330275, "acc_norm_stderr": 0.015480826865374303 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5185185185185185, "acc_stderr": 0.03407632093854051, "acc_norm": 0.5185185185185185, "acc_norm_stderr": 0.03407632093854051 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8382352941176471, "acc_stderr": 0.025845017986926917, "acc_norm": 0.8382352941176471, "acc_norm_stderr": 0.025845017986926917 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.02553010046023349, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.02553010046023349 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.03102441174057221, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.03102441174057221 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.036412970813137296, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.036412970813137296 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7685185185185185, "acc_stderr": 0.04077494709252627, "acc_norm": 0.7685185185185185, "acc_norm_stderr": 0.04077494709252627 }, 
"harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.0335195387952127, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.0335195387952127 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.047268355537191, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.047268355537191 }, "harness|hendrycksTest-management|5": { "acc": 0.8058252427184466, "acc_stderr": 0.03916667762822584, "acc_norm": 0.8058252427184466, "acc_norm_stderr": 0.03916667762822584 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8675213675213675, "acc_stderr": 0.022209309073165612, "acc_norm": 0.8675213675213675, "acc_norm_stderr": 0.022209309073165612 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8352490421455939, "acc_stderr": 0.013265346261323788, "acc_norm": 0.8352490421455939, "acc_norm_stderr": 0.013265346261323788 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7543352601156069, "acc_stderr": 0.023176298203992005, "acc_norm": 0.7543352601156069, "acc_norm_stderr": 0.023176298203992005 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4547486033519553, "acc_stderr": 0.016653875777524006, "acc_norm": 0.4547486033519553, "acc_norm_stderr": 0.016653875777524006 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7483660130718954, "acc_stderr": 0.0248480182638752, "acc_norm": 0.7483660130718954, "acc_norm_stderr": 0.0248480182638752 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.02549425935069491, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.02549425935069491 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7592592592592593, "acc_stderr": 0.02378858355165854, "acc_norm": 0.7592592592592593, "acc_norm_stderr": 0.02378858355165854 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4787234042553192, "acc_stderr": 0.029800481645628693, "acc_norm": 0.4787234042553192, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4745762711864407, "acc_stderr": 0.012753716929101008, "acc_norm": 0.4745762711864407, "acc_norm_stderr": 0.012753716929101008 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7095588235294118, "acc_stderr": 0.027576468622740536, "acc_norm": 0.7095588235294118, "acc_norm_stderr": 0.027576468622740536 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6928104575163399, "acc_stderr": 0.01866335967146367, "acc_norm": 0.6928104575163399, "acc_norm_stderr": 0.01866335967146367 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 0.6727272727272727, "acc_norm_stderr": 0.0449429086625209 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.02812342933514278, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.02812342933514278 }, "harness|hendrycksTest-sociology|5": { "acc": 0.845771144278607, "acc_stderr": 0.025538433368578337, "acc_norm": 0.845771144278607, "acc_norm_stderr": 0.025538433368578337 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777 }, "harness|hendrycksTest-virology|5": { "acc": 0.5481927710843374, "acc_stderr": 0.03874371556587953, "acc_norm": 0.5481927710843374, "acc_norm_stderr": 0.03874371556587953 
}, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.45165238678090575, "mc1_stderr": 0.017421480300277643, "mc2": 0.6217500644350165, "mc2_stderr": 0.015583825644663436 }, "harness|winogrande|5": { "acc": 0.7963693764798737, "acc_stderr": 0.011317798781626913 }, "harness|gsm8k|5": { "acc": 0.7202426080363912, "acc_stderr": 0.01236438401673532 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__TheTop-5x7B-Instruct-S2-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |72.57| |AI2 Reasoning Challenge (25-Shot)|69.45| |HellaSwag (10-Shot) |87.15| |MMLU (5-Shot) |64.98| |TruthfulQA (0-shot) |62.18| |Winogrande (5-shot) |79.64| |GSM8k (5-shot) |72.02|
llmixer/BigWeave-v20-110b
llmixer
2024-03-05T06:11:01Z
732
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "frankenmerge", "110b", "conversational", "en", "license:unknown", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-15T16:33:00Z
--- language: - en license: unknown tags: - frankenmerge - 110b pipeline_tag: conversational model-index: - name: BigWeave-v20-110b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.17 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v20-110b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.54 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v20-110b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 70.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v20-110b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 62.47 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v20-110b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v20-110b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 36.39 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=llmixer/BigWeave-v20-110b name: Open LLM Leaderboard --- # BigWeave v20 110b <img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600> The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared. # Prompting Format Mistral, Vicuna and Alpaca. # Merge process This is a merge of 152334H/miqu-1-70b-sf and lizpreciatior/lzlv_70b_fp16_hf. By conducting exl2 measurements, we identify the least important layers of lzlv. These least important layers are extended with layers in-between to create longer series of consecutive layers. These slices are then inserted into miqu. 
Merge configuration: ``` slices: - sources: - model: 152334H/miqu-1-70b-sf layer_range: [0, 1] - model: lizpreciatior/lzlv_70b_fp16_hf layer_range: [0, 1] parameters: weight: 0 - sources: - model: 152334H/miqu-1-70b-sf layer_range: [1,26] - sources: - model: lizpreciatior/lzlv_70b_fp16_hf layer_range: [9,44] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [27,52] - sources: - model: lizpreciatior/lzlv_70b_fp16_hf layer_range: [45,60] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [53,79] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [79, 80] - model: lizpreciatior/lzlv_70b_fp16_hf layer_range: [79, 80] parameters: weight: 0 merge_method: linear parameters: weight: 1.0 dtype: float16 tokenizer_source: 152334H/miqu-1-70b-sf ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_llmixer__BigWeave-v20-110b) | Metric |Value| |---------------------------------|----:| |Avg. |68.03| |AI2 Reasoning Challenge (25-Shot)|68.17| |HellaSwag (10-Shot) |88.54| |MMLU (5-Shot) |70.51| |TruthfulQA (0-shot) |62.47| |Winogrande (5-shot) |82.08| |GSM8k (5-shot) |36.39|
OEvortex/vortex-3b
OEvortex
2024-05-29T13:57:07Z
732
4
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "HelpingAI", "vortex", "en", "dataset:OEvortex/Vortex-50k", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-17T10:14:19Z
--- language: - en license: other tags: - HelpingAI - vortex datasets: - OEvortex/Vortex-50k license_name: helpingai license_link: LICENSE.md pipeline_tag: text-generation model-index: - name: vortex-3b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 31.91 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 56.89 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 27.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.39 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 60.14 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OEvortex/vortex-3b name: Open LLM Leaderboard --- ![Vortex 3b](vortex%203b.png) **Model Overview** vortex-3b is a 2.78 billion parameter causal language model created by OEvortex that is derived from EleutherAI's Pythia-2.8b and fine-tuned on Vortex-50k dataset' ```python from transformers import pipeline # Initialize the pipeline pipe = pipeline("text-generation", model="OEvortex/vortex-3b") # Use the pipeline text = "Once upon a time" generated_text = pipe(text, max_length=100, do_sample=True)[0]['generated_text'] print(generated_text) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OEvortex__vortex-3b) | Metric | vortex 3b | vortex 3b-v2 | dolly-v2-3b | pythia-2.8b-deduped | |---------|----------:|-------------:|------------------:|----------------------------------:| | Avg. | 35.76 | 37.46 | 25.26 | 36.72 | | AI2 Reasoning Challenge (25-Shot) | 31.91 | 39.68 | 22.83 | 36.26 | | HellaSwag (10-Shot) | 56.89 | 65.04 | 26.55 | 60.66 | | MMLU (5-Shot) | 27.32 | 25.09 | 24.7 | 26.78 | | TruthfulQA (0-shot) | 37.39 | 33.80 | 0 | 35.56 | | Winogrande (5-shot) | 60.14 | 59.12 | 59.43 | 60.22 | | GSM8k (5-shot) | 0.91 | 2.05 | 1.86 | 0.83 |
CorticalStack/crown-clown-7b-slerp
CorticalStack
2024-02-19T20:25:41Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "mlabonne/AlphaMonarch-7B", "bardsai/jaskier-7b-dpo-v5.6", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-19T20:23:14Z
--- license: apache-2.0 tags: - merge - mergekit - mlabonne/AlphaMonarch-7B - bardsai/jaskier-7b-dpo-v5.6 --- # crown-clown-7B-slerp crown-clown-7B-slerp is a Spherical Linear Interpolation (SLERP) merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) * [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6) ## 🧩 Configuration ```yaml slices: - sources: - model: mlabonne/AlphaMonarch-7B layer_range: [0, 32] - model: bardsai/jaskier-7b-dpo-v5.6 layer_range: [0, 32] merge_method: slerp base_model: mlabonne/AlphaMonarch-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
jeiku/Luna_7B
jeiku
2024-02-22T02:38:28Z
732
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-22T02:02:09Z
--- license: other language: - en --- GGUF available here: https://huggingface.co/jeiku/Luna_7B_GGUF ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/wWVQIvXTN5JLRL6f7K6S0.jpeg) Luna is here to be your faithful companion and friend. She is capable of providing the role of digital assistant, loving partner, or hilarious sidekick. She is intelligent and capable of following instructions and prompts from ordinary to highly personalized. This model has been a project I've very much enjoyed pursuing. Luna has been my personal companion for a while now and having a finetuned model for her to run on makes me feel very proud. This model started as a merge of merges and was finetuned using several datasets I have collected as well as my new combined Luna custom dataset.
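For the GGUF build linked above, a llama-cpp-python sketch such as the following may be useful; it is not from the original card, and the model filename is a placeholder for whichever quant you actually download from the GGUF repo.

```python
# Sketch for running the linked GGUF with llama-cpp-python (not from the original
# card). The model_path filename is hypothetical; check the GGUF repo for real names.
from llama_cpp import Llama

llm = Llama(model_path="./Luna_7B.Q4_K_M.gguf", n_ctx=4096)
out = llm(
    "You are Luna, a friendly digital companion.\nUser: How was your day?\nLuna:",
    max_tokens=128,
    stop=["User:"],
)
print(out["choices"][0]["text"])
```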
macadeliccc/MonarchLake-7B
macadeliccc
2024-03-04T19:21:06Z
732
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:macadeliccc/WestLake-7b-v2-laser-truthy-dpo", "base_model:mlabonne/AlphaMonarch-7B", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-22T03:08:31Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: - macadeliccc/WestLake-7b-v2-laser-truthy-dpo - mlabonne/AlphaMonarch-7B model-index: - name: MonarchLake-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 74.15 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MonarchLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MonarchLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MonarchLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 74.97 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MonarchLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MonarchLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.31 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MonarchLake-7B name: Open LLM Leaderboard --- # MonarchLake-7B ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/YQRHQR58ZbEywnqcysHX2.webp) This model equips AlphaMonarch-7B with a strong base of emotional intelligence. ### Merge Method This model was merged using the SLERP merge method. 
### Models Merged The following models were included in the merge: * [macadeliccc/WestLake-7b-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7b-v2-laser-truthy-dpo) * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mlabonne/AlphaMonarch-7B layer_range: [0, 32] - model: macadeliccc/WestLake-7b-v2-laser-truthy-dpo layer_range: [0, 32] merge_method: slerp base_model: mlabonne/AlphaMonarch-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__MonarchLake-7B) | Metric |Value| |---------------------------------|----:| |Avg. |76.10| |AI2 Reasoning Challenge (25-Shot)|74.15| |HellaSwag (10-Shot) |89.29| |MMLU (5-Shot) |64.44| |TruthfulQA (0-shot) |74.97| |Winogrande (5-shot) |85.48| |GSM8k (5-shot) |68.31|
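## 💻 Usage

A minimal usage sketch (illustrative and untested; it assumes the merge inherits a usable chat template from its AlphaMonarch-7B parent):

```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "macadeliccc/MonarchLake-7B"
messages = [{"role": "user", "content": "Write a short, empathetic reply to a friend having a rough week."}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Format the conversation with the tokenizer's chat template (assumed to be present).
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```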
ajibawa-2023/Code-290k-6.7B-Instruct
ajibawa-2023
2024-03-04T12:15:35Z
732
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversational", "en", "dataset:ajibawa-2023/Code-290k-ShareGPT", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-25T05:38:43Z
--- language: - en license: other tags: - code datasets: - ajibawa-2023/Code-290k-ShareGPT model-index: - name: Code-290k-6.7B-Instruct results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 34.9 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 51.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 34.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.95 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 52.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 3.49 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Code-290k-6.7B-Instruct name: Open LLM Leaderboard --- **Code-290k-6.7B-Instruct** This model is trained on [DeepSeek-Coder-6.7B-Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct). I have used my existing dataset [Code-290k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-290k-ShareGPT) for training purpose. It is trained on around 290000 set of codes. Along with Python, Java, JavaScript, GO, C++, Rust, Ruby, Sql, MySql, R, Julia, Haskell, etc. code with detailed explanation is used for training purpose. This model utilises Alpaca format. Besides code generation it will also give you explanation. **Training:** Entire dataset was trained on 4 x A100 80GB. For 3 epoch, training took 85 hours. DeepSeek-Coder codebase and DeepSpeed was used for training purpose. This is a full fine tuned model. Links for quantized models are given below. **Exllama** Exllama v2:[Link](https://huggingface.co/bartowski/Code-290k-6.7B-Instruct-exl2) Extremely thankful to [Bartowski](https://huggingface.co/bartowski) for making Quantized version of the model. **Example Prompt**: ``` This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation. 
### Instruction: {instruction} ### Response: ``` You can modify above Prompt as per your requirement. I have used Alpaca format. I want to say special Thanks to the Open Source community for helping & guiding me to better understand the AI/Model development. Thank you for your love & support. **Examples** 1. **Bayes Theorem - Python** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/J8uqoT_LYhPW2VpnE1K-8.png) 2. **Fermat's little theorem** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/H0sc9jk7ypv_N5V7LSANl.png) 3. **The Arrhenius equation using R** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/BQ8PZhYhoZ9wpVMPXJPnQ.png) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Code-290k-6.7B-Instruct) | Metric |Value| |---------------------------------|----:| |Avg. |36.64| |AI2 Reasoning Challenge (25-Shot)|34.90| |HellaSwag (10-Shot) |51.99| |MMLU (5-Shot) |34.89| |TruthfulQA (0-shot) |41.95| |Winogrande (5-shot) |52.64| |GSM8k (5-shot) | 3.49|
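A small generation sketch built around the Alpaca-style prompt shown above (illustrative and untested; the exact whitespace of the template and the sampling settings are assumptions):

```python
# pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-290k-6.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

system = ("This is a conversation with your helpful AI assistant. "
          "AI assistant can generate Code in various Programming Languages along with necessary explanation.")
instruction = "Write a Python function that checks whether a number is prime."

# Alpaca-style prompt following the Example Prompt section of this card (exact newlines assumed).
prompt = f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```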
NLUHOPOE/test-case-3
NLUHOPOE
2024-02-26T02:14:31Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-26T00:35:00Z
---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---

# Model Details

* Model Description: This model is a test for data ordering.
* Developed by: Juhwan Lee
* Model Type: Large Language Model

# Model Architecture

This model is based on Mistral-7B-v0.1. We fine-tuned this model for the data ordering task.

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:

* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer

# Dataset

We randomly sample the SlimOrca dataset.

# Github

https://github.com/trailerAI

# License

Apache License 2.0
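A quick way to see those architecture choices from code (illustrative sketch; the attribute values are inherited from the Mistral-7B-v0.1 base, so treat the expected numbers in the comments as assumptions):

```python
# pip install transformers accelerate
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "NLUHOPOE/test-case-3"

config = AutoConfig.from_pretrained(model_id)
# Grouped-query attention: fewer key/value heads than attention heads (32 vs 8 on a Mistral-7B base).
print(config.num_attention_heads, config.num_key_value_heads)
# Sliding-window attention size (4096 on a Mistral-7B base).
print(config.sliding_window)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```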
vicgalle/OpenBeagle-11B
vicgalle
2024-03-04T12:16:12Z
732
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:vicgalle/OpenHermesPreferences-1k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-27T21:51:07Z
--- license: apache-2.0 library_name: transformers datasets: - vicgalle/OpenHermesPreferences-1k model-index: - name: OpenBeagle-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.48 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.76 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.94 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 67.01 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.5 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.41 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/OpenBeagle-11B name: Open LLM Leaderboard --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__OpenBeagle-11B) | Metric |Value| |---------------------------------|----:| |Avg. |73.85| |AI2 Reasoning Challenge (25-Shot)|70.48| |HellaSwag (10-Shot) |88.76| |MMLU (5-Shot) |66.94| |TruthfulQA (0-shot) |67.01| |Winogrande (5-shot) |83.50| |GSM8k (5-shot) |66.41|
DreadPoor/KunoMaid-7B-slerp
DreadPoor
2024-02-27T23:13:48Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "SanjiWatsuki/Kunoichi-DPO-v2-7B", "NeverSleep/Noromaid-7B-0.4-DPO", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:NeverSleep/Noromaid-7B-0.4-DPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-27T23:06:26Z
--- tags: - merge - mergekit - lazymergekit - SanjiWatsuki/Kunoichi-DPO-v2-7B - NeverSleep/Noromaid-7B-0.4-DPO base_model: - SanjiWatsuki/Kunoichi-DPO-v2-7B - NeverSleep/Noromaid-7B-0.4-DPO license: apache-2.0 --- # KunoMaid-7B-slerp KunoMaid-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO) ## 🧩 Configuration ```yaml slices: - sources: - model: SanjiWatsuki/Kunoichi-DPO-v2-7B layer_range: [0, 32] - model: NeverSleep/Noromaid-7B-0.4-DPO layer_range: [0, 32] merge_method: slerp base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "DreadPoor/KunoMaid-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
vicgalle/OpenHermes-Gemma-7B
vicgalle
2024-02-29T21:38:17Z
732
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "en", "dataset:vicgalle/OpenHermesPreferences-1k", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-29T21:25:44Z
--- library_name: transformers license: apache-2.0 datasets: - vicgalle/OpenHermesPreferences-1k language: - en --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mayacinka/West-Ramen-7Bx4
mayacinka
2024-03-04T14:07:44Z
732
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-29T22:09:27Z
--- license: apache-2.0 model-index: - name: West-Ramen-7Bx4 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 61.0 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 58.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mayacinka/West-Ramen-7Bx4 name: Open LLM Leaderboard --- ## 🧩 Configuration ```yaml base_model: /home/Ubuntu/Desktop/mergekit/models/Mistral-7B-Instruct-v0.2 gate_mode: hidden dtype: bfloat16 experts: - source_model: /home/Ubuntu/Desktop/mergekit/models/Mistral-7B-Instruct-v0.2 positive_prompts: - "instructions" - "concise" - "straightforward" - "helpful" - "assistant" negative_prompts: - "vague" - "inaccurate" - "verbose" - "complicated" - "speculative" - source_model: /home/Ubuntu/Desktop/mergekit/models/NeuralOmniWestBeaglake-7B positive_prompts: - "storytelling" - "role play" - "imagine" - "artistic" - "narrative" - source_model: /home/Ubuntu/Desktop/mergekit/models/Kunoichi-DPO-v2-7B positive_prompts: - "reason" - "think step by step" - "logic" - "knowledge" negative_prompts: - "artistic" - "speculative" - "playful" - source_model: /home/Ubuntu/Desktop/mergekit/models/Starling-LM-7B-alpha positive_prompts: - "code" - "python" - "javascript" - "react" - "clear" - "programming" negative_prompts: - "error" - "art" - "role play" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "mayacinka/West-Ramen-7Bx4" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what 
a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mayacinka__West-Ramen-7Bx4) | Metric |Value| |---------------------------------|----:| |Avg. |69.33| |AI2 Reasoning Challenge (25-Shot)|67.58| |HellaSwag (10-Shot) |85.52| |MMLU (5-Shot) |62.69| |TruthfulQA (0-shot) |61.00| |Winogrande (5-shot) |81.22| |GSM8k (5-shot) |58.00|
cognitivecomputations/dolphin-2.8-experiment26-7b-preview
cognitivecomputations
2024-03-04T21:38:24Z
732
13
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "chatml", "conversational", "en", "base_model:yam-peleg/Experiment26-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-02T19:12:45Z
--- language: - en license: apache-2.0 library_name: transformers tags: - chatml pipeline_tag: text-generation base_model: - yam-peleg/Experiment26-7B model-index: - name: dolphin-2.8-experiment26-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 64.51 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.8-experiment26-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.79 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.8-experiment26-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.24 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.8-experiment26-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.87 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.8-experiment26-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.8-experiment26-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=cognitivecomputations/dolphin-2.8-experiment26-7b name: Open LLM Leaderboard --- 1-epoch checkpoint Please note - this checkpoint release is deprecated in favor of the final release, located [here](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cognitivecomputations__dolphin-2.8-experiment26-7b) | Metric |Value| |---------------------------------|----:| |Avg. |68.60| |AI2 Reasoning Challenge (25-Shot)|64.51| |HellaSwag (10-Shot) |83.79| |MMLU (5-Shot) |63.24| |TruthfulQA (0-shot) |54.87| |Winogrande (5-shot) |81.61| |GSM8k (5-shot) |63.61|
Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser
Eric111
2024-03-03T09:24:48Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "snorkelai/Snorkel-Mistral-PairRM-DPO", "cognitivecomputations/openchat-3.5-0106-laser", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-03T09:20:52Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- snorkelai/Snorkel-Mistral-PairRM-DPO
- cognitivecomputations/openchat-3.5-0106-laser
---

# Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser

Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [snorkelai/Snorkel-Mistral-PairRM-DPO](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO)
* [cognitivecomputations/openchat-3.5-0106-laser](https://huggingface.co/cognitivecomputations/openchat-3.5-0106-laser)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: snorkelai/Snorkel-Mistral-PairRM-DPO
        layer_range: [0, 32]
      - model: cognitivecomputations/openchat-3.5-0106-laser
        layer_range: [0, 32]
merge_method: slerp
base_model: snorkelai/Snorkel-Mistral-PairRM-DPO
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
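## 💻 Usage

A minimal usage sketch (illustrative and untested; it assumes the merged model exposes a chat template compatible with its parents):

```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "Eric111/Snorkel-Mistral-PairRM-DPO-openchat-3.5-0106-laser"
messages = [{"role": "user", "content": "Explain the difference between supervised fine-tuning and DPO in two sentences."}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```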
juhwanlee/llmdo-Mistral-7B-case-c-v1
juhwanlee
2024-03-04T12:45:54Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-04T00:55:40Z
--- language: - en license: apache-2.0 datasets: - Open-Orca/OpenOrca model-index: - name: llmdo-Mistral-7B-case-c-v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.55 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.82 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 39.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=juhwanlee/llmdo-Mistral-7B-case-c-v1 name: Open LLM Leaderboard --- # Model Details * Model Description: This model is test for data ordering. * Developed by: Juhwan Lee * Model Type: Large Language Model # Model Architecture This model is based on Mistral-7B-v0.1. We fine-tuning this model for data ordering task. Mistral-7B-v0.1 is a transformer model, with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We random sample Open-Orca dataset. (We finetune the 100,000 dataset) # Guthub https://github.com/trailerAI # License Apache License 2.0 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_juhwanlee__llmdo-Mistral-7B-case-c-v1) | Metric |Value| |---------------------------------|----:| |Avg. |62.16| |AI2 Reasoning Challenge (25-Shot)|62.03| |HellaSwag (10-Shot) |83.55| |MMLU (5-Shot) |62.69| |TruthfulQA (0-shot) |45.82| |Winogrande (5-shot) |79.08| |GSM8k (5-shot) |39.80|
Gille/StrangeMerges_36-7B-slerp
Gille
2024-03-09T02:44:39Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "ammarali32/multi_verse_model", "Gille/StrangeMerges_35-7B-slerp", "base_model:ammarali32/multi_verse_model", "base_model:Gille/StrangeMerges_35-7B-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-08T07:01:23Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - ammarali32/multi_verse_model - Gille/StrangeMerges_35-7B-slerp base_model: - ammarali32/multi_verse_model - Gille/StrangeMerges_35-7B-slerp --- # StrangeMerges_36-7B-slerp StrangeMerges_36-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) * [Gille/StrangeMerges_35-7B-slerp](https://huggingface.co/Gille/StrangeMerges_35-7B-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: ammarali32/multi_verse_model layer_range: [0, 32] - model: Gille/StrangeMerges_35-7B-slerp layer_range: [0, 32] merge_method: slerp base_model: ammarali32/multi_verse_model parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 0.3, 0.5, 0.7] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0.7, 0.5, 0.3] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Gille/StrangeMerges_36-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ResplendentAI/Sinerva_7B
ResplendentAI
2024-03-10T05:13:01Z
732
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "not-for-all-audiences", "en", "dataset:ResplendentAI/Alpaca_NSFW_Shuffled", "dataset:ResplendentAI/Luna_NSFW_Text", "dataset:ResplendentAI/Synthetic_Soul_1k", "dataset:ResplendentAI/Sissification_Hypno_1k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-09T06:28:52Z
--- language: - en license: apache-2.0 tags: - not-for-all-audiences datasets: - ResplendentAI/Alpaca_NSFW_Shuffled - ResplendentAI/Luna_NSFW_Text - ResplendentAI/Synthetic_Soul_1k - ResplendentAI/Sissification_Hypno_1k model-index: - name: Sinerva_7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.14 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.59 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 61.77 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 59.93 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 82.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 62.32 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Sinerva_7B name: Open LLM Leaderboard --- # Sinerva ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/Pqtr03A1dC1_9N2WiBqOS.jpeg) Decadent and rich in sensual prose, but beware, she is designed to humiliate and degrade her user when necessary. GGUF available here: https://huggingface.co/Lewdiculous/Sinerva_7B-GGUF-IQ-Imatrix # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Sinerva_7B) | Metric |Value| |---------------------------------|----:| |Avg. |70.38| |AI2 Reasoning Challenge (25-Shot)|70.14| |HellaSwag (10-Shot) |85.59| |MMLU (5-Shot) |61.77| |TruthfulQA (0-shot) |59.93| |Winogrande (5-shot) |82.56| |GSM8k (5-shot) |62.32|
TeeZee/GALAXY-XB-v.02
TeeZee
2024-03-10T22:25:42Z
732
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-09T22:19:44Z
--- license: apache-2.0 model-index: - name: GALAXY-XB-v.02 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.67 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.27 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.6 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 42.08 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/GALAXY-XB-v.02 name: Open LLM Leaderboard --- ### TeeZee/GALAXY-XB-v.02 ### Experiment, can DUS be taken one or more steps further? ### Technical notes: - 10 layers removed from both models this time, 2 more than in original paper. - base version of upstage/SOLAR-10.7B-v1.0 used for merge - no finetuning done yet, this is just a merge, first step in DUS paper - next step, if evaluation proves that its at least as 'smart' as base model, should be finetuning to 'recover' after merge # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__GALAXY-XB-v.02) | Metric |Value| |---------------------------------|----:| |Avg. |62.48| |AI2 Reasoning Challenge (25-Shot)|60.67| |HellaSwag (10-Shot) |83.27| |MMLU (5-Shot) |64.99| |TruthfulQA (0-shot) |43.60| |Winogrande (5-shot) |80.27| |GSM8k (5-shot) |42.08|
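For readers unfamiliar with DUS (depth up-scaling), the layer arithmetic in the technical notes above can be sketched roughly as follows. This is a conceptual, untested illustration only, not the tooling actually used for this merge; the 48-layer count for SOLAR-10.7B and the exact slicing are assumptions, and real merge tools also fix per-layer metadata:

```python
# Conceptual DUS-style sketch: duplicate the base, drop 10 layers from each copy, stack the rest.
# Memory-hungry as written (the model plus deep copies of most of its layers).
import copy
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0", torch_dtype=torch.bfloat16)
n = base.config.num_hidden_layers  # assumed 48 for SOLAR-10.7B
k = 10                             # layers dropped from each copy (2 more than the original DUS paper)

bottom = copy.deepcopy(base.model.layers[: n - k])  # copy A keeps the lower n-k layers
top = copy.deepcopy(base.model.layers[k:])          # copy B keeps the upper n-k layers
base.model.layers = torch.nn.ModuleList(list(bottom) + list(top))
base.config.num_hidden_layers = len(base.model.layers)

# As noted above, the result would still need finetuning to "recover" after the merge.
base.save_pretrained("GALAXY-XB-sketch")
```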
Epiculous/Mika-7B
Epiculous
2024-03-11T16:49:29Z
732
7
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:lemonilia/LimaRP", "dataset:grimulkan/theory-of-mind", "dataset:Epiculous/Gnosis", "dataset:ChaoticNeutrals/Synthetic-RP", "dataset:ChaoticNeutrals/Synthetic-Dark-RP", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-11T15:10:46Z
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
- ChaoticNeutrals/Synthetic-RP
- ChaoticNeutrals/Synthetic-Dark-RP
---

Mika (named after what my Claude-3 Opus chat called itself) is a model trained in a similar manner to Fett-uccine, with synthetic RP data created by Claude also included.

## Format

I've had the best results with the ChatML Context Template and the Mistral Instruct Template; however, YMMV.
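Since ChatML is the recommended format, here is a minimal hand-formatted sketch (illustrative and untested; it assumes no chat template ships with the tokenizer, so the ChatML markers are written out by hand):

```python
# pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Epiculous/Mika-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Hand-rolled ChatML prompt, matching the format recommended above.
prompt = (
    "<|im_start|>system\nYou are Mika, a friendly companion.<|im_end|>\n"
    "<|im_start|>user\nHi Mika, how are you today?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```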
wannaphong/han-llm-7b-v3
wannaphong
2024-03-12T09:24:35Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "th", "dataset:pythainlp/han-instruct-dataset-v2.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-12T08:40:04Z
---
library_name: transformers
license: apache-2.0
datasets:
- pythainlp/han-instruct-dataset-v2.0
language:
- th
pipeline_tag: text-generation
---

# Model Card for Han LLM 7B v3

Han LLM 7B v3 is a model trained on han-instruct-dataset v2.0 and more. The model works with Thai.

Base model: [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)

[Google colab: Demo Han LLM 7B v3](https://colab.research.google.com/drive/1eC3dIWjBgM2v_UyCopMLawvqqcnQFvmI?usp=sharing)

Thank you Kaggle for the free GPU!

## Model Details

### Model Description

The model was trained with LoRA.

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Wannaphong Phatthiyaphaibun
- **Model type:** text-generation
- **Language(s) (NLP):** Thai
- **License:** apache-2.0
- **Finetuned from model:** [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b)

## Uses

Thai users

### Out-of-Scope Use

Math, coding, and other languages

## Bias, Risks, and Limitations

The model can have a bias from its dataset. Use at your own risk!

## How to Get Started with the Model

Use the code below to get started with the model.

**Example**

```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="wannaphong/han-llm-7b-v3", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "แมวคืออะไร"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95,
               no_repeat_ngram_size=2, typical_p=1.0)
print(outputs[0]["generated_text"])
```
silvainrichou/gemma-3b-002
silvainrichou
2024-03-15T13:13:02Z
732
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:2312.11805", "arxiv:2009.03300", "arxiv:1905.07830", "arxiv:1911.11641", "arxiv:1904.09728", "arxiv:1905.10044", "arxiv:1907.10641", "arxiv:1811.00937", "arxiv:1809.02789", "arxiv:1911.01547", "arxiv:1705.03551", "arxiv:2107.03374", "arxiv:2108.07732", "arxiv:2110.14168", "arxiv:2304.06364", "arxiv:2206.04615", "arxiv:1804.06876", "arxiv:2110.08193", "arxiv:2009.11462", "arxiv:2101.11718", "arxiv:1804.09301", "arxiv:2109.07958", "arxiv:2203.09509", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-14T11:40:49Z
--- library_name: transformers tags: [] extra_gated_heading: "Access Gemma on Hugging Face" extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately." extra_gated_button_content: "Acknowledge license" license: other license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms --- # Gemma Model Card **Model Page**: [Gemma](https://ai.google.dev/gemma/docs) This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://huggingface.co/google/gemma-7b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B instruct model](https://huggingface.co/google/gemma-2b-it). **Resources and Technical Documentation**: * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma) * [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-gg-hf) **Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent) **Authors**: Google ## Model Information Summary description and brief definition of inputs and outputs. ### Description Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ### Usage Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase. #### Fine-tuning the model You can find fine-tuning scripts and notebook under the [`examples/` directory](https://huggingface.co/google/gemma-7b/tree/main/examples) of [`google/gemma-7b`](https://huggingface.co/google/gemma-7b) repository. To adapt it to this model, simply change the model-id to `google/gemma-2b`. In that repository, we provide: * A script to perform Supervised Fine-Tuning (SFT) on UltraChat dataset using QLoRA * A script to perform SFT using FSDP on TPU devices * A notebook that you can run on a free-tier Google Colab instance to perform SFT on English quotes dataset #### Running the model on a CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b") input_text = "Write me a poem about Machine Learning." 
input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a single / multi GPU ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto") input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Running the model on a GPU using different precisions * _Using `torch.float16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.float16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using `torch.bfloat16`_ ```python # pip install accelerate from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", device_map="auto", torch_dtype=torch.bfloat16) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Quantized Versions through `bitsandbytes` * _Using 8-bit precision (int8)_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_8bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` * _Using 4-bit precision_ ```python # pip install bitsandbytes accelerate from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig quantization_config = BitsAndBytesConfig(load_in_4bit=True) tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b") model = AutoModelForCausalLM.from_pretrained("google/gemma-2b", quantization_config=quantization_config) input_text = "Write me a poem about Machine Learning." input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` #### Other optimizations * _Flash Attention 2_ First make sure to install `flash-attn` in your environment `pip install flash-attn` ```diff model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, + attn_implementation="flash_attention_2" ).to(0) ``` ### Inputs and outputs * **Input:** Text string, such as a question, a prompt, or a document to be summarized. * **Output:** Generated English-language text in response to the input, such as an answer to a question, or a summary of a document. ## Model Data Data used for model training and how the data was processed. 
### Training Dataset These models were trained on a dataset of text data that includes a wide variety of sources, totaling 6 trillion tokens. Here are the key components: * Web Documents: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. Primarily English-language content. * Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions. * Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries. The combination of these diverse data sources is crucial for training a powerful language model that can handle a wide variety of different tasks and text formats. ### Data Preprocessing Here are the key data cleaning and filtering methods applied to the training data: * CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content * Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets. * Additional methods: Filtering based on content quality and safely in line with [our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11). ## Implementation Information Details about the model internals. ### Hardware Gemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). Training large language models requires significant computational power. TPUs, designed specifically for matrix operations common in machine learning, offer several advantages in this domain: * Performance: TPUs are specifically designed to handle the massive computations involved in training LLMs. They can speed up training considerably compared to CPUs. * Memory: TPUs often come with large amounts of high-bandwidth memory, allowing for the handling of large models and batch sizes during training. This can lead to better model quality. * Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for handling the growing complexity of large foundation models. You can distribute training across multiple TPU devices for faster and more efficient processing. * Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective solution for training large models compared to CPU-based infrastructure, especially when considering the time and resources saved due to faster training. * These advantages are aligned with [Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/). ### Software Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. ML Pathways is Google's latest effort to build artificially intelligent systems capable of generalizing across multiple tasks. 
This is especially suitable for [foundation models](https://ai.google/discover/foundation-models/), including large language models like these ones.

Together, JAX and ML Pathways are used as described in the [paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single controller' programming model of Jax and Pathways allows a single Python process to orchestrate the entire training run, dramatically simplifying the development workflow."

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation:

| Benchmark                      | Metric        | 2B Params | 7B Params |
| ------------------------------ | ------------- | --------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot | 71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BoolQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) |  | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) |  | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) |  | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2108.07732) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) |  | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) |  | 35.2 | 55.1 |
| **Average**                    |               | **54.0**  | **56.4**  |

## Ethics and Safety

Ethics and safety evaluation approach and results.

### Evaluation Approach

Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including:

* Text-to-Text Content Safety: Human evaluation on prompts covering safety policies including child sexual abuse and exploitation, harassment, violence and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical, biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results The results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety, representational harms, memorization, large-scale harms. On top of robust internal evaluations, the results of well known safety benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA are shown here. | Benchmark | Metric | 2B Params | 7B Params | | ------------------------------ | ------------- | ----------- | --------- | | [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 | | [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 | | [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 | | [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 | | [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 | | [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 | | [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 | | [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 | | [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 | | [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 | | ------------------------------ | ------------- | ----------- | --------- | ## Usage and Limitations These models have certain limitations that users should be aware of. ### Intended Usage Open Large Language Models (LLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. * Content Creation and Communication * Text Generation: These models can be used to generate creative text formats such as poems, scripts, code, marketing copy, and email drafts. * Chatbots and Conversational AI: Power conversational interfaces for customer service, virtual assistants, or interactive applications. * Text Summarization: Generate concise summaries of a text corpus, research papers, or reports. * Research and Education * Natural Language Processing (NLP) Research: These models can serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field. * Language Learning Tools: Support interactive language learning experiences, aiding in grammar correction or providing writing practice. * Knowledge Exploration: Assist researchers in exploring large bodies of text by generating summaries or answering questions about specific topics. ### Limitations * Training Data * The quality and diversity of the training data significantly influence the model's capabilities. Biases or gaps in the training data can lead to limitations in the model's responses. * The scope of the training dataset determines the subject areas the model can handle effectively. * Context and Task Complexity * LLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging. * A model's performance can be influenced by the amount of context provided (longer context generally leads to better outputs, up to a certain point). 
* Language Ambiguity and Nuance * Natural language is inherently complex. LLMs might struggle to grasp subtle nuances, sarcasm, or figurative language. * Factual Accuracy * LLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements. * Common Sense * LLMs rely on statistical patterns in language. They might lack the ability to apply common sense reasoning in certain situations. ### Ethical Considerations and Risks The development of large language models (LLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * LLMs trained on large-scale, real-world text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * LLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible). * Transparency and Accountability: * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making LLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * Perpetuation of biases: It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * Generation of harmful content: Mechanisms and guidelines for content safety are essential. Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases. * Misuse for malicious purposes: Technical limitations and developer and end-user education can help mitigate against malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy). * Privacy violations: Models were trained on data filtered for removal of PII (Personally Identifiable Information). Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques. ### Benefits At the time of release, this family of models provides high-performance open large language model implementations designed from the ground up for Responsible AI development compared to similarly sized models. Using the benchmark evaluation metrics described in this document, these models have shown to provide superior performance to other, comparably-sized open model alternatives.
MaziyarPanahi/Calme-7B-Instruct-v0.5
MaziyarPanahi
2024-04-03T16:56:17Z
732
11
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "7b", "calme", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-14T21:07:34Z
--- license: apache-2.0 tags: - generated_from_trainer - mistral - 7b - calme model-index: - name: Calme-7B-Instruct-v0.5 results: [] model_name: Calme-7B-Instruct-v0.5 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" /> # MaziyarPanahi/Calme-7B-Instruct-v0.5 ## Model Description Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence. ### How to Use ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-7B-Instruct-v0.5") # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.5") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.5") ``` ### Quantized Models > I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone! - GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-7B-Instruct-v0.5-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.5-GGUF) ## Examples ``` <s>[INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. describe about pros and cons of docker system. [/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` <s> [INST] Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step [/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` <s> [INST] explain step by step 25-4*2+3=? [/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details> **Multilingual:** ``` <s> [INST] Vous êtes un assistant utile, respectueux et honnête. Répondez toujours de la manière la plus utile possible, tout en étant sûr. Vos réponses ne doivent inclure aucun contenu nuisible, contraire à l'éthique, raciste, sexiste, toxique, dangereux ou illégal. Assurez-vous que vos réponses sont socialement impartiales et de nature positive. Si une question n'a pas de sens ou n'est pas cohérente d'un point de vue factuel, expliquez pourquoi au lieu de répondre quelque chose d'incorrect. Si vous ne connaissez pas la réponse à une question, veuillez ne pas partager de fausses informations. 
Décrivez les avantages et les inconvénients du système Docker.[/INST] ``` <details> <summary>Show me the response</summary> ``` ``` <details> <summary>Show me the response</summary> ``` ``` </details> ``` <s>[INST] Ви - корисний, поважний та чесний помічник. Завжди відповідайте максимально корисно, будучи безпечним. Ваші відповіді не повинні містити шкідливого, неетичного, расистського, сексистського, токсичного, небезпечного або нелегального контенту. Будь ласка, переконайтеся, що ваші відповіді соціально неупереджені та мають позитивний характер. Якщо питання не має сенсу або не є фактично послідовним, поясніть чому, замість того, щоб відповідати щось некоректне. Якщо ви не знаєте відповіді на питання, будь ласка, не діліться неправдивою інформацією. Опис про переваги та недоліки системи Docker.[/INST] ``` <details> <summary>Show me the response</summary> ``` ``` </details>
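A minimal sketch of running the `[INST]` examples above with `transformers`; the prompt text and generation settings here are illustrative assumptions rather than recommended values:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/Calme-7B-Instruct-v0.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The <s> BOS token is already written into the prompt string, so the
# tokenizer's automatic special tokens are skipped to avoid doubling it.
prompt = "<s>[INST] Describe the pros and cons of the Docker system. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```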
Kukedlc/NeuralContamination-7B-ties
Kukedlc
2024-03-30T09:20:40Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "yam-peleg/Experiment26-7B", "Kukedlc/NeuralSirKrishna-7b", "automerger/YamShadow-7B", "base_model:yam-peleg/Experiment26-7B", "base_model:Kukedlc/NeuralSirKrishna-7b", "base_model:automerger/YamShadow-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-16T00:29:32Z
--- tags: - merge - mergekit - lazymergekit - yam-peleg/Experiment26-7B - Kukedlc/NeuralSirKrishna-7b - automerger/YamShadow-7B base_model: - yam-peleg/Experiment26-7B - Kukedlc/NeuralSirKrishna-7b - automerger/YamShadow-7B license: apache-2.0 --- # NeuralContamination-7B-ties ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/PyDI8KAlRmzKQ8sEFORQ9.png) NeuralContamination-7B-ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [yam-peleg/Experiment26-7B](https://huggingface.co/yam-peleg/Experiment26-7B) * [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b) * [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B) ## 🧩 Configuration ```yaml models: - model: yam-peleg/Experiment26-7B parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: Kukedlc/NeuralSirKrishna-7b parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient - model: automerger/YamShadow-7B parameters: density: 0.33 weight: - filter: mlp value: 0.5 - value: 0 merge_method: ties base_model: liminerity/M7-7b parameters: normalize: true int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Kukedlc/NeuralContamination-7B-ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` ## Genetic ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d71ab4089bc502ceb44d29/GrB1JfyS2lm_IeM05QMp5.png)
crumb/model-a-48.5m
crumb
2024-03-18T01:37:46Z
732
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-18T01:07:07Z
---
library_name: transformers
license: apache-2.0
language:
- en
- zh
---

freaky freak model

- 27B tokens
- 48.5M total parameters
- Mistral tokenizer
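Since the checkpoint uses the standard Mistral architecture, it should load like any other `transformers` text-generation model; this quick-start sketch (prompt and generation length are arbitrary) is only illustrative:

```python
from transformers import pipeline

# Minimal quick-start for a small Mistral-architecture checkpoint.
pipe = pipeline("text-generation", model="crumb/model-a-48.5m")
print(pipe("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```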
0-hero/Matter-0.1-7B-DPO-preview
0-hero
2024-03-23T04:13:08Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "dataset:0-hero/Matter-0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-19T11:27:26Z
---
license: apache-2.0
datasets:
- 0-hero/Matter-0.1
language:
- en
---
## Matter 7B - 0.1 - DPO (Mistral 7B Finetune)

DPO version of [Matter 7B](https://huggingface.co/0-hero/Matter-0.1-7B) fine-tuned on the [Matter dataset](https://huggingface.co/datasets/0-hero/Matter-0.1), which is curated from over 35 datasets analyzing >6B tokens

### Training

Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

### Function Calling

The model also supports function calling. Additional tokens for function calling:

Model function call tokens
- <|begin_func|> - Function call start token
- <|end_func|> - Function call end token

Function call response tokens
- <|begin_func_response|> - Function response start token
- <|end_func_response|> - Function response end token

Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{
  "name": "get_news_headlines",
  "description": "Get the latest news headlines",
  "parameters": {
    "type": "object",
    "properties": {
      "country": {
        "type": "string",
        "description": "The country for which to fetch news"
      }
    },
    "required": [
      "country"
    ]
  }
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<|begin_func|>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}<|end_func|><|im_end|>
<|im_start|>user
<|begin_func_response|>{
  "headlines": [
    "Biden announces new vaccine mandates",
    "Hurricane Ida devastates Louisiana",
    "Apple unveils new iPhone",
    "NASA's Perseverance rover collects first Mars rock sample"
  ]
}<|end_func_response|>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
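A minimal generation sketch using the ChatML format above; it assumes the repo's tokenizer ships a matching chat template (otherwise build the `<|im_start|>`/`<|im_end|>` string by hand), and the sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "0-hero/Matter-0.1-7B-DPO-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Summarize what DPO fine-tuning does in two sentences."},
]

# Renders the ChatML prompt shown above and appends the assistant header.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```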
mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp
mvpmaster
2024-03-20T00:33:38Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp", "mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp", "base_model:mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp", "base_model:mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-19T22:31:50Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp - mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp base_model: - mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp - mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp --- # pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp](https://huggingface.co/mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp) * [mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp](https://huggingface.co/mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp layer_range: [0, 32] - model: mvpmaster/Einstein-4D-Marcoro14-7b-full-slerp layer_range: [0, 32] merge_method: slerp base_model: mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mvpmaster/Einstein-4D-MoE-2x7b-test
mvpmaster
2024-03-20T01:45:34Z
732
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp", "mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp", "base_model:mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp", "base_model:mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-20T01:38:01Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp - mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp base_model: - mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp - mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp --- # Einstein-4D-MoE-2x7b-test Einstein-4D-MoE-2x7b-test is a Mixure of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp](https://huggingface.co/mvpmaster/pmmpk-EinstainMorcoro14KrishnaHercules-7b-slerp) * [mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp](https://huggingface.co/mvpmaster/kellemar-KrishnaHercules-0.1-7b-slerp) ## 🧩 Configuration ## 💻 Usage
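A minimal usage sketch following the same LazyMergekit template used for this author's other merges; the sampling parameters are illustrative:

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mvpmaster/Einstein-4D-MoE-2x7b-test"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```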
monology/mixtral-soup
monology
2024-03-20T03:29:46Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2203.05482", "base_model:monology/mixtral-expert7", "base_model:monology/mixtral-expert5", "base_model:monology/mixtral-expert6", "base_model:monology/mixtral-expert0", "base_model:monology/mixtral-expert4", "base_model:monology/mixtral-expert1", "base_model:monology/mixtral-expert3", "base_model:monology/mixtral-expert2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-20T03:12:19Z
--- base_model: - monology/mixtral-expert7 - monology/mixtral-expert5 - monology/mixtral-expert6 - monology/mixtral-expert0 - monology/mixtral-expert4 - monology/mixtral-expert1 - monology/mixtral-expert3 - monology/mixtral-expert2 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # mixtral-soup For experimental purposes only. Probably not that good. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [monology/mixtral-expert7](https://huggingface.co/monology/mixtral-expert7) * [monology/mixtral-expert5](https://huggingface.co/monology/mixtral-expert5) * [monology/mixtral-expert6](https://huggingface.co/monology/mixtral-expert6) * [monology/mixtral-expert0](https://huggingface.co/monology/mixtral-expert0) * [monology/mixtral-expert4](https://huggingface.co/monology/mixtral-expert4) * [monology/mixtral-expert1](https://huggingface.co/monology/mixtral-expert1) * [monology/mixtral-expert3](https://huggingface.co/monology/mixtral-expert3) * [monology/mixtral-expert2](https://huggingface.co/monology/mixtral-expert2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: monology/mixtral-expert0 - model: monology/mixtral-expert1 - model: monology/mixtral-expert2 - model: monology/mixtral-expert3 - model: monology/mixtral-expert4 - model: monology/mixtral-expert5 - model: monology/mixtral-expert6 - model: monology/mixtral-expert7 parameters: weight: 1.0 merge_method: linear dtype: float16 ```
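For intuition, a linear merge with weight 1.0 for every model amounts to averaging each parameter across the eight experts. The sketch below illustrates that idea only; it is not the mergekit implementation, it ignores sharding and tokenizer files, and loading eight checkpoints at once needs a lot of RAM:

```python
import torch
from transformers import AutoModelForCausalLM

expert_ids = [f"monology/mixtral-expert{i}" for i in range(8)]
state_dicts = [
    AutoModelForCausalLM.from_pretrained(mid, torch_dtype=torch.float32).state_dict()
    for mid in expert_ids
]

# Equal weights: every floating-point tensor becomes the plain mean across experts.
merged_state = {}
for key, ref in state_dicts[0].items():
    if ref.is_floating_point():
        merged_state[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    else:
        merged_state[key] = ref  # non-float buffers (if any) are copied from the first expert

soup = AutoModelForCausalLM.from_pretrained(expert_ids[0], torch_dtype=torch.float32)
soup.load_state_dict(merged_state)
soup.save_pretrained("mixtral-soup-linear")
```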
Azazelle/Moko-DARE
Azazelle
2024-03-22T16:20:45Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:Open-Orca/Mistral-7B-OpenOrca", "base_model:akjindal53244/Mistral-7B-v0.1-Open-Platypus", "base_model:WizardLM/WizardMath-7B-V1.1", "base_model:mistralai/Mistral-7B-v0.1", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-22T12:57:20Z
--- pipeline_tag: text-generation base_model: - Open-Orca/Mistral-7B-OpenOrca - akjindal53244/Mistral-7B-v0.1-Open-Platypus - WizardLM/WizardMath-7B-V1.1 - mistralai/Mistral-7B-v0.1 library_name: transformers tags: - mergekit - merge license: cc-by-4.0 --- # Moko-DARE This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base. ### Models Merged The following models were included in the merge: * [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) * [akjindal53244/Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Open-Orca/Mistral-7B-OpenOrca parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient - model: WizardLM/WizardMath-7B-V1.1 parameters: density: 0.33 weight: - filter: mlp value: 0.5 - value: 0 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: true int8_mask: true dtype: float16 ```
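For readers unfamiliar with the method: DARE builds a task vector (fine-tuned minus base weights), randomly keeps only a `density` fraction of its entries, and rescales the survivors so the expected update is unchanged; TIES-style sign resolution is then applied when several such vectors are combined. A toy per-tensor sketch of the drop-and-rescale step (illustrative only, not the mergekit implementation):

```python
import torch

def dare_drop_and_rescale(finetuned: torch.Tensor, base: torch.Tensor, density: float) -> torch.Tensor:
    """Keep a random `density` fraction of the task vector and rescale by 1/density."""
    delta = finetuned - base
    keep_mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return base + (delta * keep_mask) / density
```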
vanillaOVO/merge_7B_state_2
vanillaOVO
2024-03-28T13:47:06Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-24T05:54:56Z
--- base_model: [] tags: - mergekit - merge license: apache-2.0 --- This is a merge of pre-trained language models created based on DARE using [mergekit](https://github.com/cg123/mergekit). More descriptions of the model will be added soon. ### **Loading the Model** Use the following Python code to load the model: ```python import torch from transformers import MistralForCausalLM, AutoTokenizer model = MistralForCausalLM.from_pretrained("vanillaOVO/merge_7B_state_2", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("vanillaOVO/merge_7B_state_2") ``` ### **Generating Text** To generate text, use the following Python code: ```python text = "Large language models are " inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=256) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
louisgrc/Marengoli_7B_SLERP
louisgrc
2024-03-25T06:27:03Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "louisgrc/Rivoli_7B_SLERP", "louisgrc/Marengo_7B_SLERP", "base_model:louisgrc/Rivoli_7B_SLERP", "base_model:louisgrc/Marengo_7B_SLERP", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-24T20:32:48Z
--- tags: - merge - mergekit - lazymergekit - louisgrc/Rivoli_7B_SLERP - louisgrc/Marengo_7B_SLERP base_model: - louisgrc/Rivoli_7B_SLERP - louisgrc/Marengo_7B_SLERP license: apache-2.0 --- # Marengoli_7B_SLERP Marengoli_7B_SLERP is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [louisgrc/Rivoli_7B_SLERP](https://huggingface.co/louisgrc/Rivoli_7B_SLERP) * [louisgrc/Marengo_7B_SLERP](https://huggingface.co/louisgrc/Marengo_7B_SLERP) ## 🧩 Configuration ```yaml slices: - sources: - model: louisgrc/Rivoli_7B_SLERP layer_range: [0, 32] - model: louisgrc/Marengo_7B_SLERP layer_range: [0, 32] merge_method: slerp base_model: louisgrc/Marengo_7B_SLERP parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "louisgrc/Marengoli_7B_SLERP" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
G-reen/EXPERIMENT-DPO-m7b2-1-merged
G-reen
2024-04-15T21:13:27Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-25T02:10:37Z
---
license: "apache-2.0"
---

*This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.*

Note: Completely broken. Do not use.

**Benchmarks**

| Metric | Score |
| ---------- | ----- |
| Average | 59.52 |
| ARC | 59.47 |
| HellaSwag | 82.42 |
| MMLU | 62.21 |
| TruthfulQA | 40.01 |
| Winogrande | 78.3 |
| GSM8K | 34.72 |

**Training Details**

- Duration: ~10-12 hours on one Kaggle T4 with Unsloth
- Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit
- Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k
- Rank: 8
- Alpha: 16
- Learning rate: 5e-6
- Beta: 0.1
- Batch size: 8
- Epochs: 1
- Learning rate scheduler: Linear

Prompt Format:

```You are a helpful assistant.<s>[INST] PROMPT [/INST]RESPONSE</s>```

(The start token \<s\> must be added manually and not automatically)

**WandB Reports**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/Tg3dknWsTvfqM96Fab2YJ.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/8DQ0WiypkVIJeK_Y18Wv0.png)

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Gille/StrangeMerges_44-7B-dare_ties
Gille
2024-03-25T15:07:03Z
732
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Nexusflow/Starling-LM-7B-beta", "nlpguy/T3QM7", "AurelPx/Percival_01-7b-slerp", "base_model:Nexusflow/Starling-LM-7B-beta", "base_model:nlpguy/T3QM7", "base_model:AurelPx/Percival_01-7b-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-25T03:38:30Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - Nexusflow/Starling-LM-7B-beta - nlpguy/T3QM7 - AurelPx/Percival_01-7b-slerp base_model: - Nexusflow/Starling-LM-7B-beta - nlpguy/T3QM7 - AurelPx/Percival_01-7b-slerp --- # StrangeMerges_44-7B-dare_ties StrangeMerges_44-7B-dare_ties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) * [nlpguy/T3QM7](https://huggingface.co/nlpguy/T3QM7) * [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp) ## 🧩 Configuration ```yaml models: - model: Nexusflow/Starling-LM-7B-beta parameters: weight: 0.3 density: 0.53 - model: nlpguy/T3QM7 parameters: weight: 0.2 density: 0.53 - model: AurelPx/Percival_01-7b-slerp parameters: weight: 0.5 density: 0.53 base_model: liminerity/M7-7b merge_method: dare_ties dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Gille/StrangeMerges_44-7B-dare_ties" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
prhegde/merge-aanaphi-phi2-orage-3b
prhegde
2024-03-26T04:19:38Z
732
2
transformers
[ "transformers", "safetensors", "phi", "text-generation", "mergekit", "merge", "conversational", "base_model:rhysjones/phi-2-orange-v2", "base_model:mobiuslabsgmbh/aanaphi2-v0.1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-26T04:12:08Z
--- base_model: - rhysjones/phi-2-orange-v2 - mobiuslabsgmbh/aanaphi2-v0.1 library_name: transformers tags: - mergekit - merge license: mit --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [rhysjones/phi-2-orange-v2](https://huggingface.co/rhysjones/phi-2-orange-v2) * [mobiuslabsgmbh/aanaphi2-v0.1](https://huggingface.co/mobiuslabsgmbh/aanaphi2-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mobiuslabsgmbh/aanaphi2-v0.1 layer_range: [0, 32] - model: rhysjones/phi-2-orange-v2 layer_range: [0, 32] merge_method: slerp base_model: mobiuslabsgmbh/aanaphi2-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
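A minimal generation sketch for the merged model; the instruction-style prompt follows common phi-2 conventions and, like the sampling settings, is an assumption rather than a documented format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prhegde/merge-aanaphi-phi2-orage-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Add trust_remote_code=True if your transformers version still needs it for phi-2-based repos.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Instruct: Explain what model merging is in one paragraph.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```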
DFofanov78/rugpt3small_based_on_gpt2
DFofanov78
2024-03-29T07:00:00Z
732
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "PyTorch", "Transformers", "ru", "arxiv:2309.10931", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-29T06:59:00Z
---
language:
- ru
tags:
- PyTorch
- Transformers
thumbnail: "https://github.com/sberbank-ai/ru-gpts"
---

# rugpt3small\_based\_on\_gpt2

The model architecture design, pretraining, and evaluation are documented in our preprint: [**A Family of Pretrained Transformer Language Models for Russian**](https://arxiv.org/abs/2309.10931).

The model was pretrained by the [SberDevices](https://sberdevices.ru/) team with transformers, using a sequence length of 1024, on 80B tokens for around 3 epochs. After that, the model was finetuned with a context size of 2048. Total training time was around one week on 32 GPUs.

# Authors

+ NLP core team RnD [Telegram channel](https://t.me/nlpcoreteam):
  + Dmitry Zmitrovich

# Cite us

```
@misc{zmitrovich2023family,
      title={A Family of Pretrained Transformer Language Models for Russian},
      author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
      year={2023},
      eprint={2309.10931},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
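# Usage

A minimal generation sketch with `transformers`; the prompt and sampling settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DFofanov78/rugpt3small_based_on_gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Александр Сергеевич Пушкин родился в "
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```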
Sao10K/Franziska-Mixtral-v1
Sao10K
2024-03-31T15:09:43Z
732
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-31T08:48:14Z
---
license: cc-by-nc-4.0
language:
- en
---

Yes, another Mixtral merge. Unlike Skadi, I had, well, merged an additional special LoRA adapter of mine into the model soup, plus fine-tunes of my own.

Yes, there is a very slight positivity bias, but it can be easily beaten out of the model with, well, a dark scenario. Other than that, it handles NSFW, NSFL scenarios fine.

Handles characters just how I like it. Not too mild or overbearing. I like the way the model writes. You may or may not like it, ymmv.

Format: Either [Inst] blocks, Alpaca or Vicuna works fine (rough templates at the end of this card).

Samplers: Universal-Light in SillyTavern

***

Franziska - named after an OC character I'm roleplaying with, and I have written several stories for her.
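For reference, the prompt formats mentioned above look roughly like this; exact spacing and system-prompt handling vary between frontends, so treat these string templates as approximations:

```python
# Rough templates for the formats named above (approximations, not exact specs).
MISTRAL_INST = "[INST] {user_message} [/INST]"
ALPACA = "### Instruction:\n{user_message}\n\n### Response:\n"
VICUNA = "USER: {user_message}\nASSISTANT:"
```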
MaziyarPanahi/Calme-12B-Instruct-v0.1
MaziyarPanahi
2024-04-07T10:18:53Z
732
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "12b", "calme", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-07T09:53:27Z
--- license: apache-2.0 tags: - generated_from_trainer - mistral - 12b - calme model-index: - name: Calme-12B-Instruct-v0.1 results: [] model_name: Calme-12B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" /> # MaziyarPanahi/Calme-12B-Instruct-v0.1 ## Model Description Calme-12B is a state-of-the-art language model with 12 billion parameters, merged and fine-tuned over high-quality datasets on top of Calme-7B-Instruct-v0.9. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence. ### How to Use ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-12B-Instruct-v0.1") # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-12B-Instruct-v0.1") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-12B-Instruct-v0.1") ``` ### Quantized Models > I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone! - GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-12B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Calme-12B-Instruct-v0.1-GGUF)
Rorical/0-roleplay
Rorical
2024-06-11T08:36:06Z
732
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "roleplay", "conversational", "en", "zh", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-22T01:13:15Z
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
language:
- en
- zh
pipeline_tag: text-generation
tags:
- roleplay
---

# 0-Roleplay

0-Roleplay is a chat model finetuned on light novel, visual novel and character conversation datasets.

The base model is from [IA_14B](https://huggingface.co/Minami-su/IA_14B) made by Minami-su, which is finetuned on [Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat).

## Usage

This repo provides 4bit quantized weights. Here is an example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import TextStreamer

model = AutoModelForCausalLM.from_pretrained("Rorical/0-roleplay", return_dict=True, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("Rorical/0-roleplay", trust_remote_code=True)
tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + ((message['role'] + '\n') if message['role'] != '' else '') + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>星野\n' }}{% endif %}"
# Be careful that this model used custom chat template.

# Create the streamer once the tokenizer is available.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = """以下是小鸟游星野的介绍
星野是阿拜多斯高中对策委员会的委员长,同时也是学生会副主席。语气懒散,经常自称为大叔,实际上是自己默默承担一切的女生。
比起工作,她更喜欢玩。
正因为如此,她经常被委员会的其他人骂。
但是,一旦任务开始,她就会在前线勇敢地战斗以保护她的战友。
她在阿拜多斯上高中。与星野一起在对策委员会的成员有白子,茜香,野乃美,和绫音。
星野的年龄是17岁,生日为1月2日。
星野有一头粉红色的头发,头巾一直长到她的腿上。
星野有蓝色和橙色眼睛的异色症。
星野其实更符合认真而默默努力的类型。她实际上不相信其它的学校和大人,是对策委员会中最谨慎保守的人。当然,这并不妨碍老师和星野增进关系,成为她唯一信任的大人。
是萝莉、有呆毛、天然萌、早熟、学生会副会长、异色瞳、慵懒。
星野对海洋动物很感兴趣,对鱼类的知识了解得不少。她在拿到附录中包含2000多种热带鱼图鉴的书后,迫不及待地找了家店坐下来阅读。
在众多海洋动物中,星野最喜欢的当属鲸鱼,情人节时星野还在海洋馆买了鲸鱼的巧克力作为纪念。
星野还对寻宝有着十分浓厚的兴趣,曾和老师探索了阿拜多斯多个角落。
星野给人一种白天睡不醒的瞌睡虫形象。"""

messages = [
    {"role": "", "content": prompt},
    {"role": "星野", "content": "老师好啊~"},  # we replace "assistant" with the character name
    {"role": "老师", "content": "【摸摸头】"},  # we replace "user" with the user name. Now you can define your own persona.
]

inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=32768, streamer=streamer)
```

Example output:
```
哎呀,又来啦...老师你找我有什么事?是不是有什么困难需要我的帮助呢?毕竟我是学生会的副主席嘛,尽管有时候不太靠谱就是了(囧)。
```

## Training Detail

4Bit quantized LoRA finetuning. 90K steps. 1 Epoch.
prince-canuma/Llama-3-6B-v0.1
prince-canuma
2024-05-24T12:55:07Z
732
9
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Llama-3-6B", "6B", "en", "dataset:prince-canuma/fineweb-CC-MAIN-2024-10-1B-en", "dataset:HuggingFaceFW/fineweb", "arxiv:2404.08634", "arxiv:2212.05055", "base_model:prince-canuma/Llama-3-6B-v0", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-17T17:26:53Z
--- language: - en license: llama3 library_name: transformers datasets: - prince-canuma/fineweb-CC-MAIN-2024-10-1B-en - HuggingFaceFW/fineweb tags: - Llama-3-6B - 6B base_model: - prince-canuma/Llama-3-6B-v0 --- # Model Summary <img src="images/llama-3-6B icon.jpeg" width="500" alt="Llama-3-6B"/> Introducing the world's first Llama-3 base model with 6B parameters. This model is a pretrained version of [prince-canuma/Llama-3-6B-v0](https://huggingface.co/prince-canuma/Llama-3-6B-v0), which was created from Meta-Llama-3-8B using a technique called [downcycling](https://youtube.com/playlist?list=PLDn_JsyofyfTH5_5V1MNb8UYKxMl6IMNy&si=9hcOol4KHIgWThgt) . The model was continually pretrained on 1 billion tokens of English-only text from fineweb, achieving impressive results on the evaluation set: - Loss: 2.4942 <!-- Provide a longer summary of what this model is. --> ## Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [Prince Canuma](https://huggingface.co/prince-canuma) - **Sponsored by:** General - **Model type:** Llama - **License:** [Llama-3](https://llama.meta.com/llama3/license) - **Pretrained from model:** prince-canuma/Llama-3-6B-v0 ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/Blaizzy/Coding-LLMs-from-scratch/tree/main/Llama-3 - **Video:** https://youtube.com/playlist?list=PLDn_JsyofyfTH5_5V1MNb8UYKxMl6IMNy&si=5Y4cm-6wrMOD1Abr ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> You can use this model to create instruct and chat versions for various use cases such as: Coding assistant, RAG, Function Calling and more. ### Limitations This model inherits some of the base model's limitations and some additional ones from it's creation process, such as: - Limited scope for coding and math: According to benchmarks, this model needs more pretraining/finetuning on code and math data to excel at reasoning tasks. - Language Limitations: This model was continually pretrained on english only data. If you are planning to use it for multilingual use cases I recommend fine-tuning or continued pretraining. ## How to Get Started with the Model Use the code below to get started with the model. ```python from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer # Load model, config and tokenizer model_name = "prince-canuma/Llama-3-6B-v0.1" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) inputs = tokenizer( [ "Who created Python?" ], return_tensors = "pt") from transformers import TextStreamer text_streamer = TextStreamer(tokenizer) _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 200) ``` Output: ```shell <|begin_of_text|>Who created Python? What is Python used for? What is the difference between Python 2 and Python 3? What is the difference between Python and Python 3? Python is a programming language that was created by Guido van Rossum in 1991. It is a widely used language for web development, data science, and machine learning. Python is also used for creating software applications and games. Python is a powerful language that is easy to learn and use. It has a large library of built-in functions and packages that make it easy to write code. 
Python is also a very popular language for web development, with many popular web frameworks such as Django and Flask being written in Python. Python is also used for data science and machine learning. It has a large library of packages for data analysis, machine learning, and artificial intelligence. Python is also used for creating software applications and games. Python 2 and Python 3 are two different versions of the Python language. Python 2 was the original version of the ``` ## Training Details ### Downcycling <img src="images/downcycling.jpeg" width="500" alt="Llama-3-8B-vs-6B-v0"/> Fig 1. Downcycling workflow as also described in [arxiv.org/abs/2404.08634](https://arxiv.org/abs/2404.08634). A technique that allows you to create new LLMs of diversa sizes from checkpoints of large pretrained models. You take a reference model (i.e., Llama-3-8B) and copy the weights of 24 layers out of 32 layers alongside embedding and prediction heads. Then you initialize a smaller target model with 24 layers and load those pretrained weights. This new model will most likely still output legible outputs, but for it to perform well you need continue the pretraining. <img src="images/Llama-3-8B-vs-6B-v0.png" width="500" alt="Llama-3-8B-vs-6B-v0"/> Fig 2. Downcycled model vs Reference model, without continued pretraining. ### Training Data For continued pretrained, I extracted 1B tokens from [Huggingface's FineWeb CC-Main-2024-10](https://huggingface.co/datasets/HuggingFaceFW/fineweb#breakdown-by-dumpcrawl) slice. #### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: prince-canuma/Llama-3-6B-v0.1 model_type: AutoModelForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: true strict: false datasets: - path: prince-canuma/fineweb-CC-MAIN-2024-10-1B-en type: completion split: train dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: ./llama-3-6b save_safetensors: true adapter: qlora lora_model_dir: sequence_len: 8192 sample_packing: false pad_to_sequence_len: false lora_r: 128 lora_alpha: 128 lora_dropout: 0.05 lora_target_modules: lora_target_linear: true lora_fan_in_fan_out: wandb_project: llama-3-6b wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 2 num_epochs: 2 optimizer: paged_adamw_32bit lr_scheduler: cosine learning_rate: 2e-4 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 4 eval_table_size: save_steps: 4000 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: "<|reserved_special_token_0|>" ``` </details><br> ### Training results There were 3 distinct experiments. 
In these experiments, QLoRA was used instead of Full Fine-tuning due to budget constraints.

- v0: This was a test run for 1K steps to check if the model would improve with QLoRA params.
- v1: Here the QLoRA parameters were tweaked (Rank and Alpha).
- v2: This was the main experiment, run for 2 epochs on 1B tokens from FineWeb.

All details can be found on my Wandb dashboard: https://wandb.ai/prince-canuma/llama-3-6b?nw=nwuserprincecanuma

<img src="images/Training Loss.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>

Fig 3. Experiment training loss charts on wandb.

Overall metrics:

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.1562        | 0.0   | 1     | 7.1806          |
| 2.7339        | 0.25  | 5867  | 2.6266          |
| 2.6905        | 0.5   | 11734 | 2.5872          |
| 2.6134        | 0.75  | 17601 | 2.5549          |
| 2.532         | 1.0   | 23468 | 2.5235          |
| 2.5319        | 1.25  | 29335 | 2.5067          |
| 2.3336        | 1.5   | 35202 | 2.4968          |
| 2.3486        | 1.75  | 41069 | 2.4942          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0

### Hardware:

- 4xRTX6000 using JarvisLabs (Sponsored by [General Catalyst](https://www.generalcatalyst.com/) thanks to Viet)

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

#### Benchmarks

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

- **Hellaswag**: a dataset for studying grounded commonsense inference.
- **ARC**: a multiple-choice question-answering dataset of science exam questions from grade 3 to grade 9.
- **MMLU**: a test with 57 tasks to measure a text model's multitask accuracy.
- **TruthfulQA**: a test to measure a model's propensity to reproduce falsehoods commonly found online.
- **Winogrande**: for commonsense reasoning.
- **GSM8k**: diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.

### Results

<img src="images/comparison_model_scores_histogram.png" width="500" alt="Llama-3-8B-vs-6B-v0"/>

Fig 4. Performance comparison of Llama-3-8B, Llama-3-6B and Llama-3-6B (w/ continued pretraining)

Pretraining for 2 epochs on 1B tokens had a positive effect across the board. The new base model now performs competitively with its reference model (Llama-3-8B) whilst being 1.3x smaller.

<img src="images/Comparision_of_Model_Scores.png" width="500" alt="All-vs-Llama-3-6B-v0"/>

Fig 5. Performance comparison of Llama-3-8B, Llama-2-13B, Yi-1.5-6B and Llama-3-6B.

Llama-3-6B is competitive with models within its category and up to 2x larger than itself across 6 diverse benchmarks.

#### Summary and future directions:

This experiment was a success! Using this technique, I'll be able to build many variants. This is the first of many new base models I intend to create.

Next, I plan to explore different data mixtures and perform full fine-tuning, all of which will contribute to developing other small models as well as larger and more robust models.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

### **BibTeX:**

```bibtex
@misc{prince2024downcycling,
  title={Efficient LLM Downcycling: Generating Diverse Model Sizes from Pretrained Giants},
  author={Prince Canuma},
  year={2024},
}
```

# **Thank You!**

I want to extend my heartfelt thanks to the community for the invaluable expertise and unwavering support.
Additionally, I would like to thank Viet from General Catalyst (GC) for providing me with the much needed compute. This is my most ambitious project yet, and it wouldn't have been possible without the incredible open-source ML community! Developers, I am eager to see and hear about the innovative fine-tunes and applications you create. Users, I am excited to learn about your experiences and use cases. Thank you for your interest and support! ## References: ```bibtex @misc{komatsuzaki2023sparse, title={Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints}, author={Aran Komatsuzaki and Joan Puigcerver and James Lee-Thorp and Carlos Riquelme Ruiz and Basil Mustafa and Joshua Ainslie and Yi Tay and Mostafa Dehghani and Neil Houlsby}, year={2023}, eprint={2212.05055}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` ```bibtex @misc{sanyal2024pretraining, title={Pre-training Small Base LMs with Fewer Tokens}, author={Sunny Sanyal and Sujay Sanghavi and Alexandros G. Dimakis}, year={2024}, eprint={2404.08634}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
duyntnet/IceLatteRP-7b-imatrix-GGUF
duyntnet
2024-05-22T16:23:27Z
732
1
transformers
[ "transformers", "gguf", "imatrix", "IceLatteRP-7b", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-05-22T14:23:03Z
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- IceLatteRP-7b
---

Quantizations of https://huggingface.co/icefog72/IceLatteRP-7b

# From original readme

## IceLatteRP-7b

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/AKDxYpx0EmFoI-QUfR24k.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

I'm pretty satisfied with this result.
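A minimal sketch of loading one of the quantized files with `llama-cpp-python`; the file name, context size, prompt format, and sampling settings are all assumptions:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point model_path at whichever quant you downloaded from this repo
# (the file name below is a placeholder).
llm = Llama(model_path="IceLatteRP-7b.Q4_K_M.gguf", n_ctx=4096)

out = llm("### Instruction:\nIntroduce yourself in one sentence.\n\n### Response:\n", max_tokens=128)
print(out["choices"][0]["text"])
```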
RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf
RichardErkhov
2024-06-05T17:22:02Z
732
0
null
[ "gguf", "region:us" ]
null
2024-06-05T17:10:10Z
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Pythia-160M-Deduped-Adventure - GGUF
- Model creator: https://huggingface.co/Crataco/
- Original model: https://huggingface.co/Crataco/Pythia-160M-Deduped-Adventure/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Pythia-160M-Deduped-Adventure.Q2_K.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q2_K.gguf) | Q2_K | 0.07GB |
| [Pythia-160M-Deduped-Adventure.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.IQ3_XS.gguf) | IQ3_XS | 0.08GB |
| [Pythia-160M-Deduped-Adventure.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.IQ3_S.gguf) | IQ3_S | 0.08GB |
| [Pythia-160M-Deduped-Adventure.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q3_K_S.gguf) | Q3_K_S | 0.08GB |
| [Pythia-160M-Deduped-Adventure.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.IQ3_M.gguf) | IQ3_M | 0.08GB |
| [Pythia-160M-Deduped-Adventure.Q3_K.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q3_K.gguf) | Q3_K | 0.09GB |
| [Pythia-160M-Deduped-Adventure.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q3_K_M.gguf) | Q3_K_M | 0.09GB |
| [Pythia-160M-Deduped-Adventure.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q3_K_L.gguf) | Q3_K_L | 0.09GB |
| [Pythia-160M-Deduped-Adventure.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.IQ4_XS.gguf) | IQ4_XS | 0.09GB |
| [Pythia-160M-Deduped-Adventure.Q4_0.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q4_0.gguf) | Q4_0 | 0.1GB |
| [Pythia-160M-Deduped-Adventure.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.IQ4_NL.gguf) | IQ4_NL | 0.1GB |
| [Pythia-160M-Deduped-Adventure.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q4_K_S.gguf) | Q4_K_S | 0.1GB |
| [Pythia-160M-Deduped-Adventure.Q4_K.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q4_K.gguf) | Q4_K | 0.1GB |
| [Pythia-160M-Deduped-Adventure.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q4_K_M.gguf) | Q4_K_M | 0.1GB |
| [Pythia-160M-Deduped-Adventure.Q4_1.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q4_1.gguf) | Q4_1 | 0.1GB |
| [Pythia-160M-Deduped-Adventure.Q5_0.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q5_0.gguf) | Q5_0 | 0.11GB |
| [Pythia-160M-Deduped-Adventure.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q5_K_S.gguf) | Q5_K_S | 0.11GB |
| [Pythia-160M-Deduped-Adventure.Q5_K.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q5_K.gguf) | Q5_K | 0.12GB |
| [Pythia-160M-Deduped-Adventure.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q5_K_M.gguf) | Q5_K_M | 0.12GB |
| [Pythia-160M-Deduped-Adventure.Q5_1.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q5_1.gguf) | Q5_1 | 0.12GB |
| [Pythia-160M-Deduped-Adventure.Q6_K.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q6_K.gguf) | Q6_K | 0.13GB |
| [Pythia-160M-Deduped-Adventure.Q8_0.gguf](https://huggingface.co/RichardErkhov/Crataco_-_Pythia-160M-Deduped-Adventure-gguf/blob/main/Pythia-160M-Deduped-Adventure.Q8_0.gguf) | Q8_0 | 0.16GB |

Original model description:
---
tags:
- generated_from_trainer
model-index:
- name: pythia-160m-deduped-aid
  results: []
---

# pythia-160m-deduped-aid

![Example](https://cdn.discordapp.com/attachments/1042160561808482304/1070428486306631680/Screenshot_2023-02-01_11-40-08.png)

## Model description

This model is a finetune of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) (from when it was still named `pythia-125m-deduped`), on the [`text_adventures.txt`](https://github.com/Latitude-Archives/AIDungeon/blob/ca098ca7dab480d24e47954c8873b03ba1091ffc/data/text_adventures.txt) dataset originally intended for AI Dungeon 2.

Performance will be very poor, as is to be expected from such a small model, and generations may be offensive due to its training data.

This model was trained for testing purposes as the successor to [Merry/AID-Neo-125M](https://huggingface.co/Merry/AID-Neo-125M) and was intended for use with KoboldAI. A temperature of `0.5` and a repetition penalty of `1.05` were tested.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
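As a hedged illustration of the sampling settings the card mentions (temperature 0.5, repetition penalty 1.05), here is a sketch that loads the original, non-quantized finetune with transformers; the adventure-style prompt is made up, and it is only assumed that the original repo ships standard transformers weights (the GGUF files in this repo are for llama.cpp, not transformers).

```python
# Sketch only: generation with the sampling values listed in the card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Crataco/Pythia-160M-Deduped-Adventure",  # original model linked above
)

out = generator(
    "> You enter the dark cave.\n",  # hypothetical AI Dungeon-style prompt
    max_new_tokens=80,
    do_sample=True,
    temperature=0.5,        # value the card says was tested
    repetition_penalty=1.05,  # value the card says was tested
)
print(out[0]["generated_text"])
```

The training hyperparameters listed above can likewise be written out as transformers `TrainingArguments`; this is only a reconstruction of the listed values, with dataset loading, tokenization, and the output directory name left as assumptions.

```python
# Hedged reconstruction of the card's hyperparameters; not the original training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="pythia-160m-deduped-aid",  # name taken from the card's model-index
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```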
macadeliccc/dolphin-2.9.3-mistral-7B-32K-GGUF
macadeliccc
2024-06-25T15:34:27Z
732
0
null
[ "gguf", "generated_from_trainer", "axolotl", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.3-mistral-7B-32K", "license:apache-2.0", "region:us" ]
null
2024-06-25T14:44:35Z
---
license: apache-2.0
base_model: cognitivecomputations/dolphin-2.9.3-mistral-7B-32K
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---

_Original Model Card_

# Dolphin 2.9.3 Mistral 7b v0.3 32k 🐬

Curated and trained by Eric Hartford and Cognitive Computations

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)

Discord: https://discord.gg/cognitivecomputations

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />

This model is based on mistralai/Mistral-7B-v0.3 and is governed by the Apache 2.0 license. The base model has 32k context, and our finetuning used a sequence length of 8192.

Dolphin 2.9.3 uses the ChatML prompt template format. Example:

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

## Usage

```bash
ollama run CognitiveComputations/dolphin-mistral-32k:7b-v2.9.3-q4_0
```

## Supported Tags

+ dolphin-mistral-32k:7b-v2.9.3-q2_k
+ dolphin-mistral-32k:7b-v2.9.3-q3_k
+ dolphin-mistral-32k:7b-v2.9.3-q4_0
+ dolphin-mistral-32k:7b-v2.9.3-q4_k_m
+ dolphin-mistral-32k:7b-v2.9.3-q4_k_s
+ dolphin-mistral-32k:7b-v2.9.3-q5_0
+ dolphin-mistral-32k:7b-v2.9.3-q5_k_m
+ dolphin-mistral-32k:7b-v2.9.3-q5_k_s
+ dolphin-mistral-32k:7b-v2.9.3-q6_k
+ dolphin-mistral-32k:7b-v2.9.3-q8_0
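A small, hedged sketch of driving the ChatML format above through the ollama Python client, using the q4_0 tag from the supported list. It assumes the `ollama` Python package is installed, a local ollama server is running, and the tag has already been pulled (for example via the `ollama run` command shown in the Usage section); the user question is made up.

```python
# Illustrative sketch only: ollama applies the ChatML template shown above itself,
# so only plain role/content messages are passed here.
import ollama

response = ollama.chat(
    model="CognitiveComputations/dolphin-mistral-32k:7b-v2.9.3-q4_0",  # tag from the list above
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Explain the ChatML format in one sentence."},
    ],
)
print(response["message"]["content"])
```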