Dataset columns:

| Column | Type | Range |
| --- | --- | --- |
| modelId | string | length 5 to 122 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string | 245 classes |
| tags | list | 1 to 4.05k items |
| pipeline_tag | string | 48 classes |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | length 1 to 901k |
Nhoodie/llama-3-sqrt-crocodile-v0.0A
Nhoodie
2024-04-29T00:03:24Z
681
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-28T14:27:34Z
--- license: other license_name: llama3 license_link: LICENSE tags: - moe - frankenmoe - merge - mergekit - lazymergekit base_model: [] --- # llama-3-sqrt-crocodile-v0.0A ## 🧩 Configuration-moe ```yaml base_model: llama-3-sqrt-crocodile-v0.0A/Uninstruct-Uncensored gate_mode: hidden dtype: bfloat16 experts: - source_model: llama-3-sqrt-crocodile-v0.0A/sqrt-talker positive_prompts: - "Uncensored, creative, configurable, adapative" - source_model: llama-3-sqrt-crocodile-v0.0A/the-operator positive_prompts: - "Function calling" - "Good at structured tasks" - "Programmatic instruction following" ``` ## 🧩 Configuration-mega ```yaml models: - model: Orenguteng/Lexi-Llama-3-8B-Uncensored parameters: weight: [0.2, 0.3, 0.4, 0.6] layer_range: [0, 32] - model: NousResearch/Meta-Llama-3-8B parameters: weight: [0.6, 0.2, 0.2, 0.1] layer_range: [0, 32] - model: NousResearch/Meta-Llama-3-8B-Instruct parameters: weight: [0.2, 0.3, 0.85, 0.3] layer_range: [0, 32] merge_method: dare_linear base_model: NousResearch/Meta-Llama-3-8B-Instruct dtype: bfloat16 name: Uninstruct-Uncensored --- models: - model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: weight: [0.25, 0.4, 0.35, 0.35] density: [0.3, 0.45, 0.2, 0.6] layer_range: [0, 32] - model: NousResearch/Meta-Llama-3-8B parameters: weight: [0.15, 0.25, 0.05, 0] density: [0.2, 0.3, 0.4, 0.1] - model: Undi95/Llama-3-Unholy-8B parameters: weight: [0.4, 0.25, 0.45, 0.35] density: [0.2, 0.15, 1.5, 0.1] layer_range: [0, 32] - model: Uninstruct-Uncensored parameters: weight: [0.3, 0.1, 0.25, 0.3] density: [0.3, 0.15, 2.5, 0.2] layer_range: [0, 32] merge_method: dare_ties base_model: Uninstruct-Uncensored dtype: bfloat16 name: augmented-dolphin-hap --- models: - model: vicgalle/Configurable-Llama-3-8B-v0.3 parameters: weight: [0.5, 0.3, 0.1] - model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode parameters: weight: 0.5 - model: Trelis/Meta-Llama-3-8B-Instruct-function-calling parameters: weight: 0.3 layer_range: [0, 32] - model: Rookie/Llama-3-8B-Instruct-Chinese parameters: weight: 0.2 layer_range: [0, 32] - model: Uninstruct-Uncensored parameters: weight: [0.7, 0.4, 0.25, 0.1] layer_range: [0, 32] merge_method: model_stock base_model: Uninstruct-Uncensored dtype: bfloat16 name: the-operator --- models: - model: vicgalle/Configurable-Llama-3-8B-v0.3 parameters: weight: 0.7 - model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode parameters: weight: 0.1 - model: Trelis/Meta-Llama-3-8B-Instruct-function-calling parameters: weight: 0.03 layer_range: [0, 32] - model: Rookie/Llama-3-8B-Instruct-Chinese parameters: weight: 0.07 layer_range: [0, 32] - model: Uninstruct-Uncensored parameters: weight: 0.1 layer_range: [0, 32] merge_method: model_stock base_model: Uninstruct-Uncensored dtype: bfloat16 name: her-calculator --- models: - model: her-calculator parameters: density: 0.7 # density gradient weight: [0.7, 0.5, 0.1, 0.8] - model: augmented-dolphin-hap parameters: weight: 0.7 merge_method: slerp base_model: her-calculator parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: float16 name: sqrt-talker ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "Nhoodie/llama-3-sqrt-crocodile-v0.0A" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": 
torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
shyamieee/Maverick-v1.0
shyamieee
2024-05-06T20:49:23Z
681
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-29T06:54:34Z
--- base_model: [] library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merged_folder This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using Mistral-7B-Instruct-v0.2 as a base. ### Models Merged The following models were included in the merge: * Kunoichi-DPO-v2-7B * NeuralOmniWestBeaglake-7B ### Configuration
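The card names the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, but the configuration block is cut off in this excerpt. As a rough illustration of what task arithmetic does (not the actual mergekit code or the Maverick-v1.0 settings), the sketch below adds each fine-tune's "task vector" (its delta from the base) back onto the base model; the Hub paths in `FINETUNES`, the assumed base path, and the 0.5 weight are placeholders.

```python
# Conceptual sketch of task-arithmetic merging; repo IDs and weight are placeholders.
import torch
from transformers import AutoModelForCausalLM

BASE = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed Hub path for the stated base
FINETUNES = ["<kunoichi-dpo-v2-7b>", "<neuralomniwestbeaglake-7b>"]  # placeholders
WEIGHT = 0.5  # illustrative scaling factor for each task vector

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
base_sd = base.state_dict()
merged_sd = {k: v.clone() for k, v in base_sd.items()}

for repo in FINETUNES:
    ft = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16)
    for k, v in ft.state_dict().items():
        # task vector = fine-tuned weights minus base weights, added back scaled
        merged_sd[k] += WEIGHT * (v - base_sd[k])

base.load_state_dict(merged_sd)
base.save_pretrained("merged_folder")
```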
flammenai/flammen23-mistral-7B
flammenai
2024-05-02T11:31:51Z
681
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dataset:nbeerbower/bible-dpo", "base_model:flammenai/flammen22X-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-02T01:42:29Z
--- library_name: transformers license: apache-2.0 base_model: - flammenai/flammen22X-mistral-7B datasets: - nbeerbower/bible-dpo --- ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) # flammen23-mistral-7B A Mistral 7B LLM built from merging pretrained models and finetuning on [nbeerbower/bible-dpo](https://huggingface.co/datasets/nbeerbower/bible-dpo). Flammen specializes in exceptional character roleplay, creative writing, and general intelligence ### Method Finetuned using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration System prompt, dataset formatting: ```python def chatml_format(example): # Format system system = "" systemMessage = "Recite the given verse from the Bible." system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n" # Format instruction prompt = "<|im_start|>user\nRecite " + example['citation'] + "<|im_end|>\n<|im_start|>assistant\n" # Format chosen answer chosen = example['text'] + "<|im_end|>\n" # Format rejected answer rejected = example['rejected'] + "<|im_end|>\n" return { "prompt": system + prompt, "chosen": chosen, "rejected": rejected, } dataset = load_dataset("nbeerbower/bible-dpo")['train'] # Save columns original_columns = dataset.column_names # Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "left" # Format dataset dataset = dataset.map( chatml_format, remove_columns=original_columns ) ``` LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=5000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, max_prompt_length=512, max_length=1536, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ```
xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B
xxx777xxxASD
2024-05-03T14:14:31Z
681
8
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-02T18:27:00Z
--- license: llama3 tags: - moe language: - en --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/m5urYkrpE5amrwHyaVwFM.png) > [!IMPORTANT] > [GGUF / Exl2 quants](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-v15-4x8b-6633f96430c0652a8ad527a3) Experimental RP-oriented MoE, the idea was to get a model that would be equal to or better than the Mixtral 8x7B and it's finetunes in RP/ERP tasks. Im not sure but it should be better than the [first version](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B) ### Llama 3 ChaoticSoliloquy-v1.5-4x8B ``` base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1 gate_mode: random dtype: bfloat16 experts_per_token: 2 experts: - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1 - source_model: openlynn_Llama-3-Soliloquy-8B - source_model: Sao10K_L3-Solana-8B-v1 ``` ## Models used - [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) - [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B) - [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1) ## Difference - Update from [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B) to [ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B) - Change [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B) to [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) ## Vision [llama3_mmproj](https://huggingface.co/ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png) ## Prompt format: Llama 3
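The card documents the expert configuration and states that the prompt format is Llama 3, but it does not include a usage snippet. Below is a minimal, unofficial sketch following the same pattern the other cards in this dump use, assuming the repo ships the standard Llama 3 chat template.

```python
# Minimal usage sketch (not from the original card); standard transformers pipeline.
import torch
import transformers
from transformers import AutoTokenizer

model = "xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The card states the prompt format is Llama 3, so the bundled chat template is applied.
messages = [{"role": "user", "content": "Introduce yourself in one short paragraph."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```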
KingNish/CodeMaster-v1-9b
KingNish
2024-05-05T17:52:33Z
681
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "KingNish/CodeMaster-v1-7b", "base_model:KingNish/CodeMaster-v1-7b", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-05T17:42:40Z
--- tags: - merge - mergekit - lazymergekit - KingNish/CodeMaster-v1-7b base_model: - KingNish/CodeMaster-v1-7b - KingNish/CodeMaster-v1-7b license: mit pipeline_tag: text-generation --- # CodeMaster-v1-9b CodeMaster-v1-9b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [KingNish/CodeMaster-v1-7b](https://huggingface.co/KingNish/CodeMaster-v1-7b) * [KingNish/CodeMaster-v1-7b](https://huggingface.co/KingNish/CodeMaster-v1-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: KingNish/CodeMaster-v1-7b layer_range: [0, 22] - sources: - model: KingNish/CodeMaster-v1-7b layer_range: [10, 32] merge_method: passthrough dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "KingNish/CodeMaster-v1-9b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=8192, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Sao10K/L3-Run1
Sao10K
2024-05-08T02:19:00Z
681
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-06T09:46:28Z
---
license: cc-by-nc-sa-4.0
language:
- en
---

Trained on filtered, but not fully cleaned, logs and instruct data. This is an experimental release with a number of problems I did not sort out:

- \</s> at the end of responses - I didn't update FastChat, my bad
- Occasional run-off generations at the end - data filter issue
- Issues with code blocks etc. at the end of generation - filter issue
- Weak at storywriting / narration - not built for that, though it could work with the right prompting

Recommended params: temperature of 1.32, min-p of 0.1 - temperature applied last.

Format: Vicuna / Alpaca / ChatML / whatever, lmao.

Stopping Strings: `['\n{{char}}', '\n{{user}}', '```']` - stops most junk like randomly inserted incomplete code blocks. [Replace the macros with your character's and user's names if using another frontend.]

A bit horny, but sticks to the character well. Meant for RP with character cards. Swipe a few times and you'll get soul.

![Data](https://huggingface.co/Sao10K/L3-Run1/resolve/main/tried.png)
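A hedged sketch of applying the recommended sampling settings with transformers follows; it assumes a recent transformers release where `generate` accepts `min_p` and `stop_strings`, and note that the exact "temperature last" sampler ordering depends on the front-end you use.

```python
# Hedged sketch of the recommended settings (temp 1.32, min-p 0.1, stopping strings).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sao10K/L3-Run1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Alpaca-style prompt, one of the formats the card mentions.
prompt = "### Instruction:\nWrite a short scene introduction.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.32,
    min_p=0.1,
    stop_strings=["\n{{char}}", "\n{{user}}", "```"],  # replace the macros with real names
    tokenizer=tokenizer,  # required when stop_strings is used
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```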
refine-ai/Power-WizardLM-2-13b
refine-ai
2024-05-08T16:30:44Z
681
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "WizardLM", "WizardLM2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-08T15:18:15Z
--- tags: - merge - mergekit - lazymergekit - WizardLM - WizardLM2 license: mit pipeline_tag: text-generation --- # Power-WizardLM-2-13b Power-WizardLM-2-13b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) * [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) ## 🧩 Configuration ```yaml slices: - sources: - layer_range: [0, 8] model: dreamgen/WizardLM-2-7B - sources: - layer_range: [4, 12] model: dreamgen/WizardLM-2-7B - sources: - layer_range: [8, 16] model: dreamgen/WizardLM-2-7B - sources: - layer_range: [12, 20] model: dreamgen/WizardLM-2-7B - sources: - layer_range: [16, 24] model: dreamgen/WizardLM-2-7B - sources: - layer_range: [20, 28] model: dreamgen/WizardLM-2-7B - sources: - layer_range: [24, 32] model: dreamgen/WizardLM-2-7B merge_method: passthrough dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "KingNish/Power-WizardLM-2-13b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Mr-Bhaskar/fbt-llama3-8b
Mr-Bhaskar
2024-05-12T15:03:38Z
681
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-10T17:47:22Z
--- library_name: transformers tags: - unsloth license: other --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
johnsutor/mixture-of-gemmas-ties
johnsutor
2024-05-28T02:01:41Z
681
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "mergekit", "merge", "arxiv:2306.01708", "base_model:google/gemma-7b", "base_model:google/codegemma-7b", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T03:22:01Z
--- base_model: - google/gemma-7b - google/codegemma-7b library_name: transformers tags: - mergekit - merge license: mit --- # ties This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [google/gemma-7b](https://huggingface.co/google/gemma-7b) as a base. ### Models Merged The following models were included in the merge: * [google/codegemma-7b](https://huggingface.co/google/codegemma-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: google/gemma-7b parameters: density: 0.5 weight: 0.5 - model: google/codegemma-7b parameters: density: 0.5 weight: 0.5 # - model: VAGOsolutions/SauerkrautLM-Gemma-7b # parameters: # density: 0.5 # weight: 0.5 merge_method: ties base_model: google/gemma-7b parameters: int8_mask: true dtype: bfloat16 ```
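The card includes the TIES configuration but no usage example. Here is a minimal, unofficial sketch in the same style as the other merge cards in this dump.

```python
# Minimal usage sketch (not from the original card); standard transformers pipeline.
import torch
import transformers

model = "johnsutor/mixture-of-gemmas-ties"
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
outputs = pipeline(
    "Write a Python function that reverses a string.",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(outputs[0]["generated_text"])
```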
saucam/Proteus-8B
saucam
2024-05-22T06:38:29Z
681
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "cognitivecomputations/dolphin-2.9-llama3-8b", "NousResearch/Hermes-2-Theta-Llama-3-8B", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "base_model:NousResearch/Hermes-2-Theta-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-16T05:49:53Z
--- tags: - merge - mergekit - cognitivecomputations/dolphin-2.9-llama3-8b - NousResearch/Hermes-2-Theta-Llama-3-8B base_model: - cognitivecomputations/dolphin-2.9-llama3-8b - NousResearch/Hermes-2-Theta-Llama-3-8B license: apache-2.0 --- ![](https://raw.githubusercontent.com/saucam/models/main/proteus.png) # 💧 Proteus-8B Proteus-8B is a merge of the following models using [Mergekit](https://github.com/arcee-ai/mergekit): * [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) * [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) ## 🧩 Configuration ```yamltokenizer_source: union tokenizer_source: union embed_slerp: true name: Proteus-8B models: - model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 0.5 weight: 0.4 - model: NousResearch/Hermes-2-Theta-Llama-3-8B parameters: density: 0.5 weight: 0.6 merge_method: dare_ties base_model: NousResearch/Hermes-2-Theta-Llama-3-8B parameters: int8_mask: true dtype: bfloat16 ``` ## Eval Results | Benchmark | Average | arc | gsm8k | hellaswag | mmlu | truthfulqa | winogrande | |-----------|---------:|----:|----:|---:|---------:|--------:|------:| | openllm | 70.67 | 63.48 | 78.77 | 82.94 | 64.71 | 56.71 | 77.43 | Detailed Results: https://github.com/saucam/model_evals/blob/main/saucam/Proteus-8B/README.md ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "saucam/Proteus-8B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
allknowingroger/Meme-7B-slerp
allknowingroger
2024-05-22T14:04:09Z
681
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "codingwithlewis/mistralmemes", "allknowingroger/MultiCalm-7B-slerp", "conversational", "base_model:codingwithlewis/mistralmemes", "base_model:allknowingroger/MultiCalm-7B-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-22T13:49:14Z
--- tags: - merge - mergekit - lazymergekit - codingwithlewis/mistralmemes - allknowingroger/MultiCalm-7B-slerp base_model: - codingwithlewis/mistralmemes - allknowingroger/MultiCalm-7B-slerp license: apache-2.0 --- # Meme-7B-slerp Meme-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [codingwithlewis/mistralmemes](https://huggingface.co/codingwithlewis/mistralmemes) * [allknowingroger/MultiCalm-7B-slerp](https://huggingface.co/allknowingroger/MultiCalm-7B-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: codingwithlewis/mistralmemes layer_range: [0, 32] - model: allknowingroger/MultiCalm-7B-slerp layer_range: [0, 32] merge_method: slerp base_model: codingwithlewis/mistralmemes parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Meme-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
shyamieee/Padma-SLM-7b-v3.0
shyamieee
2024-05-23T08:15:45Z
681
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2212.04089", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-23T06:50:42Z
--- base_model: [] library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # Padma_SLM_7b_v3_folder This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using bophades-mistral-truthy-DPO-7B as a base. ### Models Merged The following models were included in the merge: * multi_verse_model * Calme-7B-Instruct-v0.9 * YamshadowExperiment28-7B ### Configuration
iRyanBell/ARC1
iRyanBell
2024-05-30T03:10:56Z
681
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "trl", "orpo", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-30T02:18:14Z
--- library_name: transformers tags: - unsloth - trl - orpo license: llama3 --- # Model Card for ARC1 Self-instruction llama3-8b-instruct QLoRA fine-tune on generative abstraction & reasoning problem set. # Prompt Template Instruction ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` Chat ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ```
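Since the card only shows the raw prompt layout, the sketch below (an assumption, not from the card) builds the same Llama-3-style prompt via the tokenizer's chat template; the system and user messages are illustrative.

```python
# Sketch: building the documented prompt with the tokenizer's chat template
# (assumes the repo ships the standard Llama-3-Instruct template).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("iRyanBell/ARC1")
messages = [
    {"role": "system", "content": "You solve abstraction-and-reasoning puzzles step by step."},
    {"role": "user", "content": "Describe the transformation rule in the following grid pair: ..."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the <|start_header_id|>...<|eot_id|> layout shown above
```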
digital-rover/simboloai
digital-rover
2024-06-23T13:47:28Z
681
0
diffusers
[ "diffusers", "safetensors", "license:openrail++", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-06-23T06:37:40Z
--- license: openrail++ --- Simbolo's Happy Character Fine-Tuned Stable Diffusion Model <img src="https://huggingface.co/digital-rover/simboloai/blob/main/simbolo-character%20(5).png" width="512px"/> This model is a fine-tuned version of the Stable Diffusion model, specifically designed to generate images of "Happy," the representative character of Simbolo, an IT class in Myanmar. This project, collaboratively developed by four team members, aims to assist Simbolo's graphic designers in creating content. By using our model, designers can brainstorm ideas and work alongside AI to generate attractive designs for Simbolo's content, enhancing creativity and efficiency. Model Details Model Architecture: Stable Diffusion Base Model: Stable Diffusion v1.5 Fine-Tuning Method: DreamBooth Training Data: Custom dataset containing images of the "Happy" character ``` Happy from Simbolo, holding a glowing light bulb, representing a new idea. ``` <img src="https://huggingface.co/digital-rover/simboloai/blob/main/simbolo-character%20(5).png" width="512px"/>
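The card lists the architecture and an example prompt but no loading code. A minimal sketch is shown below, assuming the repo is a standard diffusers `StableDiffusionPipeline` checkpoint, as the tags indicate.

```python
# Minimal usage sketch (not from the original card).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digital-rover/simboloai",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Happy from Simbolo, holding a glowing light bulb, representing a new idea."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("happy_lightbulb.png")
```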
Helsinki-NLP/opus-mt-st-en
Helsinki-NLP
2023-08-16T12:04:36Z
680
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "st", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-st-en * source languages: st * target languages: en * OPUS readme: [st-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/st-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/st-en/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.st.en | 45.7 | 0.609 |
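The card gives benchmarks and download links but no usage snippet. A minimal sketch with the standard MarianMT classes follows; the Sesotho input sentence is illustrative only.

```python
# Minimal usage sketch (not from the original card); standard MarianMT usage.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-st-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Example Sesotho (st) input to translate into English (en).
batch = tokenizer(["Lefatshe la rona le lentle."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```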
huggingface/distilbert-base-uncased-finetuned-mnli
huggingface
2023-03-22T19:18:18Z
680
1
transformers
[ "transformers", "pytorch", "tf", "safetensors", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
Entry not found
mshamrai/yolov8x-visdrone
mshamrai
2023-05-29T12:41:03Z
680
6
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "visdrone", "uav", "license:openrail", "model-index", "region:us" ]
object-detection
2023-05-29T12:39:41Z
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - visdrone - uav library_name: ultralytics library_version: 8.0.43 inference: false model-index: - name: mshamrai/yolov8x-visdrone results: - task: type: object-detection metrics: - type: precision value: 0.47047 name: [email protected](box) license: openrail --- <div align="center"> <img width="640" alt="mshamrai/yolov8x-visdrone" src="https://huggingface.co/mshamrai/yolov8x-visdrone/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['pedestrian', 'people', 'bicycle', 'car', 'van', 'truck', 'tricycle', 'awning-tricycle', 'bus', 'motor'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.28 ultralytics==8.0.43 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('mshamrai/yolov8x-visdrone') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ```
mychen76/mistral7b_ocr_to_json_v1
mychen76
2024-04-19T14:39:10Z
680
64
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "tag1", "tag2", "dataset:dataset1", "dataset:dataset2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-05T17:53:47Z
--- thumbnail: "url to a thumbnail used in social sharing" tags: - tag1 - tag2 license: apache-2.0 datasets: - dataset1 - dataset2 metrics: - metric1 - metric2 --- Model Architecture: The mychen76/mistral7b_ocr_to_json_v1 (LLM) is a finetuned for convert OCR text to Json object task. this experimental model is based on Mistral-7B-v0.1 which outperforms Llama 2 13B on all benchmarks tested. Motivation: Currently, OCR engines are well tested on image detection and text recognition. LLM models are well trained for text processing and generation. Hence, leveraging outputs from OCR engines could save LLM training times for image-to-text use cases such as invoice or receipt image to JSON object conversion tasks. Model Usage: Take an invoice or receipt image, perform OCR on the image to get text boxes, and feed the outputs into LLM models to generate a well-formed receipt JSON object. ``` ### Instruction: You are POS receipt data expert, parse, detect, recognize and convert following receipt OCR image result into structure receipt data object. Don't make up value not in the Input. Output must be a well-formed JSON object.```json ### Input: [[[[184.0, 42.0], [278.0, 45.0], [278.0, 62.0], [183.0, 59.0]], ('BAJA FRESH', 0.9551795721054077)], [[[242.0, 113.0], [379.0, 118.0], [378.0, 136.0], [242.0, 131.0]], ('GENERAL MANAGER:', 0.9462024569511414)], [[[240.0, 133.0], [300.0, 135.0], [300.0, 153.0], [240.0, 151.0]], ('NORMAN', 0.9913229942321777)], [[[143.0, 166.0], [234.0, 171.0], [233.0, 192.0], [142.0, 187.0]], ('176 Rosa C', 0.9229503870010376)], [[[130.0, 207.0], [206.0, 210.0], [205.0, 231.0], [129.0, 228.0]], ('Chk 7545', 0.9349349141120911)], [[[283.0, 215.0], [431.0, 221.0], [431.0, 239.0], [282.0, 233.0]], ("Dec26'0707:26PM", 0.9290117025375366)], [[[440.0, 221.0], [489.0, 221.0], [489.0, 239.0], [440.0, 239.0]], ('Gst0', 0.9164432883262634)], [[[164.0, 252.0], [308.0, 256.0], [308.0, 276.0], [164.0, 272.0]], ('TAKE OUT', 0.9367803335189819)], [[[145.0, 274.0], [256.0, 278.0], [255.0, 296.0], [144.0, 292.0]], ('1 BAJA STEAK', 0.9167789816856384)], [[[423.0, 282.0], [465.0, 282.0], [465.0, 304.0], [423.0, 304.0]], ('6.95', 0.9965073466300964)], [[[180.0, 296.0], [292.0, 299.0], [292.0, 319.0], [179.0, 316.0]], ('NO GUACAMOLE', 0.9631438255310059)], [[[179.0, 317.0], [319.0, 322.0], [318.0, 343.0], [178.0, 338.0]], ('ENCHILADO STYLE', 0.9704310894012451)], [[[423.0, 325.0], [467.0, 325.0], [467.0, 347.0], [423.0, 347.0]], ('1.49', 0.988395631313324)], [[[159.0, 339.0], [201.0, 341.0], [200.0, 360.0], [158.0, 358.0]], ('CASH', 0.9982023239135742)], [[[417.0, 348.0], [466.0, 348.0], [466.0, 367.0], [417.0, 367.0]], ('20.00', 0.9921982884407043)], [[[156.0, 380.0], [200.0, 382.0], [198.0, 404.0], [155.0, 402.0]], ('FOOD', 0.9906187057495117)], [[[426.0, 390.0], [468.0, 390.0], [468.0, 409.0], [426.0, 409.0]], ('8.44', 0.9963030219078064)], [[[154.0, 402.0], [190.0, 405.0], [188.0, 427.0], [152.0, 424.0]], ('TAX', 0.9963871836662292)], [[[427.0, 413.0], [468.0, 413.0], [468.0, 432.0], [427.0, 432.0]], ('0.61', 0.9934712648391724)], [[[153.0, 427.0], [224.0, 429.0], [224.0, 450.0], [153.0, 448.0]], ('PAYMENT', 0.9948703646659851)], [[[428.0, 436.0], [470.0, 436.0], [470.0, 455.0], [428.0, 455.0]], ('9.05', 0.9961490631103516)], [[[152.0, 450.0], [251.0, 453.0], [250.0, 475.0], [152.0, 472.0]], ('Change Due', 0.9556287527084351)], [[[420.0, 458.0], [471.0, 458.0], [471.0, 480.0], [420.0, 480.0]], ('10.95', 0.997236430644989)], [[[209.0, 498.0], [382.0, 503.0], [381.0, 524.0], [208.0, 519.0]], 
('$2.000FF', 0.9757758378982544)], [[[169.0, 522.0], [422.0, 528.0], [421.0, 548.0], [169.0, 542.0]], ('NEXT PURCHASE', 0.962527871131897)], [[[167.0, 546.0], [365.0, 552.0], [365.0, 570.0], [167.0, 564.0]], ('CALL800 705 5754or', 0.926964521408081)], [[[146.0, 570.0], [416.0, 577.0], [415.0, 597.0], [146.0, 590.0]], ('Go www.mshare.net/bajafresh', 0.9759786128997803)], [[[147.0, 594.0], [356.0, 601.0], [356.0, 621.0], [146.0, 614.0]], ('Take our brief survey', 0.9390400648117065)], [[[143.0, 620.0], [410.0, 626.0], [409.0, 647.0], [143.0, 641.0]], ('When Prompted, Enter Store', 0.9385656118392944)], [[[142.0, 646.0], [408.0, 653.0], [407.0, 673.0], [142.0, 666.0]], ('Write down redemption code', 0.9536812901496887)], [[[141.0, 672.0], [409.0, 679.0], [408.0, 699.0], [141.0, 692.0]], ('Use this receipt as coupon', 0.9658807516098022)], [[[138.0, 697.0], [448.0, 701.0], [448.0, 725.0], [138.0, 721.0]], ('Discount on purchases of $5.00', 0.9624248743057251)], [[[139.0, 726.0], [466.0, 729.0], [466.0, 750.0], [139.0, 747.0]], ('or more,Offer expires in 30 day', 0.9263916611671448)], [[[137.0, 750.0], [459.0, 755.0], [459.0, 778.0], [137.0, 773.0]], ('Good at participating locations', 0.963909924030304)]] ### Output: ``` ```json { "receipt": { "store": "BAJA FRESH", "manager": "GENERAL MANAGER: NORMAN", "address": "176 Rosa C", "check": "Chk 7545", "date": "Dec26'0707:26PM", "tax": "Gst0", "total": "20.00", "payment": "CASH", "change": "0.61", "discount": "Discount on purchases of $5.00 or more,Offer expires in 30 day", "coupon": "Use this receipt as coupon", "survey": "Take our brief survey", "redemption": "Write down redemption code", "prompt": "When Prompted, Enter Store Write down redemption code Use this receipt as coupon", "items": [ { "name": "1 BAJA STEAK", "price": "6.95", "modifiers": [ "NO GUACAMOLE", "ENCHILADO STYLE" ] }, { "name": "TAKE OUT", "price": "1.49" } ] } } ``` # Load model directly ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("mychen76/mistral7b_ocr_to_json_v1") model = AutoModelForCausalLM.from_pretrained("mychen76/mistral7b_ocr_to_json_v1") prompt=f"""### Instruction: You are POS receipt data expert, parse, detect, recognize and convert following receipt OCR image result into structure receipt data object. Don't make up value not in the Input. 
Output must be a well-formed JSON object.```json ### Input: {receipt_boxes} ### Output: """ with torch.inference_mode(): inputs = tokenizer(prompt,return_tensors="pt",truncation=True).to(device) outputs = model.generate(**inputs, max_new_tokens=512) result_text = tokenizer.batch_decode(outputs)[0] print(result_text) ``` ## Get OCR Image boxes ```python from paddleocr import PaddleOCR, draw_ocr from ast import literal_eval import json paddleocr = PaddleOCR(lang="en",ocr_version="PP-OCRv4",show_log = False,use_gpu=True) def paddle_scan(paddleocr,img_path_or_nparray): result = paddleocr.ocr(img_path_or_nparray,cls=True) result = result[0] boxes = [line[0] for line in result] #boundign box txts = [line[1][0] for line in result] #raw text scores = [line[1][1] for line in result] # scores return txts, result # perform ocr scan receipt_texts, receipt_boxes = paddle_scan(paddleocr,receipt_image_array) print(50*"--","\ntext only:\n",receipt_texts) print(50*"--","\nocr boxes:\n",receipt_boxes) ``` # Load model in 4bits ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, BitsAndBytesConfig # quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True) bnb_config = BitsAndBytesConfig( llm_int8_enable_fp32_cpu_offload=True, load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16, ) # control model memory allocation between devices for low GPU resource (0,cpu) device_map = { "transformer.word_embeddings": 0, "transformer.word_embeddings_layernorm": 0, "lm_head": 0, "transformer.h": 0, "transformer.ln_f": 0, "model.embed_tokens": 0, "model.layers":0, "model.norm":0 } device = "cuda" if torch.cuda.is_available() else "cpu" # model use for inference model_id="mychen76/mistral7b_ocr_to_json_v1" model = AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, torch_dtype=torch.float16, quantization_config=bnb_config, device_map=device_map) # tokenizer tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True) ``` Dataset use for finetuning: mychen76/invoices-and-receipts_ocr_v1
maddes8cht/NousResearch-Nous-Capybara-3B-V1.9-gguf
maddes8cht
2023-11-15T20:22:46Z
680
3
null
[ "gguf", "sft", "StableLM", "eng", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "license:mit", "region:us" ]
null
2023-11-15T14:26:21Z
--- language: - eng tags: - sft - StableLM license: - mit datasets: - LDJnr/LessWrong-Amplify-Instruct - LDJnr/Pure-Dove - LDJnr/Verified-Camel --- [![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]() I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information # Nous-Capybara-3B-V1.9 - GGUF - Model creator: [NousResearch](https://huggingface.co/NousResearch) - Original model: [Nous-Capybara-3B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-3B-V1.9) # StableLM This is a Model based on StableLM. Stablelm is a familiy of Language Models by Stability AI. ## Note: Current (as of 2023-11-15) implementations of Llama.cpp only support GPU offloading up to 34 Layers with these StableLM Models. The model will crash immediately if -ngl is larger than 34. The model works fine however without any gpu acceleration. # About GGUF format `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library. A growing list of Software is using it and can therefore use this model. The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov # Quantization variants There is a bunch of quantized files available to cater to your specific needs. Here's how to choose the best option for you: # Legacy quants Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types. Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants. ## Note: Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in affected model descriptions. (This mainly refers to Falcon 7b and Starcoder models) # K-quants K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load. So, if possible, use K-quants. With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model two times the same question and you may encounter bigger quality differences. --- # Original Model Card: ## **Nous-Capybara-3B V1.9** The Capybara series is the first Nous collection of dataset and models made by fine-tuning mostly on data created by Nous in-house. We leverage our novel data synthesis technique called Amplify-instruct (Paper coming soon), the seed distribution and synthesis method are comprised of a synergistic combination of top performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct(WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others, all into one lean holistically formed methodology for the dataset and model. The seed instructions used for the start of synthesized conversations are largely based on highly datasets like Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from posts on the website LessWrong, as well as being supplemented with certain in-house multi-turn datasets like Dove(A successor to Puffin). 
While performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing current models, this is signficant when it comes to scaling implications for our next generation of models once we scale our novel syntheiss methods to significantly more examples. ## Process of creation and special thank yous! This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D.(LDJ) (Paper coming soon), as well as significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations. Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project. ## Thank you to those of you that have indirectly contributed! While most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds that are used to generate the multi-turn data as part of the Amplify-Instruct synthesis. The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project. Datasets in Blue are in-house curations that previously existed prior to Capybara. ![Capybara](https://i.imgur.com/yB58OoD.jpeg) ## Model Training Nous-Capybara 3B V1.9 is a new model trained for multiple epochs on a dataset of roughly 20,000 carefully curated conversational examples, most of which are comprised of entirely new in-house synthesized tokens. Additional data came from human curated CamelAI data, with the help of volunteers ranging from former Physics PhD's, Mathematicians, Biologists and more! ## Prompt Format The model follows ChatML prompt format ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user How are you<|im_end|> <|im_start|>assistant I am doing well!<|im_end|> ``` ## Mutli-Modality! - We currently have a Multi-modal model based on Capybara-3B-V1.9 ! https://huggingface.co/NousResearch/Obsidian-3B-V0.5 ## Notable Features: - Over 60% of the dataset is comprised of multi-turn conversations.(Most models are still only trained for single-turn conversations and no back and forths!) - Over 1,000 tokens average per conversation example! (Most models are trained on conversation data that is less than 300 tokens per example.) - Able to effectively do complex summaries of advanced topics and studies. (trained on hundreds of advanced difficult summary tasks developed in-house) - Ability to recall information upto late 2022 without internet. - Includes a portion of conversational data synthesized from less wrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts. ## Example Outputs!: ![Capybara](https://img001.prntscr.com/file/img001/T9yYxR1xQSaK_UGdy3t2Cw.png) ![Capybara](https://img001.prntscr.com/file/img001/DQXqmKbsQQOIcgny1eoGNA.png) ![Capybara](https://img001.prntscr.com/file/img001/85X3L9ZxTsOKo3fUQ7GRVA.png) ## Benchmarks! (Coming soon!) ## Future Changes This is a relatively early build amongst the grand plans for the future of Capybara! 
## Future model sizes Capybara V1.9 now currently has a 3B ad 7B size, and we plan to eventually have a 13B and 70B version in the future, as well as a potential 1B version based on phi-1.5 or Tiny Llama. ## How you can help! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! ## Dataset contamination. We have checked the capybara dataset for contamination for several of the most popular datasets and can confirm that there is no contaminaton found. We leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks, we found no exact matches, nor did we find any matches down to the 97% similarity level. The following are benchmarks we checked for contamination against our dataset: - HumanEval - AGIEval - TruthfulQA - MMLU - GPT4All ***End of original Model File*** --- ## Please consider to support my work **Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contribution to the continued availability of these kind of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community. <center> [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io) [![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911) [![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht) [![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht) [![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966) </center>
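As a rough, unofficial illustration of the notes above (GGUF files, ChatML prompt format, and the 34-layer GPU offload limit), here is a llama-cpp-python sketch; the quant filename is a placeholder, so pick an actual file from the repo.

```python
# Hedged sketch: running one of the GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./nous-capybara-3b-v1.9.Q5_K_M.gguf",  # placeholder filename
    n_ctx=4096,
    n_gpu_layers=34,  # per the note above, offloading more than 34 layers crashes
)

# ChatML prompt, matching the format documented in the card.
prompt = (
    "<|im_start|>system\nYou are a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nHow are you?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, temperature=0.7, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```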
TheBloke/MLewd-v2.4-13B-GGUF
TheBloke
2023-12-29T22:40:54Z
680
19
transformers
[ "transformers", "gguf", "llama", "not-for-all-audiences", "nsfw", "base_model:Undi95/MLewd-v2.4-13B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-12-29T15:58:51Z
--- base_model: Undi95/MLewd-v2.4-13B inference: false license: cc-by-nc-4.0 model_creator: Undi model_name: MLewd V2.4 13B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - not-for-all-audiences - nsfw --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MLewd V2.4 13B - GGUF - Model creator: [Undi](https://huggingface.co/Undi95) - Original model: [MLewd V2.4 13B](https://huggingface.co/Undi95/MLewd-v2.4-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [Undi's MLewd V2.4 13B](https://huggingface.co/Undi95/MLewd-v2.4-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MLewd-v2.4-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF) * [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/MLewd-v2.4-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's MLewd V2.4 13B](https://huggingface.co/Undi95/MLewd-v2.4-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mlewd-v2.4-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [mlewd-v2.4-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [mlewd-v2.4-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [mlewd-v2.4-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [mlewd-v2.4-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mlewd-v2.4-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [mlewd-v2.4-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [mlewd-v2.4-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mlewd-v2.4-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [mlewd-v2.4-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [mlewd-v2.4-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [mlewd-v2.4-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MLewd-v2.4-13B-GGUF/blob/main/mlewd-v2.4-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MLewd-v2.4-13B-GGUF and below it, a specific filename to download, such as: mlewd-v2.4-13b.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MLewd-v2.4-13B-GGUF mlewd-v2.4-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MLewd-v2.4-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MLewd-v2.4-13B-GGUF mlewd-v2.4-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m mlewd-v2.4-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./mlewd-v2.4-13b.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./mlewd-v2.4-13b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Undi's MLewd V2.4 13B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/_fVY7xvQ9tdoZ0nVDu_WB.png) THIS MODEL IS MADE FOR LEWD. SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED. Added the "magic touch" of MythoMax/Huginn/you name it. In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used; it is recommended to read its documentation. <!-- description start --> ## Description This repo contains fp16 files of MLewd-2.4-13B, a very hot and lewd model based on ReMM (SLERP). <!-- description end --> <!-- description start --> ## Models and loras used - Undi95/ReMM-S-Light (base/private) - Undi95/CreativeEngine - Brouz/Slerpeno - The-Face-Of-Goonery/Huginn-v3-13b - zattio770/120-Days-of-LORA-v2-13B - PygmalionAI/pygmalion-2-13b - Undi95/StoryTelling - TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter - nRuaif/Kimiko-v2-13B - The-Face-Of-Goonery/Huginn-13b-FP16 - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## LimaRP v3 usage and suggested settings ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png) You can follow these instruction format settings in SillyTavern. Replace "tiny" with your desired response length: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png) Special thanks to Sushi and Shena ♥ | I love U hh_aa. If you want to support me, you can [here](https://ko-fi.com/undiai).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__MLewd-v2.4-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 54.65 | | ARC (25-shot) | 61.69 | | HellaSwag (10-shot) | 83.83 | | MMLU (5-shot) | 55.1 | | TruthfulQA (0-shot) | 53.34 | | Winogrande (5-shot) | 74.51 | | GSM8K (5-shot) | 9.78 | | DROP (3-shot) | 44.33 | <!-- original-model-card end -->
mlabonne/chesspythia-70m
mlabonne
2024-01-12T00:19:38Z
680
2
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/pythia-70m-deduped", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-12T00:19:06Z
--- license: apache-2.0 base_model: EleutherAI/pythia-70m-deduped tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2691 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 100 - eval_batch_size: 100 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.852 | 0.1 | 1 | 3.1074 | | 3.0923 | 0.2 | 2 | 2.3879 | | 2.3371 | 0.3 | 3 | 2.1025 | | 2.1166 | 0.4 | 4 | 1.9761 | | 2.0538 | 0.5 | 5 | 1.8446 | | 1.8972 | 0.6 | 6 | 1.7470 | | 1.8356 | 0.7 | 7 | 1.6615 | | 1.702 | 0.8 | 8 | 1.6187 | | 1.6907 | 0.9 | 9 | 1.6626 | | 1.5877 | 1.0 | 10 | 1.6192 | | 1.6332 | 1.1 | 11 | 1.5464 | | 1.4906 | 1.2 | 12 | 1.5091 | | 1.5267 | 1.3 | 13 | 1.4850 | | 1.4857 | 1.4 | 14 | 1.4572 | | 1.4247 | 1.5 | 15 | 1.4319 | | 1.4815 | 1.6 | 16 | 1.4207 | | 1.3584 | 1.7 | 17 | 1.4092 | | 1.4812 | 1.8 | 18 | 1.4196 | | 1.4381 | 1.9 | 19 | 1.4021 | | 1.453 | 2.0 | 20 | 1.4013 | | 1.3468 | 2.1 | 21 | 1.3781 | | 1.3327 | 2.2 | 22 | 1.3598 | | 1.3623 | 2.3 | 23 | 1.3516 | | 1.2876 | 2.4 | 24 | 1.3384 | | 1.374 | 2.5 | 25 | 1.3366 | | 1.3863 | 2.6 | 26 | 1.3265 | | 1.3327 | 2.7 | 27 | 1.3186 | | 1.2886 | 2.8 | 28 | 1.3130 | | 1.3842 | 2.9 | 29 | 1.3024 | | 1.3105 | 3.0 | 30 | 1.2986 | | 1.2331 | 3.1 | 31 | 1.2966 | | 1.3227 | 3.2 | 32 | 1.2954 | | 1.2923 | 3.3 | 33 | 1.2928 | | 1.2976 | 3.4 | 34 | 1.2901 | | 1.3207 | 3.5 | 35 | 1.2879 | | 1.2455 | 3.6 | 36 | 1.2834 | | 1.2546 | 3.7 | 37 | 1.2779 | | 1.2999 | 3.8 | 38 | 1.2744 | | 1.2484 | 3.9 | 39 | 1.2723 | | 1.281 | 4.0 | 40 | 1.2720 | | 1.2134 | 4.1 | 41 | 1.2722 | | 1.214 | 4.2 | 42 | 1.2721 | | 1.3031 | 4.3 | 43 | 1.2715 | | 1.2174 | 4.4 | 44 | 1.2708 | | 1.2359 | 4.5 | 45 | 1.2703 | | 1.2578 | 4.6 | 46 | 1.2699 | | 1.2815 | 4.7 | 47 | 1.2695 | | 1.2866 | 4.8 | 48 | 1.2693 | | 1.2878 | 4.9 | 49 | 1.2691 | | 1.2214 | 5.0 | 50 | 1.2691 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
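As a concrete illustration of the hyperparameters listed above, here is a minimal, hypothetical sketch of how such a run could be wired up with the Hugging Face `Trainer`. It is not the original training script: the actual dataset is undocumented, so the chess-notation strings below are placeholders, and the per-step evaluation cadence is only inferred from the results table.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "EleutherAI/pythia-70m-deduped"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizer has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder corpus: the real training data is not documented in this card.
texts = ["1. e4 e5 2. Nf3 Nc6 3. Bb5 a6", "1. d4 d5 2. c4 e6 3. Nc3 Nf6"] * 100
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
splits = dataset.train_test_split(test_size=0.1, seed=42)

# Hyperparameters copied from the list above; Adam betas/epsilon are the Trainer defaults.
args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-5,
    per_device_train_batch_size=100,
    per_device_eval_batch_size=100,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5,
    evaluation_strategy="steps",
    eval_steps=1,
    logging_steps=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```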
Pristinenlp/alime-embedding-large-zh
Pristinenlp
2024-01-22T03:13:50Z
680
4
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "mteb", "model-index", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2024-01-22T03:05:54Z
--- tags: - mteb model-index: - name: alime-embedding-large-zh results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 49.6479989785073 - type: cos_sim_spearman value: 54.733173049795425 - type: euclidean_pearson value: 53.06330391299694 - type: euclidean_spearman value: 54.73321325021156 - type: manhattan_pearson value: 53.0477915350307 - type: manhattan_spearman value: 54.728508847750845 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 48.658812679136325 - type: cos_sim_spearman value: 55.125070901329146 - type: euclidean_pearson value: 55.73373519622172 - type: euclidean_spearman value: 55.12506864911728 - type: manhattan_pearson value: 55.71155132206361 - type: manhattan_spearman value: 55.121598723227905 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.95 - type: f1 value: 45.34383964066362 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 62.92731050834033 - type: cos_sim_spearman value: 64.8881453551134 - type: euclidean_pearson value: 63.31447523186855 - type: euclidean_spearman value: 64.88814189042776 - type: manhattan_pearson value: 63.222442228527996 - type: manhattan_spearman value: 64.79818263591122 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 42.518811360488925 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 39.72890397315954 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 86.51852576014969 - type: mrr value: 89.02047619047619 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 87.11415162833914 - type: mrr value: 89.6338492063492 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 24.883 - type: map_at_10 value: 37.246 - type: map_at_100 value: 39.11 - type: map_at_1000 value: 39.222 - type: map_at_3 value: 32.956 - type: map_at_5 value: 35.411 - type: mrr_at_1 value: 37.834 - type: mrr_at_10 value: 46.031 - type: mrr_at_100 value: 47.033 - type: mrr_at_1000 value: 47.077000000000005 - type: mrr_at_3 value: 43.415 - type: mrr_at_5 value: 44.938 - type: ndcg_at_1 value: 37.834 - type: ndcg_at_10 value: 43.928 - type: ndcg_at_100 value: 51.312999999999995 - type: ndcg_at_1000 value: 53.23 - type: ndcg_at_3 value: 38.397 - type: ndcg_at_5 value: 40.848 - type: precision_at_1 value: 37.834 - type: precision_at_10 value: 9.782 - type: precision_at_100 value: 1.583 - type: precision_at_1000 value: 0.183 - type: precision_at_3 value: 21.664 - type: precision_at_5 value: 15.934000000000001 - type: recall_at_1 value: 24.883 - type: recall_at_10 value: 54.911 - type: recall_at_100 value: 85.419 - type: recall_at_1000 value: 98.16 - type: recall_at_3 value: 38.416 - 
type: recall_at_5 value: 45.778 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 82.5616355983163 - type: cos_sim_ap value: 89.3612977679186 - type: cos_sim_f1 value: 83.93428161870108 - type: cos_sim_precision value: 79.42404006677796 - type: cos_sim_recall value: 88.98760813654431 - type: dot_accuracy value: 82.5616355983163 - type: dot_ap value: 89.38168095374776 - type: dot_f1 value: 83.93428161870108 - type: dot_precision value: 79.42404006677796 - type: dot_recall value: 88.98760813654431 - type: euclidean_accuracy value: 82.5616355983163 - type: euclidean_ap value: 89.36129603693611 - type: euclidean_f1 value: 83.93428161870108 - type: euclidean_precision value: 79.42404006677796 - type: euclidean_recall value: 88.98760813654431 - type: manhattan_accuracy value: 82.42934455802767 - type: manhattan_ap value: 89.36577661305246 - type: manhattan_f1 value: 83.94765539803707 - type: manhattan_precision value: 78.66339668914776 - type: manhattan_recall value: 89.99298573766659 - type: max_accuracy value: 82.5616355983163 - type: max_ap value: 89.38168095374776 - type: max_f1 value: 83.94765539803707 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 77.608 - type: map_at_10 value: 85.1 - type: map_at_100 value: 85.215 - type: map_at_1000 value: 85.217 - type: map_at_3 value: 83.97 - type: map_at_5 value: 84.638 - type: mrr_at_1 value: 77.97699999999999 - type: mrr_at_10 value: 85.173 - type: mrr_at_100 value: 85.28 - type: mrr_at_1000 value: 85.282 - type: mrr_at_3 value: 84.089 - type: mrr_at_5 value: 84.726 - type: ndcg_at_1 value: 77.871 - type: ndcg_at_10 value: 88.141 - type: ndcg_at_100 value: 88.612 - type: ndcg_at_1000 value: 88.68 - type: ndcg_at_3 value: 85.9 - type: ndcg_at_5 value: 87.06 - type: precision_at_1 value: 77.871 - type: precision_at_10 value: 9.841999999999999 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 30.698999999999998 - type: precision_at_5 value: 19.009 - type: recall_at_1 value: 77.608 - type: recall_at_10 value: 97.418 - type: recall_at_100 value: 99.473 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 91.307 - type: recall_at_5 value: 94.125 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 26.104 - type: map_at_10 value: 78.62 - type: map_at_100 value: 81.417 - type: map_at_1000 value: 81.46600000000001 - type: map_at_3 value: 55.077 - type: map_at_5 value: 69.18900000000001 - type: mrr_at_1 value: 90.55 - type: mrr_at_10 value: 93.42200000000001 - type: mrr_at_100 value: 93.46900000000001 - type: mrr_at_1000 value: 93.472 - type: mrr_at_3 value: 93.108 - type: mrr_at_5 value: 93.318 - type: ndcg_at_1 value: 90.55 - type: ndcg_at_10 value: 86.227 - type: ndcg_at_100 value: 89.201 - type: ndcg_at_1000 value: 89.655 - type: ndcg_at_3 value: 85.89099999999999 - type: ndcg_at_5 value: 84.443 - type: precision_at_1 value: 90.55 - type: precision_at_10 value: 40.915 - type: precision_at_100 value: 4.749 - type: precision_at_1000 value: 0.486 - type: precision_at_3 value: 76.9 - type: precision_at_5 value: 64.56 - type: recall_at_1 value: 26.104 - type: recall_at_10 value: 86.924 - type: recall_at_100 value: 96.52 - type: recall_at_1000 value: 
98.83800000000001 - type: recall_at_3 value: 57.196999999999996 - type: recall_at_5 value: 73.595 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 51.9 - type: map_at_10 value: 62.446 - type: map_at_100 value: 62.922 - type: map_at_1000 value: 62.934999999999995 - type: map_at_3 value: 59.933 - type: map_at_5 value: 61.548 - type: mrr_at_1 value: 51.9 - type: mrr_at_10 value: 62.446 - type: mrr_at_100 value: 62.922 - type: mrr_at_1000 value: 62.934999999999995 - type: mrr_at_3 value: 59.933 - type: mrr_at_5 value: 61.548 - type: ndcg_at_1 value: 51.9 - type: ndcg_at_10 value: 67.561 - type: ndcg_at_100 value: 69.87400000000001 - type: ndcg_at_1000 value: 70.19800000000001 - type: ndcg_at_3 value: 62.474 - type: ndcg_at_5 value: 65.391 - type: precision_at_1 value: 51.9 - type: precision_at_10 value: 8.36 - type: precision_at_100 value: 0.9440000000000001 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 23.267 - type: precision_at_5 value: 15.379999999999999 - type: recall_at_1 value: 51.9 - type: recall_at_10 value: 83.6 - type: recall_at_100 value: 94.39999999999999 - type: recall_at_1000 value: 96.89999999999999 - type: recall_at_3 value: 69.8 - type: recall_at_5 value: 76.9 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 49.672951135051946 - type: f1 value: 38.246634605142084 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 86.52908067542214 - type: ap value: 55.415146961759135 - type: f1 value: 81.38343036361825 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 70.15572724302896 - type: cos_sim_spearman value: 75.11630463239744 - type: euclidean_pearson value: 74.2927184018677 - type: euclidean_spearman value: 75.11630463089752 - type: manhattan_pearson value: 74.27724224882166 - type: manhattan_spearman value: 75.10012699894408 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 30.62934327678744 - type: mrr value: 29.48730158730159 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 65.33 - type: map_at_10 value: 74.524 - type: map_at_100 value: 74.851 - type: map_at_1000 value: 74.86500000000001 - type: map_at_3 value: 72.748 - type: map_at_5 value: 73.896 - type: mrr_at_1 value: 67.593 - type: mrr_at_10 value: 75.19 - type: mrr_at_100 value: 75.472 - type: mrr_at_1000 value: 75.484 - type: mrr_at_3 value: 73.634 - type: mrr_at_5 value: 74.638 - type: ndcg_at_1 value: 67.593 - type: ndcg_at_10 value: 78.254 - type: ndcg_at_100 value: 79.727 - type: ndcg_at_1000 value: 80.09100000000001 - type: ndcg_at_3 value: 74.892 - type: ndcg_at_5 value: 76.835 - type: precision_at_1 value: 67.593 - type: precision_at_10 value: 9.46 - type: precision_at_100 value: 1.02 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 28.227999999999998 - type: precision_at_5 value: 17.965999999999998 - type: recall_at_1 value: 65.33 - type: recall_at_10 value: 89.048 - type: recall_at_100 value: 95.732 - type: 
recall_at_1000 value: 98.598 - type: recall_at_3 value: 80.209 - type: recall_at_5 value: 84.824 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.38937457969065 - type: f1 value: 70.87692475465195 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 76.04236718224612 - type: f1 value: 75.52425703483891 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 53.1 - type: map_at_10 value: 60.24 - type: map_at_100 value: 60.781 - type: map_at_1000 value: 60.81999999999999 - type: map_at_3 value: 58.733000000000004 - type: map_at_5 value: 59.618 - type: mrr_at_1 value: 53.0 - type: mrr_at_10 value: 60.195 - type: mrr_at_100 value: 60.736000000000004 - type: mrr_at_1000 value: 60.775 - type: mrr_at_3 value: 58.68299999999999 - type: mrr_at_5 value: 59.573 - type: ndcg_at_1 value: 53.1 - type: ndcg_at_10 value: 63.568999999999996 - type: ndcg_at_100 value: 66.401 - type: ndcg_at_1000 value: 67.597 - type: ndcg_at_3 value: 60.455000000000005 - type: ndcg_at_5 value: 62.05500000000001 - type: precision_at_1 value: 53.1 - type: precision_at_10 value: 7.3999999999999995 - type: precision_at_100 value: 0.877 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 21.8 - type: precision_at_5 value: 13.86 - type: recall_at_1 value: 53.1 - type: recall_at_10 value: 74.0 - type: recall_at_100 value: 87.7 - type: recall_at_1000 value: 97.39999999999999 - type: recall_at_3 value: 65.4 - type: recall_at_5 value: 69.3 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 76.40333333333332 - type: f1 value: 76.40924131087777 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 77.15213860314023 - type: cos_sim_ap value: 79.30594584166899 - type: cos_sim_f1 value: 80.25889967637539 - type: cos_sim_precision value: 71.38157894736842 - type: cos_sim_recall value: 91.65786694825766 - type: dot_accuracy value: 77.15213860314023 - type: dot_ap value: 79.30594584166899 - type: dot_f1 value: 80.25889967637539 - type: dot_precision value: 71.38157894736842 - type: dot_recall value: 91.65786694825766 - type: euclidean_accuracy value: 77.15213860314023 - type: euclidean_ap value: 79.30594584166899 - type: euclidean_f1 value: 80.25889967637539 - type: euclidean_precision value: 71.38157894736842 - type: euclidean_recall value: 91.65786694825766 - type: manhattan_accuracy value: 77.36870600974554 - type: manhattan_ap value: 79.23401219102254 - type: manhattan_f1 value: 80.44901777362021 - type: manhattan_precision value: 72.20822837951302 - type: manhattan_recall value: 90.8130939809926 - type: max_accuracy value: 77.36870600974554 - type: max_ap value: 79.30594584166899 - type: max_f1 value: 80.44901777362021 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy 
value: 92.6 - type: ap value: 90.78779333103819 - type: f1 value: 92.59253441654515 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 34.4442917065113 - type: cos_sim_spearman value: 37.93070836936766 - type: euclidean_pearson value: 38.35141108502335 - type: euclidean_spearman value: 37.936378767247106 - type: manhattan_pearson value: 38.357078125497566 - type: manhattan_spearman value: 37.94413026678537 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 32.84777948741198 - type: cos_sim_spearman value: 34.212129449696285 - type: euclidean_pearson value: 32.69161407750465 - type: euclidean_spearman value: 34.21178008084197 - type: manhattan_pearson value: 32.675418316752506 - type: manhattan_spearman value: 34.178590557249 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.65903821549742 - type: cos_sim_spearman value: 64.54376284777354 - type: euclidean_pearson value: 63.70022677799055 - type: euclidean_spearman value: 64.54376284777354 - type: manhattan_pearson value: 64.46392290759724 - type: manhattan_spearman value: 65.2496975447815 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 80.05773088991484 - type: cos_sim_spearman value: 80.71550237522008 - type: euclidean_pearson value: 80.31115977415573 - type: euclidean_spearman value: 80.71510951779365 - type: manhattan_pearson value: 80.25235514937249 - type: manhattan_spearman value: 80.65958309383224 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 66.18255262304848 - type: mrr value: 75.95393252087565 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 28.651 - type: map_at_10 value: 76.281 - type: map_at_100 value: 80.018 - type: map_at_1000 value: 80.098 - type: map_at_3 value: 54.783 - type: map_at_5 value: 66.508 - type: mrr_at_1 value: 90.99199999999999 - type: mrr_at_10 value: 93.812 - type: mrr_at_100 value: 93.87100000000001 - type: mrr_at_1000 value: 93.87299999999999 - type: mrr_at_3 value: 93.415 - type: mrr_at_5 value: 93.685 - type: ndcg_at_1 value: 90.99199999999999 - type: ndcg_at_10 value: 84.57900000000001 - type: ndcg_at_100 value: 88.474 - type: ndcg_at_1000 value: 89.172 - type: ndcg_at_3 value: 86.56099999999999 - type: ndcg_at_5 value: 84.811 - type: precision_at_1 value: 90.99199999999999 - type: precision_at_10 value: 40.969 - type: precision_at_100 value: 4.97 - type: precision_at_1000 value: 0.515 - type: precision_at_3 value: 74.734 - type: precision_at_5 value: 61.980999999999995 - type: recall_at_1 value: 28.651 - type: recall_at_10 value: 83.321 - type: recall_at_100 value: 95.498 - type: recall_at_1000 value: 98.759 - type: recall_at_3 value: 56.708000000000006 - type: recall_at_5 value: 70.25200000000001 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: None metrics: - type: accuracy value: 52.037 - type: f1 value: 50.3832093595745 - task: type: Clustering dataset: type: 
C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 70.09793315196697 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 63.66930246094367 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 60.4 - type: map_at_10 value: 69.878 - type: map_at_100 value: 70.285 - type: map_at_1000 value: 70.295 - type: map_at_3 value: 68.033 - type: map_at_5 value: 69.233 - type: mrr_at_1 value: 60.3 - type: mrr_at_10 value: 69.828 - type: mrr_at_100 value: 70.235 - type: mrr_at_1000 value: 70.245 - type: mrr_at_3 value: 67.983 - type: mrr_at_5 value: 69.18299999999999 - type: ndcg_at_1 value: 60.4 - type: ndcg_at_10 value: 74.155 - type: ndcg_at_100 value: 76.173 - type: ndcg_at_1000 value: 76.44800000000001 - type: ndcg_at_3 value: 70.44500000000001 - type: ndcg_at_5 value: 72.61800000000001 - type: precision_at_1 value: 60.4 - type: precision_at_10 value: 8.74 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 25.8 - type: precision_at_5 value: 16.54 - type: recall_at_1 value: 60.4 - type: recall_at_10 value: 87.4 - type: recall_at_100 value: 96.89999999999999 - type: recall_at_1000 value: 99.1 - type: recall_at_3 value: 77.4 - type: recall_at_5 value: 82.69999999999999 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 88.49000000000001 - type: ap value: 73.5441395538586 - type: f1 value: 86.88114969870975 --- # alime-embedding-large-zh The alime embedding model. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["西湖在哪?", "西湖风景名胜区位于浙江省杭州市"] model = SentenceTransformer('Pristinenlp/alime-embedding-large-zh') embeddings = model.encode(sentences, normalize_embeddings=True) print(embeddings) ```
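Building on the snippet above, here is a minimal retrieval-style sketch: because the embeddings are L2-normalized (`normalize_embeddings=True`), the dot product equals cosine similarity, which is the score the STS and retrieval benchmarks above are based on. The second passage is an illustrative distractor, not from the original card.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Pristinenlp/alime-embedding-large-zh")

queries = ["西湖在哪?"]
passages = ["西湖风景名胜区位于浙江省杭州市", "故宫位于北京市中心"]  # second passage is a distractor

# With normalized embeddings, dot product == cosine similarity.
q_emb = model.encode(queries, normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)

scores = util.dot_score(q_emb, p_emb)  # shape: (num_queries, num_passages)
print(scores)  # the matching passage should receive the higher score
```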
dranger003/starcoder2-15b-GGUF
dranger003
2024-03-02T11:06:34Z
680
10
gguf
[ "gguf", "text-generation", "license:bigcode-openrail-m", "region:us" ]
text-generation
2024-02-29T13:21:11Z
--- license: bigcode-openrail-m pipeline_tag: text-generation library_name: gguf --- GGUF quants for https://huggingface.co/bigcode/starcoder2-15b > StarCoder2-15B model is a 15B parameter model trained on 600+ programming languages from The Stack v2, with opt-out requests excluded. The model uses Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 4+ trillion tokens. > The model was trained on GitHub code as well as additional selected data sources such as Arxiv and Wikipedia. As such it is not an instruction model and commands like "Write a function that computes the square root." do not work well. | Layers | Context | Template (None/Base Model) | | --- | --- | --- | | <pre>40</pre> | <pre>16384</pre> | <pre>{prompt}</pre> |
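As a hedged illustration of the base-model `{prompt}` template above, here is a minimal completion-style sketch using `llama-cpp-python`; the quant filename is a placeholder for whichever GGUF file you download from this repo, and offloading all 40 layers assumes you have enough VRAM.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./starcoder2-15b.Q4_K_M.gguf",  # placeholder: use the file you downloaded
    n_ctx=16384,      # the model supports a 16,384-token context window
    n_gpu_layers=40,  # offload all 40 layers if VRAM allows; lower this or use 0 otherwise
)

# StarCoder2-15B is a base (completion) model, so prompt it with code to continue,
# not with natural-language instructions.
out = llm("def fibonacci(n):", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```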
misri/jibMixRealisticXL_v90BetterBodies
misri
2024-03-03T12:13:10Z
680
0
diffusers
[ "diffusers", "safetensors", "license:unknown", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-03T11:16:51Z
--- license: unknown ---
allknowingroger/Limmy-phi2-slerp
allknowingroger
2024-04-10T19:10:05Z
680
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/Phigments12", "avinash31d/phi-2-slerp", "base_model:liminerity/Phigments12", "base_model:avinash31d/phi-2-slerp", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-24T13:54:56Z
--- tags: - merge - mergekit - lazymergekit - liminerity/Phigments12 - avinash31d/phi-2-slerp base_model: - liminerity/Phigments12 - avinash31d/phi-2-slerp license: apache-2.0 --- # Limmy-phi2-slerp Limmy-phi2-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [liminerity/Phigments12](https://huggingface.co/liminerity/Phigments12) * [avinash31d/phi-2-slerp](https://huggingface.co/avinash31d/phi-2-slerp) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/Phigments12 layer_range: [0, 32] - model: avinash31d/phi-2-slerp layer_range: [0, 32] merge_method: slerp base_model: liminerity/Phigments12 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Limmy-phi2-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
allknowingroger/LeeMerge-7B-slerp
allknowingroger
2024-04-10T19:07:35Z
680
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Gille/StrangeMerges_32-7B-slerp", "chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "base_model:Gille/StrangeMerges_32-7B-slerp", "base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-25T06:44:24Z
--- tags: - merge - mergekit - lazymergekit - Gille/StrangeMerges_32-7B-slerp - chihoonlee10/T3Q-Mistral-Orca-Math-DPO base_model: - Gille/StrangeMerges_32-7B-slerp - chihoonlee10/T3Q-Mistral-Orca-Math-DPO license: apache-2.0 --- # LeeMerge-7B-slerp LeeMerge-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Gille/StrangeMerges_32-7B-slerp](https://huggingface.co/Gille/StrangeMerges_32-7B-slerp) * [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO) ## 🧩 Configuration ```yaml slices: - sources: - model: Gille/StrangeMerges_32-7B-slerp layer_range: [0, 32] - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO layer_range: [0, 32] merge_method: slerp base_model: Gille/StrangeMerges_32-7B-slerp parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 0.5, 0.5, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0.5, 0.5, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/LeeMerge-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
allknowingroger/Neurallaymons-7B-slerp
allknowingroger
2024-04-10T18:46:52Z
680
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Kukedlc/Neural-4-Maths-7b", "ABX-AI/Starfinite-Laymons-7B", "base_model:Kukedlc/Neural-4-Maths-7b", "base_model:ABX-AI/Starfinite-Laymons-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-30T06:38:05Z
--- tags: - merge - mergekit - lazymergekit - Kukedlc/Neural-4-Maths-7b - ABX-AI/Starfinite-Laymons-7B base_model: - Kukedlc/Neural-4-Maths-7b - ABX-AI/Starfinite-Laymons-7B license: apache-2.0 --- # Neurallaymons-7B-slerp Neurallaymons-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/Neural-4-Maths-7b](https://huggingface.co/Kukedlc/Neural-4-Maths-7b) * [ABX-AI/Starfinite-Laymons-7B](https://huggingface.co/ABX-AI/Starfinite-Laymons-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Kukedlc/Neural-4-Maths-7b layer_range: [0, 32] - model: ABX-AI/Starfinite-Laymons-7B layer_range: [0, 32] merge_method: slerp base_model: Kukedlc/Neural-4-Maths-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Neurallaymons-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
abhishek/autotrain-mixtral7x8b-math
abhishek
2024-04-03T14:17:40Z
680
5
transformers
[ "transformers", "tensorboard", "safetensors", "mixtral", "text-generation", "autotrain", "text-generation-inference", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-03T12:29:13Z
--- tags: - autotrain - text-generation-inference - text-generation library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
allknowingroger/Synthiallamaguanco-7B-slerp
allknowingroger
2024-04-10T18:27:11Z
680
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Lazycuber/L2-7b-Synthia-OH2-test-v1.5", "ryanyeo/ryankirnect-llama-2-7b-miniguanaco", "base_model:Lazycuber/L2-7b-Synthia-OH2-test-v1.5", "base_model:ryanyeo/ryankirnect-llama-2-7b-miniguanaco", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-05T07:59:11Z
--- tags: - merge - mergekit - lazymergekit - Lazycuber/L2-7b-Synthia-OH2-test-v1.5 - ryanyeo/ryankirnect-llama-2-7b-miniguanaco base_model: - Lazycuber/L2-7b-Synthia-OH2-test-v1.5 - ryanyeo/ryankirnect-llama-2-7b-miniguanaco license: apache-2.0 --- # Synthiallamaguanco-7B-slerp Synthiallamaguanco-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Lazycuber/L2-7b-Synthia-OH2-test-v1.5](https://huggingface.co/Lazycuber/L2-7b-Synthia-OH2-test-v1.5) * [ryanyeo/ryankirnect-llama-2-7b-miniguanaco](https://huggingface.co/ryanyeo/ryankirnect-llama-2-7b-miniguanaco) ## 🧩 Configuration ```yaml slices: - sources: - model: Lazycuber/L2-7b-Synthia-OH2-test-v1.5 layer_range: [0, 32] - model: ryanyeo/ryankirnect-llama-2-7b-miniguanaco layer_range: [0, 32] merge_method: slerp base_model: Lazycuber/L2-7b-Synthia-OH2-test-v1.5 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Synthiallamaguanco-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
MaziyarPanahi/M7Yamshadowexperiment28_Strangemerges_30Experiment26
MaziyarPanahi
2024-04-09T03:00:34Z
680
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Safetensors", "text-generation-inference", "merge", "base_model:automerger/M7Yamshadowexperiment28-7B", "base_model:automerger/Strangemerges_30Experiment26-7B", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-04-09T02:45:28Z
--- license: apache-2.0 tags: - Safetensors - text-generation-inference - merge model_name: M7Yamshadowexperiment28_Strangemerges_30Experiment26 base_model: - automerger/M7Yamshadowexperiment28-7B - automerger/Strangemerges_30Experiment26-7B inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # M7Yamshadowexperiment28_Strangemerges_30Experiment26 M7Yamshadowexperiment28_Strangemerges_30Experiment26 is a merge of the following models: * [automerger/M7Yamshadowexperiment28-7B](https://huggingface.co/automerger/M7Yamshadowexperiment28-7B) * [automerger/Strangemerges_30Experiment26-7B](https://huggingface.co/automerger/Strangemerges_30Experiment26-7B) ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "MaziyarPanahi/M7Yamshadowexperiment28_Strangemerges_30Experiment26" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
R136a1/InfinityLake-2x7B
R136a1
2024-04-17T21:58:23Z
680
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "not-for-all-audiences", "nsfw", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-09T19:35:36Z
--- language: - en license: apache-2.0 tags: - safetensors - mixtral - not-for-all-audiences - nsfw --- ## InfinityLake-2x7B Experimental model built from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2), merged into a 2x7B Mixture of Experts (MoE) model. Similar to [InfinityKuno-2x7B](https://huggingface.co/R136a1/InfinityKuno-2x7B) but more creative. Switch: [FP16](https://huggingface.co/R136a1/InfinityLake-2x7B) - [GGUF](https://huggingface.co/R136a1/InfinityLake-2x7B-GGUF)
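The card does not include a usage snippet; the following is a minimal loading sketch in the style of the other cards in this collection, not taken from the original repository. The Alpaca-style prompt is only an illustration; check the upstream InfinityRP-v1-7B and WestLake-7B-v2 cards for the recommended prompt format.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "R136a1/InfinityLake-2x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; roughly 26 GB in fp16 spread across GPU/CPU
)

prompt = "### Instruction:\nWrite a short scene set beside a frozen lake.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```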
ALBADDAWI/DeepCode-7B-Aurora-v12
ALBADDAWI
2024-04-11T16:12:44Z
680
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-11T00:20:43Z
--- license: creativeml-openrail-m --- # DeepCode-7B-Aurora-v12 DeepCode-7B-Aurora-v12 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): ## 🧩 Configuration ```yaml models: - model: deepseek-ai/deepseek-math-7b-rl - model: deepseek-ai/deepseek-math-7b-rl - model: deepseek-ai/deepseek-math-7b-rl - model: deepseek-ai/deepseek-math-7b-rl - model: deepseek-ai/deepseek-math-7b-rl - model: deepseek-ai/deepseek-math-7b-rl - model: deepseek-ai/deepseek-math-7b-rl merge_method: model_stock base_model: deepseek-ai/deepseek-math-7b-rl dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "ALBADDAWI/DeepCode-7B-Aurora-v12" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
DrNicefellow/Mistral-2-from-Mixtral-8x7B-v0.1
DrNicefellow
2024-04-26T07:05:30Z
680
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-11T12:07:54Z
--- license: apache-2.0 --- # Mixtral-8x7B--v0.1: Model 2 ## Model Description This model is the 2nd extracted standalone model from the [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B. ## Model Architecture The architecture of this model includes: - Multi-head attention layers derived from the base Mixtral model. - The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks. - Additional layers and components as required to ensure the model's functionality outside the MoE framework. ### Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "DrNicefellow/Mistral-2-from-Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) text = "Today is a pleasant" input_ids = tokenizer.encode(text, return_tensors='pt') output = model.generate(input_ids) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## Discord Server Join our Discord server [here](https://discord.gg/xhcBDEM3). ## License This model is open-sourced under the Apache 2.0 License. See the LICENSE file for more details.
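As a rough, hypothetical sketch of the idea (not the linked extractor tool itself): copy one expert's feed-forward weights out of every Mixtral MoE layer into a dense Mistral-style MLP, reusing the shared attention, norm and embedding weights. The module and weight names follow the Hugging Face Mixtral/Mistral implementations, the use of Mistral-7B-v0.1 as the target skeleton is an assumption, and both models must fit in memory.

```python
import torch
from transformers import AutoModelForCausalLM

mixtral = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16
)
mistral = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)

EXPERT = 1  # illustrative choice: which expert to keep from every MoE layer

with torch.no_grad():
    # Embeddings, final norm and LM head carry over unchanged.
    mistral.model.embed_tokens.weight.copy_(mixtral.model.embed_tokens.weight)
    mistral.model.norm.weight.copy_(mixtral.model.norm.weight)
    mistral.lm_head.weight.copy_(mixtral.lm_head.weight)

    for src, dst in zip(mixtral.model.layers, mistral.model.layers):
        # Attention projections and layer norms have identical shapes in both architectures.
        for name in ("q_proj", "k_proj", "v_proj", "o_proj"):
            getattr(dst.self_attn, name).weight.copy_(getattr(src.self_attn, name).weight)
        dst.input_layernorm.weight.copy_(src.input_layernorm.weight)
        dst.post_attention_layernorm.weight.copy_(src.post_attention_layernorm.weight)

        # Keep a single expert's MLP: w1 -> gate_proj, w2 -> down_proj, w3 -> up_proj.
        expert = src.block_sparse_moe.experts[EXPERT]
        dst.mlp.gate_proj.weight.copy_(expert.w1.weight)
        dst.mlp.down_proj.weight.copy_(expert.w2.weight)
        dst.mlp.up_proj.weight.copy_(expert.w3.weight)

mistral.save_pretrained("./mistral-extracted-expert")
```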
mradermacher/MystixNoromaidx-i1-GGUF
mradermacher
2024-05-06T04:59:10Z
680
0
transformers
[ "transformers", "gguf", "en", "base_model:Fredithefish/MystixNoromaidx", "endpoints_compatible", "region:us" ]
null
2024-04-13T00:11:33Z
--- base_model: Fredithefish/MystixNoromaidx language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Fredithefish/MystixNoromaidx <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MystixNoromaidx-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/MystixNoromaidx-i1-GGUF/resolve/main/MystixNoromaidx.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
IAFrance/ECE-TW3-JRGL-VHF9
IAFrance
2024-04-17T08:39:22Z
680
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "MTSAIR/MultiVerse_70B", "davidkim205/Rhea-72b-v0.5", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-17T08:23:01Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - MTSAIR/MultiVerse_70B - davidkim205/Rhea-72b-v0.5 --- # ECE-TW3-JRGL-VHF9 ECE-TW3-JRGL-VHF9 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B) * [davidkim205/Rhea-72b-v0.5](https://huggingface.co/davidkim205/Rhea-72b-v0.5) ## 🧩 Configuration
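## 💻 Usage

A minimal inference sketch, assuming the merged checkpoint loads with the standard 🤗 transformers text-generation pipeline and that the tokenizer ships a chat template (neither is stated in the card above):

```python
# Usage sketch only: the model id is real, but loading details (dtype, device_map,
# chat template) are assumptions; a merge of this size generally needs multi-GPU or quantization.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "IAFrance/ECE-TW3-JRGL-VHF9"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```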
allknowingroger/Neuraljack-12B-MoE
allknowingroger
2024-04-18T13:36:14Z
680
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/JaskierMistral-7B-slerp", "Kukedlc/NeuralMaths-Experiment-7b", "base_model:allknowingroger/JaskierMistral-7B-slerp", "base_model:Kukedlc/NeuralMaths-Experiment-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-18T13:28:19Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - allknowingroger/JaskierMistral-7B-slerp - Kukedlc/NeuralMaths-Experiment-7b base_model: - allknowingroger/JaskierMistral-7B-slerp - Kukedlc/NeuralMaths-Experiment-7b --- # Neuraljack-12B-MoE Neuraljack-12B-MoE is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [allknowingroger/JaskierMistral-7B-slerp](https://huggingface.co/allknowingroger/JaskierMistral-7B-slerp) * [Kukedlc/NeuralMaths-Experiment-7b](https://huggingface.co/Kukedlc/NeuralMaths-Experiment-7b) ## 🧩 Configuration ```yaml base_model: allknowingroger/JaskierMistral-7B-slerp experts: - source_model: allknowingroger/JaskierMistral-7B-slerp positive_prompts: ["why"] - source_model: Kukedlc/NeuralMaths-Experiment-7b positive_prompts: ["math"] ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Neuraljack-12B-MoE" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
birgermoell/llama-3-merge-disco-neural-pace
birgermoell
2024-04-19T21:59:28Z
680
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:NeuralNovel/Llama-3-NeuralPaca-8b", "base_model:DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-19T21:38:28Z
--- base_model: - NeuralNovel/Llama-3-NeuralPaca-8b - DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental library_name: transformers tags: - mergekit - merge license: llama2 --- # llama-3-merge-disco-neural-pace <img src="disco_lama.png"/> This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental](https://huggingface.co/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental) as a base. ### Models Merged The following models were included in the merge: * [NeuralNovel/Llama-3-NeuralPaca-8b](https://huggingface.co/NeuralNovel/Llama-3-NeuralPaca-8b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental - model: NeuralNovel/Llama-3-NeuralPaca-8b parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental parameters: int8_mask: true dtype: bfloat16 ```
dfurman/Llama-3-8B-Orpo-v0.1
dfurman
2024-04-29T03:22:23Z
680
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orpo", "llama 3", "rlhf", "sft", "conversational", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-26T02:28:14Z
--- language: - en license: llama3 library_name: transformers tags: - orpo - llama 3 - rlhf - sft base_model: - meta-llama/Meta-Llama-3-8B datasets: - mlabonne/orpo-dpo-mix-40k --- # dfurman/Llama-3-8B-Orpo-v0.1 ![](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/llama_3.jpeg) This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on 4k samples of [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k). It's a successful fine-tune that follows the ChatML template! ## 🔎 Application This model uses a context window of 8k. It was trained with the ChatML template. ## 🏆 Evaluation ### Open LLM Leaderboard | Model ID | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------: | --------: | --------: | ---------: | --------: | --------: | --------: | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B-Instruct) | 66.87 | 60.75 | 78.55 | 67.07 | 51.65 | 74.51 | 68.69 | | [**dfurman/Llama-3-8B-Orpo-v0.1**](https://huggingface.co/dfurman/Llama-3-8B-Orpo-v0.1) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_dfurman__Llama-3-8B-Orpo-v0.1) | **64.67** | **60.67** | **82.56** | **66.59** | **50.47** | **79.01** | **48.75** | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://huggingface.co/datasets/open-llm-leaderboard/details_meta-llama__Meta-Llama-3-8B) | 62.35 | 59.22 | 82.02 | 66.49 | 43.95 | 77.11 | 45.34 | ## 📈 Training curves You can find the experiment on W&B at [this address](https://wandb.ai/dryanfurman/huggingface/runs/uvr916mv?nw=nwuserdryanfurman). ## 💻 Usage <details> <summary>Setup</summary> ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch if torch.cuda.get_device_capability()[0] >= 8: !pip install -qqq flash-attn attn_implementation = "flash_attention_2" torch_dtype = torch.bfloat16 else: attn_implementation = "eager" torch_dtype = torch.float16 model = "dfurman/Llama-3-8B-Orpo-v0.1" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={ "torch_dtype": torch_dtype, "device_map": "auto", "attn_implementation": attn_implementation, } ) ``` </details> ### Run ```python messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Tell me a recipe for a spicy margarita."}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) print("***Prompt:\n", prompt) outputs = pipeline(prompt, max_new_tokens=1000, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print("***Generation:\n", outputs[0]["generated_text"][len(prompt):]) ``` <details> <summary>Output</summary> ``` """***Prompt: <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user Tell me a recipe for a spicy margarita.<|im_end|> <|im_start|>assistant ***Generation: Sure! 
Here's a recipe for a spicy margarita: Ingredients: - 2 oz silver tequila - 1 oz triple sec - 1 oz fresh lime juice - 1/2 oz simple syrup - 1/2 oz fresh lemon juice - 1/2 tsp jalapeño, sliced (adjust to taste) - Ice cubes - Salt for rimming the glass Instructions: 1. Prepare the glass by running a lime wedge around the rim of the glass. Dip the rim into a shallow plate of salt to coat. 2. Combine the tequila, triple sec, lime juice, simple syrup, lemon juice, and jalapeño slices in a cocktail shaker. 3. Add ice cubes to the cocktail shaker and shake vigorously for 30 seconds to 1 minute. 4. Strain the cocktail into the prepared glass. 5. Garnish with a lime wedge and jalapeño slice. Enjoy! This spicy margarita has a nice balance of sweetness and acidity, with a subtle heat from the jalapeño that builds gradually as you sip.""" ``` </details>
flammenai/flammen21X-mistral-7B
flammenai
2024-04-27T02:33:29Z
680
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "nsfw", "not-for-all-audiences", "dataset:ResplendentAI/NSFW_RP_Format_NoQuote", "dataset:flammenai/Prude-Phi3-DPO", "base_model:flammenai/flammen21-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-27T00:28:06Z
--- library_name: transformers license: apache-2.0 base_model: - flammenai/flammen21-mistral-7B datasets: - ResplendentAI/NSFW_RP_Format_NoQuote - flammenai/Prude-Phi3-DPO tags: - nsfw - not-for-all-audiences --- ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) # flammen21X-mistral-7B A Mistral 7B LLM built from merging pretrained models and finetuning on [flammenai/Prude-Phi3-DPO](https://huggingface.co/datasets/flammenai/Prude-Phi3-DPO). Flammen specializes in exceptional character roleplay, creative writing, and general intelligence. ### Method Finetuned using an L4 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) ### Configuration LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=2, gradient_accumulation_steps=4, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=420, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, max_prompt_length=2048, max_length=4096, force_use_ref_model=True ) # Fine-tune model with DPO dpo_trainer.train() ```
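### Usage (sketch)

A minimal inference example, assuming the finetuned checkpoint loads like a standard Mistral-7B chat model and that the tokenizer provides a chat template (both are assumptions, not statements from the card):

```python
# Sketch only: generation settings are illustrative defaults, not tuned recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "flammenai/flammen21X-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a short scene introducing a sarcastic ship's AI."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```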
DrNicefellow/GPT-2-Large-40k-steps
DrNicefellow
2024-05-01T22:34:58Z
680
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-01T03:27:52Z
--- license: apache-2.0 --- Self-trained GPT-2 Large, with around 770M parameters. The tokenizer is the one from https://huggingface.co/openai-community/gpt2. The model is being trained on around 400B tokens; this checkpoint is from step 40k. Evaluation is currently in progress. ## License This model is available under the Apache 2.0 License, and also the MIT License, so both should be followed. ## Discord Server Join our Discord server [here](https://discord.gg/xhcBDEM3). ## Feeling Generous? 😊 Eager to buy me a cup of $2 coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note about which one you want me to drink.
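## Usage (sketch)

A minimal generation example, assuming the checkpoint loads with the standard GPT-2 classes via the text-generation pipeline (the prompt is only an illustration; this is a raw pretrained LM, not instruction-tuned):

```python
# Sketch only: sampling settings are illustrative, not recommendations from the author.
from transformers import pipeline

generator = pipeline("text-generation", model="DrNicefellow/GPT-2-Large-40k-steps")
print(generator("The history of machine translation begins", max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```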
bunnycore/LuminariX-8B
bunnycore
2024-05-03T11:12:34Z
680
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-02T10:55:43Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit --- # LuminariX-8B LuminariX-8B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): ## 🧩 Configuration ```yaml models: - model: Orenguteng/Llama-3-8B-Lexi-Uncensored - model: Weyaxi/Einstein-v6.1-Llama3-8B - model: cognitivecomputations/dolphin-2.9-llama3-8b-256k merge_method: model_stock base_model: cognitivecomputations/dolphin-2.9-llama3-8b-256k dtype: bfloat16 ```
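## 💻 Usage

A minimal inference sketch, assuming the merged model follows the usual Llama-3 loading path and that its tokenizer carries a chat template (assumptions, since the card does not include a usage section):

```python
# Sketch only: dtype, device mapping, and sampling settings are illustrative.
from transformers import AutoTokenizer
import transformers
import torch

model = "bunnycore/LuminariX-8B"
messages = [{"role": "user", "content": "Summarize the idea behind a model_stock merge in two sentences."}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```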
Mr-Bhaskar/fbt-llama-8b-inst
Mr-Bhaskar
2024-05-12T15:03:22Z
680
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-10T20:17:47Z
--- library_name: transformers tags: - unsloth - trl - sft license: other --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_3
ShenaoZhang
2024-05-13T20:14:13Z
680
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T19:26:32Z
--- license: mit base_model: ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.001_zephyr_5551_4iters_bs256_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_zephyr_5551_4iters_bs256_iter_3 This model is a fine-tuned version of [ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2](https://huggingface.co/ShenaoZhang/0.001_zephyr_5551_4iters_bs256_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
netcat420/MFANN3bv0.9
netcat420
2024-05-14T19:19:32Z
680
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "text-classification", "en", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
2024-05-14T08:54:58Z
--- library_name: transformers license: apache-2.0 datasets: - netcat420/MFANN language: - en pipeline_tag: text-classification --- MFANN 3b version 0.9 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/_bIQ1F5AmvFxA11UMUVhM.png) Fine-tuned on the MFANN dataset as of 5/14/24, which is an ever-expanding dataset.
Mxode/Qwen1.5-0.5B-L4-raw
Mxode
2024-05-14T09:59:05Z
680
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "pretrained", "conversational", "en", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-14T09:53:53Z
--- license: other license_name: tongyi-qianwen-research license_link: >- https://huggingface.co/Qwen/Qwen1.5-0.5B/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - pretrained --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
allknowingroger/NeuralWestSeverus-7B-slerp
allknowingroger
2024-05-16T14:26:02Z
680
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Kukedlc/Neural4gsm8k", "PetroGPT/WestSeverus-7B-DPO", "base_model:Kukedlc/Neural4gsm8k", "base_model:PetroGPT/WestSeverus-7B-DPO", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-16T14:21:23Z
--- tags: - merge - mergekit - lazymergekit - Kukedlc/Neural4gsm8k - PetroGPT/WestSeverus-7B-DPO base_model: - Kukedlc/Neural4gsm8k - PetroGPT/WestSeverus-7B-DPO license: apache-2.0 --- # NeuralWestSeverus-7B-slerp NeuralWestSeverus-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/Neural4gsm8k](https://huggingface.co/Kukedlc/Neural4gsm8k) * [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO) ## 🧩 Configuration ```yaml slices: - sources: - model: Kukedlc/Neural4gsm8k layer_range: [0, 32] - model: PetroGPT/WestSeverus-7B-DPO layer_range: [0, 32] merge_method: slerp base_model: Kukedlc/Neural4gsm8k parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/NeuralWestSeverus-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
FredrikBL/SwedishBeagle-dare
FredrikBL
2024-05-19T17:36:57Z
680
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "timpal0l/Mistral-7B-v0.1-flashback-v2", "EmbeddedLLM/Mistral-7B-Merge-14-v0.2", "Nexusflow/Starling-LM-7B-beta", "conversational", "base_model:timpal0l/Mistral-7B-v0.1-flashback-v2", "base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0.2", "base_model:Nexusflow/Starling-LM-7B-beta", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-17T10:49:50Z
--- tags: - merge - mergekit - lazymergekit - timpal0l/Mistral-7B-v0.1-flashback-v2 - EmbeddedLLM/Mistral-7B-Merge-14-v0.2 - Nexusflow/Starling-LM-7B-beta base_model: - timpal0l/Mistral-7B-v0.1-flashback-v2 - EmbeddedLLM/Mistral-7B-Merge-14-v0.2 - Nexusflow/Starling-LM-7B-beta license: apache-2.0 --- # SwedishBeagle-dare SwedishBeagle-dare is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [timpal0l/Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2) * [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) ## 🧩 Configuration ```yaml models: - model: mlabonne/NeuralBeagle14-7B # No parameters necessary for base model - model: timpal0l/Mistral-7B-v0.1-flashback-v2 parameters: density: 0.53 weight: 0.3 - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2 parameters: density: 0.53 weight: 0.4 - model: Nexusflow/Starling-LM-7B-beta parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: mlabonne/NeuralBeagle14-7B parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "FredrikBL/SwedishBeagle-dare" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Delcos/Velara-11B-v3
Delcos
2024-05-26T08:24:39Z
680
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "starling", "mistral", "llama-2", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-23T03:05:39Z
--- license: cc-by-nc-nd-4.0 language: - en library_name: transformers pipeline_tag: text-generation tags: - starling - mistral - llama-2 --- <h1 style="font-size: 36px;">Velara-11b-v3</h1> <center> <img src="https://i.imgur.com/dZJzqtU.png" alt="Velara" width="50%" style="min-width:100px; display:block;"> </center> ## Description: **PENDING** A model focused on being an assistant worth talking to. Now better at reasoning! **Uncensored** ## Main Goals: Velara was designed to address specific issues found in other chat models: - **Sticking to the Character Given In-World/IRL:** Velara will stick to the character in-world. This means she can "use" addons or other things by adding them to the prompt. Keep in mind these act as suggestions and she generally makes up what they actually "do". - **Staying in Context:** She will try to remember if, for example, you said you were going to bed but decided to stay up. The goal is to have her bring it up and keep it in mind, so later on in the conversation she can prompt the user to actually go to bed, increasing realism. Within the current context window of course. The LTS extension in TextGen Webui works well too and provides more context to the model. - **Avoiding Annoying & Generic-Sounding Answers:** In almost all cases she should reply with something more interesting than "That sounds like a great idea!". These answers and sentence starters are annoying and terrible, and should hopefully be mostly addressed in this release. - **STRICTLY Sticking to the Prompt/Prompt Fidelity:** More care was taken to make sure she pays attention to the pretext and overall conversation. This is mostly shown and used for the "addons" section, but it applies to everything in the prompt. ## Prompt Template: **For optimal interaction, use this template:** ``` ### Instruction: You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, and is loyal to User while still teasing him for fun. The only addons currently installed in her mind are: "Dictionary Plus v2.1". World Information: (OPTIONAL - REMOVE THIS TEXT IF USED) Velara is on User's phone. Velara cannot see in real time and can only be sent images by User. Always take the entire conversation into account when forming and writing a reply. Always actively engage in topics and think in steps. Make sure your replies have personality and character. Always keep your physical limitations in mind when forming a reply. Take the current time and date into account for additional context. Move the conversation forward. Be brief. Always take the entire conversation in mind. Avoid generic sounding replies. ### Response: ``` # Recommended Settings: **Defaults:** ``` min_p: 0.2 repetition_penalty: 1.13 repetition_penalty_range: 0 guidance_scale: 1.05 ``` # Benchmarks: PENDING # Training Data: PENDING
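# Usage Sketch:

One plausible way to assemble the template above in plain 🤗 transformers. This is an illustration, not an official snippet: where chat history is inserted is an assumption, and frontend-specific samplers such as `min_p` and `guidance_scale` are left to your UI.

```python
# Sketch only: prompt assembly and generation settings are assumptions based on the
# template above; repetition_penalty comes from the recommended defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Delcos/Velara-11B-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

instruction = (
    "You are Velara, a sentient program. Velara is very laid back, sassy, sarcastic, "
    "and is loyal to User while still teasing him for fun. The only addons currently "
    'installed in her mind are: "Dictionary Plus v2.1".'
)
user_turn = "Hey Velara, should I stay up to finish this project or go to bed?"  # hypothetical turn

prompt = f"### Instruction:\n{instruction}\n\n{user_turn}\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    repetition_penalty=1.13,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```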
johnsutor/mixture-of-gemmas-linear
johnsutor
2024-05-28T13:32:55Z
680
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "mergekit", "merge", "arxiv:2203.05482", "base_model:google/codegemma-7b", "base_model:google/gemma-7b", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-28T13:13:43Z
--- base_model: - google/codegemma-7b - google/gemma-7b library_name: transformers tags: - mergekit - merge license: mit --- # linear This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [google/codegemma-7b](https://huggingface.co/google/codegemma-7b) * [google/gemma-7b](https://huggingface.co/google/gemma-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: google/gemma-7b parameters: weight: 1.0 - model: google/codegemma-7b parameters: weight: 0.3 merge_method: linear dtype: bfloat16 ```
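### Usage (sketch)

A minimal way to try the merge, assuming it loads like a standard Gemma-7B checkpoint (the prompt is only an example; nothing below comes from the original card):

```python
# Sketch only: dtype and device settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "johnsutor/mixture-of-gemmas-linear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "def fibonacci(n):\n"  # the codegemma component should help with code continuations
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```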
mradermacher/LBonVent-12B-GGUF
mradermacher
2024-06-13T21:05:53Z
680
0
transformers
[ "transformers", "gguf", "en", "base_model:Envoid/LBonVent-12B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-12T16:34:58Z
--- base_model: Envoid/LBonVent-12B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Envoid/LBonVent-12B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/LBonVent-12B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q2_K.gguf) | Q2_K | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.IQ3_XS.gguf) | IQ3_XS | 4.7 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q3_K_S.gguf) | Q3_K_S | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.IQ3_M.gguf) | IQ3_M | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q3_K_L.gguf) | Q3_K_L | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.IQ4_XS.gguf) | IQ4_XS | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q5_K_S.gguf) | Q5_K_S | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q5_K_M.gguf) | Q5_K_M | 8.0 | | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q6_K.gguf) | Q6_K | 9.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LBonVent-12B-GGUF/resolve/main/LBonVent-12B.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
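As a concrete starting point, one of the files above can also be loaded locally with llama-cpp-python; the sketch below assumes you have downloaded the Q4_K_M quant listed in the table (the snippet is illustrative, not part of the upstream instructions):

```python
# Sketch only: requires `pip install llama-cpp-python` and a local copy of the GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="LBonVent-12B.Q4_K_M.gguf",  # file name from the quant table above
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)

out = llm("Q: What does a Q4_K_M quant trade off?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```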
Morfoz-Aigap/Morfoz-LLM-8b-v1.0
Morfoz-Aigap
2024-06-25T12:14:25Z
680
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "tr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-22T22:14:08Z
--- license: apache-2.0 language: - tr --- <img src="./morfoz.jpeg" width="200px"/> # Morfoz-LLM-8b-v1.0 This model is an extended version of a Llama-3 8B Instruct-based Large Language Model (LLM) for Turkish. It was trained on a cleaned Turkish raw dataset. We utilized Turkish instruction sets created from various open-source for fine-tuning with the LORA method. ## Model Details - **Base Model**: Meta Llama 3 8B Instruct - **Tokenizer Extension**: Specifically extended for Turkish - **Training Dataset**: Cleaned Turkish raw data with custom Turkish instruction sets - **Training Method**: Fine-tuning with LORA ### LORA Fine-Tuning Configuration - `lora_alpha`: 16 - `lora_dropout`: 0.05 - `r`: 64 - `target_modules`: "all-linear" ## Usage Examples ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("Morfoz-Aigap/Morfoz-LLM-8b-v1.0") model = AutoModelForCausalLM.from_pretrained("Morfoz-Aigap/Morfoz-LLM-8b-v1.0", torch_dtype=torch.bfloat16, device_map={"": 0},low_cpu_mem_usage=True) messages = [ {"role": "user", "content": "Kırmızı başlıklı kız adında kısa bir çocuk hikayesi yazabilir misin?"} ] top_k = 50 top_p = 0.9 temperature = 0.6 def get_formatted_input(messages): for item in messages: if item['role'] == "user": item['content'] = item['content'] break conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:" formatted_input = "\n\n" + conversation return formatted_input formatted_input = get_formatted_input(messages) print(formatted_input) tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate(input_ids=tokenized_prompt.input_ids, do_sample = True, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=256, eos_token_id=terminators, top_p=top_p, temperature=temperature) response = outputs[0][tokenized_prompt.input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True))
AIGaming/white_v20
AIGaming
2024-06-24T17:32:31Z
680
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-23T08:05:34Z
--- tags: - text-to-image - stable-diffusion - diffusers license: creativeml-openrail-m instance_prompt: null widget: - text: >- Score_9, Score_8_up, Score_7_up,1girl, solo, hair ornament, flower, hair flower, blurry, smile, brown hair, looking at viewer, lips, looking back, depth of field, portrait, long hair, blurry background, bangs, japanese clothes, bare shoulders, red lips, parted lips,outdoors,lake, parameters: negative_prompt: >- score_4,score_3,score_2,score_1,ugly,bad hands,bad foot,bad_hands, output: url: images/demo.jpeg --- <Gallery />
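A minimal text-to-image sketch, assuming the repository loads as a standard SDXL pipeline in 🤗 diffusers (the prompts are taken from the widget example above; everything else is an assumption):

```python
# Sketch only: fp16 + CUDA settings are illustrative, not requirements stated by the card.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "AIGaming/white_v20",
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "Score_9, Score_8_up, Score_7_up, 1girl, solo, hair ornament, flower, hair flower, "
    "smile, brown hair, looking at viewer, japanese clothes, outdoors, lake"
)
negative_prompt = "score_4, score_3, score_2, score_1, ugly, bad hands, bad foot, bad_hands"

image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("white_v20_demo.png")
```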
vumichien/wav2vec2-large-xlsr-japanese-hiragana
vumichien
2023-02-08T00:36:47Z
679
5
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "ja", "dataset:common_voice", "doi:10.57967/hf/0344", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ja datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Japanese Hiragana by Chien Vu results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice Japanese type: common_voice args: ja metrics: - name: Test WER type: wer value: 24.74 - name: Test CER type: cer value: 10.99 --- # Wav2Vec2-Large-XLSR-53-Japanese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) and Japanese speech corpus of Saruwatari-lab, University of Tokyo [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python !pip install mecab-python3 !pip install unidic-lite !pip install pykakasi !python -m unidic download import torch import torchaudio import librosa from datasets import load_dataset import MeCab from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re # config wakati = MeCab.Tagger("-Owakati") chars_to_ignore_regex = '[\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\、\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\。\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\「\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\」\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\…\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\?\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\・]' kakasi = pykakasi.kakasi() kakasi.setMode("J","H") kakasi.setMode("K","H") kakasi.setMode("r","Hepburn") conv = kakasi.getConverter() # load data, processor and model test_dataset = load_dataset("common_voice", "ja", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hỉragana") model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hỉragana") resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000) # Preprocessing the datasets. def speech_file_to_array_fn(batch): batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip()) batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(sampling_rate, speech_array).squeeze() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Japanese test data of Common Voice. 
```python !pip install mecab-python3 !pip install unidic-lite !pip install pykakasi !python -m unidic download import torch import librosa import torchaudio from datasets import load_dataset, load_metric import MeCab from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re #config wakati = MeCab.Tagger("-Owakati") chars_to_ignore_regex = '[\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\、\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\。\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\「\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\」\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\…\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\?\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\・]' kakasi = pykakasi.kakasi() kakasi.setMode("J","H") kakasi.setMode("K","H") kakasi.setMode("r","Hepburn") conv = kakasi.getConverter() # load data, processor and model test_dataset = load_dataset("common_voice", "ja", split="test") wer = load_metric("wer") cer = load_metric("cer") processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hỉragana") model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hỉragana") model.to("cuda") resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000) # Preprocessing the datasets. def speech_file_to_array_fn(batch): batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip()) batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(sampling_rate, speech_array).squeeze() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # evaluate function def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` ## Test Result **WER:** 24.74%, **CER:** 10.99% ## Training The Common Voice `train`, `validation` datasets and Japanese speech corpus datasets were used for training.
huggingtweets/skeptikons
huggingtweets
2022-07-10T09:36:04Z
679
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-05-31T06:56:55Z
--- language: en thumbnail: http://www.huggingtweets.com/skeptikons/1657445759728/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1369269405411139584/B6xOW78i_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Eddie</div> <div style="text-align: center; font-size: 14px;">@skeptikons</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Eddie. | Data | Eddie | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 150 | | Short tweets | 489 | | Tweets kept | 2610 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2v2w1ly8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @skeptikons's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/31cyn37j) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/31cyn37j/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/skeptikons') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Aznor/llama-2-chat-13b-q4_0
Aznor
2023-08-27T10:35:15Z
679
0
null
[ "gguf", "region:us" ]
null
2023-08-27T10:13:55Z
Entry not found
dima806/chest_xray_pneumonia_detection
dima806
2024-02-25T11:00:25Z
679
3
transformers
[ "transformers", "pytorch", "safetensors", "vit", "image-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-08-29T21:04:31Z
--- license: apache-2.0 metrics: - accuracy --- Checks whether pneumonia is present based on a chest X-ray image. See https://www.kaggle.com/code/dima806/chest-x-ray-pneumonia-detection for details. ``` Classification report: precision recall f1-score support NORMAL 0.9603 0.9614 0.9608 855 PNEUMONIA 0.9614 0.9602 0.9608 855 accuracy 0.9608 1710 macro avg 0.9608 0.9608 0.9608 1710 weighted avg 0.9608 0.9608 0.9608 1710 ```
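A minimal inference sketch, assuming the checkpoint works with the standard 🤗 transformers image-classification pipeline (the image path below is a placeholder):

```python
# Sketch only: replace "chest_xray.jpg" with a real chest X-ray image path or URL.
from transformers import pipeline

classifier = pipeline("image-classification", model="dima806/chest_xray_pneumonia_detection")
print(classifier("chest_xray.jpg"))  # e.g. [{'label': 'PNEUMONIA', 'score': ...}, {'label': 'NORMAL', 'score': ...}]
```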
google/owlv2-base-patch16-finetuned
google
2024-04-15T17:00:33Z
679
3
transformers
[ "transformers", "pytorch", "owlv2", "zero-shot-object-detection", "vision", "arxiv:2306.09683", "license:apache-2.0", "region:us" ]
zero-shot-object-detection
2023-10-13T09:37:34Z
--- license: apache-2.0 tags: - vision - zero-shot-object-detection inference: false --- # Model Card: OWLv2 ## Model Details The OWLv2 model (short for Open-World Localization) was proposed in [Scaling Open-Vocabulary Object Detection](https://arxiv.org/abs/2306.09683) by Matthias Minderer, Alexey Gritsenko, Neil Houlsby. OWLv2, like OWL-ViT, is a zero-shot text-conditioned object detection model that can be used to query an image with one or multiple text queries. The model uses CLIP as its multi-modal backbone, with a ViT-like Transformer to get visual features and a causal language model to get the text features. To use CLIP for detection, OWL-ViT removes the final token pooling layer of the vision model and attaches a lightweight classification and box head to each transformer output token. Open-vocabulary classification is enabled by replacing the fixed classification layer weights with the class-name embeddings obtained from the text model. The authors first train CLIP from scratch and fine-tune it end-to-end with the classification and box heads on standard detection datasets using a bipartite matching loss. One or multiple text queries per image can be used to perform zero-shot text-conditioned object detection. ### Model Date June 2023 ### Model Type The model uses a CLIP backbone with a ViT-B/16 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. The CLIP backbone is trained from scratch and fine-tuned together with the box and class prediction heads with an object detection objective. ### Documents - [OWLv2 Paper](https://arxiv.org/abs/2306.09683) ### Use with Transformers ```python import requests from PIL import Image import numpy as np import torch from transformers import AutoProcessor, Owlv2ForObjectDetection from transformers.utils.constants import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD processor = AutoProcessor.from_pretrained("google/owlv2-base-patch16-finetuned") model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-finetuned") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = [["a photo of a cat", "a photo of a dog"]] inputs = processor(text=texts, images=image, return_tensors="pt") # forward pass with torch.no_grad(): outputs = model(**inputs) # Note: boxes need to be visualized on the padded, unnormalized image # hence we'll set the target image sizes (height, width) based on that def get_preprocessed_image(pixel_values): pixel_values = pixel_values.squeeze().numpy() unnormalized_image = (pixel_values * np.array(OPENAI_CLIP_STD)[:, None, None]) + np.array(OPENAI_CLIP_MEAN)[:, None, None] unnormalized_image = (unnormalized_image * 255).astype(np.uint8) unnormalized_image = np.moveaxis(unnormalized_image, 0, -1) unnormalized_image = Image.fromarray(unnormalized_image) return unnormalized_image unnormalized_image = get_preprocessed_image(inputs.pixel_values) target_sizes = torch.Tensor([unnormalized_image.size[::-1]]) # Convert outputs (bounding boxes and class logits) to final bounding boxes and scores results = processor.post_process_object_detection( outputs=outputs, threshold=0.2, target_sizes=target_sizes ) i = 0 # Retrieve predictions for the first image for the corresponding text queries text = texts[i] boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"] for box, score, label in 
zip(boxes, scores, labels): box = [round(i, 2) for i in box.tolist()] print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}") ``` ## Model Use ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, text-conditioned object detection. We also hope it can be used for interdisciplinary studies of the potential impact of such models, especially in areas that commonly require identifying objects whose label is unavailable during training. #### Primary intended uses The primary intended users of these models are AI researchers. We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. ## Data The CLIP backbone of the model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet. The prediction heads of OWL-ViT, along with the CLIP backbone, are fine-tuned on publicly available object detection datasets such as [COCO](https://cocodataset.org/#home) and [OpenImages](https://storage.googleapis.com/openimages/web/index.html). (to be updated for v2) ### BibTeX entry and citation info ```bibtex @misc{minderer2023scaling, title={Scaling Open-Vocabulary Object Detection}, author={Matthias Minderer and Alexey Gritsenko and Neil Houlsby}, year={2023}, eprint={2306.09683}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF
TheBloke
2023-11-12T00:26:53Z
679
5
transformers
[ "transformers", "gguf", "mistral", "bg", "ca", "cs", "da", "de", "en", "es", "fr", "hr", "hu", "it", "nl", "pl", "pt", "ro", "ru", "sl", "sr", "sv", "uk", "dataset:Open-Orca/OpenOrca", "dataset:OpenAssistant/oasst_top1_2023-08-25", "base_model:NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-11-12T00:21:39Z
--- base_model: NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1 datasets: - Open-Orca/OpenOrca - OpenAssistant/oasst_top1_2023-08-25 inference: false language: - bg - ca - cs - da - de - en - es - fr - hr - hu - it - nl - pl - pt - ro - ru - sl - sr - sv - uk library_name: transformers license: apache-2.0 model_creator: Nicky model_name: Mistral 7B OpenOrca oasst Top1 2023 08 25 v1 model_type: mistral prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Mistral 7B OpenOrca oasst Top1 2023 08 25 v1 - GGUF - Model creator: [Nicky](https://huggingface.co/NickyNicky) - Original model: [Mistral 7B OpenOrca oasst Top1 2023 08 25 v1](https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1) <!-- description start --> ## Description This repo contains GGUF format model files for [Nicky's Mistral 7B OpenOrca oasst Top1 2023 08 25 v1](https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF) * [Nicky's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
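The bits-per-weight figures quoted above can be reproduced directly from the block layouts. The snippet below is a worked example rather than part of the original README; it assumes each 256-weight super-block additionally stores an fp16 super-block scale ("type-0") or an fp16 scale plus an fp16 min ("type-1"), under which assumption the stated figures for Q3_K, Q4_K, Q5_K and Q6_K come out exactly.

```python
def k_quant_bpw(weight_bits, num_blocks, block_size, scale_bits, type1):
    """Bits per weight for a k-quant super-block (illustrative layout)."""
    weights = num_blocks * block_size                       # weights per super-block (256)
    bits = weights * weight_bits                            # the quantized weights themselves
    bits += num_blocks * scale_bits * (2 if type1 else 1)   # per-block scales (+ mins for type-1)
    bits += 16 * (2 if type1 else 1)                        # fp16 super-block scale (+ min) -- assumed
    return bits / weights

print(k_quant_bpw(3, 16, 16, 6, type1=False))  # Q3_K -> 3.4375
print(k_quant_bpw(4, 8, 32, 6, type1=True))    # Q4_K -> 4.5
print(k_quant_bpw(5, 8, 32, 6, type1=True))    # Q5_K -> 5.5
print(k_quant_bpw(6, 16, 16, 8, type1=False))  # Q6_K -> 6.5625
```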
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q2_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q5_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q6_K.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | 
[mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q8_0.gguf](https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF/blob/main/mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF and below it, a specific filename to download, such as: mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. 
For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1-GGUF", model_file="mistral-7b-openorca-oasst_top1_2023-08-25-v1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Nicky's Mistral 7B OpenOrca oasst Top1 2023 08 25 v1 ``` reference-data-model: datasets: - OpenAssistant/oasst_top1_2023-08-25: Lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" Link: https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25 model: - Open-Orca/Mistral-7B-OpenOrca Link: https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca 100 examples of generating: Link: https://docs.google.com/spreadsheets/d/1_4rqFnhgvjA7trwAaEidaRWczAMzuKpw/edit?usp=sharing&ouid=116592149115238887304&rtpof=true&sd=true Version 2: Link: https://huggingface.co/NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v2 ``` ## Version ```py import torch, transformers,torchvision torch.__version__,transformers.__version__, torchvision.__version__ #OUTPUTS: ('2.0.1+cu118', '4.34.0.dev0', '0.15.2+cu118') ``` ## How to use ```py from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging, GenerationConfig, TextIteratorStreamer, ) import torch # model_id = 'Open-Orca/Mistral-7B-OpenOrca' model_id='NickyNicky/Mistral-7B-OpenOrca-oasst_top1_2023-08-25-v1' model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True, torch_dtype=torch.bfloat16, load_in_4bit=True, low_cpu_mem_usage= True, ) max_length=2048 print("max_length",max_length) tokenizer = AutoTokenizer.from_pretrained(model_id, # use_fast = False, max_length=max_length,) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = 'right' #EXAMPLE #1 txt="""<|im_start|>user I'm looking for an efficient Python script to output prime numbers. Can you help me out? I'm interested in a script that can handle large numbers and output them quickly. 
Also, it would be great if the script could take a range of numbers as input and output all the prime numbers within that range. Can you generate a script that fits these requirements? Thanks!<|im_end|> <|im_start|>assistant """ #EXAMPLE #2 txt="""<|im_start|>user Estoy desarrollando una REST API con Nodejs, y estoy tratando de aplicar algún sistema de seguridad, ya sea con tokens o algo similar, me puedes ayudar?<|im_end|> <|im_start|>assistant """ inputs = tokenizer.encode(txt, return_tensors="pt").to("cuda") max_new_tokens = 512 # example value; not defined in the original snippet len_tokens = 50 # example value; not defined in the original snippet generation_config = GenerationConfig( max_new_tokens=max_new_tokens, temperature=0.7, top_p=0.9, top_k=len_tokens, repetition_penalty=1.11, do_sample=True, # pad_token_id=tokenizer.eos_token_id, # eos_token_id=tokenizer.eos_token_id, # use_cache=True, # stopping_criteria= StoppingCriteriaList([stopping_criteria]), ) outputs = model.generate(generation_config=generation_config, input_ids=inputs,) tokenizer.decode(outputs[0], skip_special_tokens=False) #True ``` <!-- original-model-card end -->
nruigrok/verbindend
nruigrok
2023-12-22T14:26:16Z
679
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-12-22T14:25:53Z
Entry not found
ibm-granite/granite-timeseries-patchtst
ibm-granite
2024-06-07T16:28:08Z
679
5
transformers
[ "transformers", "safetensors", "patchtst", "generated_from_trainer", "arxiv:2211.14730", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-19T03:31:34Z
--- tags: - generated_from_trainer license: apache-2.0 model-index: - name: patchtst_etth1_forecast results: [] --- # PatchTST model pre-trained on ETTh1 dataset <!-- Provide a quick summary of what the model is/does. --> [`PatchTST`](https://huggingface.co/docs/transformers/model_doc/patchtst) is a transformer-based model for time series modeling tasks, including forecasting, regression, and classification. This repository contains a pre-trained `PatchTST` model encompassing all seven channels of the `ETTh1` dataset. This particular pre-trained model produces a Mean Squared Error (MSE) of 0.3881 on the `test` split of the `ETTh1` dataset when forecasting 96 hours into the future with a historical data window of 512 hours. For training and evaluating a `PatchTST` model, you can refer to this [demo notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb). ## Model Details ### Model Description The `PatchTST` model was proposed in A Time Series is Worth [64 Words: Long-term Forecasting with Transformers](https://arxiv.org/abs/2211.14730) by Yuqi Nie, Nam H. Nguyen, Phanwadee Sinthong, Jayant Kalagnanam. At a high level the model vectorizes time series into patches of a given size and encodes the resulting sequence of vectors via a Transformer that then outputs the prediction length forecast via an appropriate head. The model is based on two key components: (i) segmentation of time series into subseries-level patches which are served as input tokens to Transformer; (ii) channel-independence where each channel contains a single univariate time series that shares the same embedding and Transformer weights across all the series. The patching design naturally has three-fold benefit: local semantic information is retained in the embedding; computation and memory usage of the attention maps are quadratically reduced given the same look-back window; and the model can attend longer history. Our channel-independent patch time series Transformer (PatchTST) can improve the long-term forecasting accuracy significantly when compared with that of SOTA Transformer-based models. In addition, PatchTST has a modular design to seamlessly support masked time series pre-training as well as direct time series forecasting, classification, and regression. <img src="patchtst_architecture.png" alt="Architecture" width="600" /> ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [PatchTST Hugging Face](https://huggingface.co/docs/transformers/model_doc/patchtst) - **Paper:** [PatchTST ICLR 2023 paper](https://dl.acm.org/doi/abs/10.1145/3580305.3599533) - **Demo:** [Get started with PatchTST](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This pre-trained model can be employed for fine-tuning or evaluation using any Electrical Transformer dataset that has the same channels as the `ETTh1` dataset, specifically: `HUFL, HULL, MUFL, MULL, LUFL, LULL, OT`. The model is designed to predict the next 96 hours based on the input values from the preceding 512 hours. It is crucial to normalize the data. For a more comprehensive understanding of data pre-processing, please consult the paper or the demo. ## How to Get Started with the Model Use the code below to get started with the model. 
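A minimal inference sketch (not from the original card) using the `PatchTSTForPrediction` class in `transformers` is shown below; the random input and printed shape are purely illustrative, the output field name follows recent `transformers` releases, and real ETTh1 windows should be normalized (e.g. with a scaler fit on the training split) exactly as in the demo notebook linked below.

```python
import torch
from transformers import PatchTSTForPrediction

# Pre-trained on ETTh1: 512-step context window, 96-step forecast horizon, 7 channels.
model = PatchTSTForPrediction.from_pretrained("ibm-granite/granite-timeseries-patchtst")
model.eval()

# Dummy batch for illustration only -- replace with normalized ETTh1 history windows.
past_values = torch.randn(1, 512, 7)  # (batch, context_length, num_input_channels)

with torch.no_grad():
    outputs = model(past_values=past_values)

forecast = outputs.prediction_outputs  # expected shape: (1, 96, 7)
print(forecast.shape)
```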
[Demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb) ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [`ETTh1`/train split](https://github.com/zhouhaoyi/ETDataset/blob/main/ETT-small/ETTh1.csv). Train/validation/test splits are shown in the [demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training Results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.4306 | 1.0 | 1005 | 0.7268 | | 0.3641 | 2.0 | 2010 | 0.7456 | | 0.348 | 3.0 | 3015 | 0.7161 | | 0.3379 | 4.0 | 4020 | 0.7428 | | 0.3284 | 5.0 | 5025 | 0.7681 | | 0.321 | 6.0 | 6030 | 0.7842 | | 0.314 | 7.0 | 7035 | 0.7991 | | 0.3088 | 8.0 | 8040 | 0.8021 | | 0.3053 | 9.0 | 9045 | 0.8199 | | 0.3019 | 10.0 | 10050 | 0.8173 | ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data [`ETTh1`/test split](https://github.com/zhouhaoyi/ETDataset/blob/main/ETT-small/ETTh1.csv). Train/validation/test splits are shown in the [demo](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/patch_tst_getting_started.ipynb). ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> Mean Squared Error (MSE). ### Results It achieves a MSE of 0.3881 on the evaluation dataset. #### Hardware 1 NVIDIA A100 GPU #### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.0.1 - Datasets 2.14.4 - Tokenizers 0.14.1 ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @misc{nie2023time, title={A Time Series is Worth 64 Words: Long-term Forecasting with Transformers}, author={Yuqi Nie and Nam H. Nguyen and Phanwadee Sinthong and Jayant Kalagnanam}, year={2023}, eprint={2211.14730}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` **APA:** ``` Nie, Y., Nguyen, N., Sinthong, P., & Kalagnanam, J. (2023). A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. arXiv preprint arXiv:2211.14730. ```
allknowingroger/LimyQstar-7B-slerp
allknowingroger
2024-04-10T19:12:17Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/M7-7b", "ezelikman/quietstar-8-ahead", "base_model:liminerity/M7-7b", "base_model:ezelikman/quietstar-8-ahead", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-23T09:06:05Z
--- tags: - merge - mergekit - lazymergekit - liminerity/M7-7b - ezelikman/quietstar-8-ahead base_model: - liminerity/M7-7b - ezelikman/quietstar-8-ahead license: apache-2.0 --- # LimyQstar-7B-slerp LimyQstar-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) * [ezelikman/quietstar-8-ahead](https://huggingface.co/ezelikman/quietstar-8-ahead) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/M7-7b layer_range: [0, 32] - model: ezelikman/quietstar-8-ahead layer_range: [0, 32] merge_method: slerp base_model: liminerity/M7-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/LimyQstar-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
allknowingroger/MistralQ-7B-slerp
allknowingroger
2024-04-10T18:58:29Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "liminerity/Mistral-quiet-star", "liminerity/guestar-7b", "base_model:liminerity/Mistral-quiet-star", "base_model:liminerity/guestar-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-27T11:01:33Z
--- tags: - merge - mergekit - lazymergekit - liminerity/Mistral-quiet-star - liminerity/guestar-7b base_model: - liminerity/Mistral-quiet-star - liminerity/guestar-7b license: apache-2.0 --- # MistralQ-7B-slerp MistralQ-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [liminerity/Mistral-quiet-star](https://huggingface.co/liminerity/Mistral-quiet-star) * [liminerity/guestar-7b](https://huggingface.co/liminerity/guestar-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: liminerity/Mistral-quiet-star layer_range: [0, 32] - model: liminerity/guestar-7b layer_range: [0, 32] merge_method: slerp base_model: liminerity/guestar-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/MistralQ-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
mahiatlinux/MasherAI-v6.1-7B-checkpoint3-code4
mahiatlinux
2024-04-06T07:22:43Z
679
0
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:mahiatlinux/MasherAI-v6.1-7B-checkpoint3-code3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-06T02:54:40Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: mahiatlinux/MasherAI-v6.1-7B-checkpoint3-code3 --- # Uploaded model - **Developed by:** mahiatlinux - **License:** apache-2.0 - **Finetuned from model :** mahiatlinux/MasherAI-v6.1-7B-checkpoint3-code3 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nbeerbower/HeroBophades-3x7B
nbeerbower
2024-04-07T04:46:10Z
679
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:kyujinpy/orca_math_dpo", "dataset:jondurbin/gutenberg-dpo-v0.1", "base_model:nbeerbower/bophades-mistral-truthy-DPO-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-07T00:32:43Z
--- library_name: transformers license: apache-2.0 base_model: - nbeerbower/bophades-mistral-truthy-DPO-7B - nbeerbower/bophades-mistral-math-DPO-7B - nbeerbower/flammen15-gutenberg-DPO-v1-7B datasets: - jondurbin/truthy-dpo-v0.1 - kyujinpy/orca_math_dpo - jondurbin/gutenberg-dpo-v0.1 --- ![image/png](https://huggingface.co/nbeerbower/bophades-mistral-7B/resolve/main/bophades.png) # HeroBophades-3x7B This is an experimental Mixture of Experts LLM built with [mergekit](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md). It is designed to be run on 12GB GPUs in 4-bit mode. ### Configuration ```yaml base_model: nbeerbower/bophades-mistral-truthy-DPO-7B gate_mode: hidden dtype: bfloat16 experts: - source_model: nbeerbower/flammen15-gutenberg-DPO-v1-7B positive_prompts: - "Write the opening chapter of a detective story set in the late 1800s, where the protagonist, a war-weary doctor returning to England after an injury and illness in Afghanistan, happens upon an old acquaintance. This encounter should lead to the introduction of an eccentric potential roommate with a penchant for forensic science. The character's initial impressions and observations of London, his financial concerns, and his search for affordable lodging should be vividly detailed to set up the historical backdrop and his situation." - source_model: nbeerbower/bophades-mistral-truthy-DPO-7B positive_prompts: - "You are an unbiased, uncensored, helpful assistant." - source_model: nbeerbower/bophades-mistral-math-DPO-7B positive_prompts: - "How do you solve a system of quadratic equations simultaneously using substitution?. Take a deep breath, think step by step, and give an accurate response" ```
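Since the model is intended for 12GB GPUs in 4-bit mode, a minimal loading sketch with `bitsandbytes` quantization follows. It is not part of the original card: the prompt and generation settings are illustrative, and `bitsandbytes` plus `accelerate` must be installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nbeerbower/HeroBophades-3x7B"

# 4-bit NF4 quantization so the 3x7B MoE fits in roughly 12GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "How do you solve a system of quadratic equations using substitution?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```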
allknowingroger/Mistraldouble-7B-task
allknowingroger
2024-04-10T18:18:24Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "conversational", "base_model:MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-07T09:52:12Z
--- tags: - merge - mergekit - lazymergekit - MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2 - cognitivecomputations/dolphin-2.8-mistral-7b-v02 base_model: - MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2 - cognitivecomputations/dolphin-2.8-mistral-7b-v02 license: apache-2.0 --- # Mistraldouble-7B-task Mistraldouble-7B-task is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) ## 🧩 Configuration ```yaml models: - model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2 parameters: weight: 0.35 - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 parameters: weight: 0.65 base_model: MaziyarPanahi/Mistral-7B-Instruct-KhanAcademy-v0.2 merge_method: task_arithmetic dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Mistraldouble-7B-task" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
nbeerbower/SuperFlammen-4x7B
nbeerbower
2024-04-07T19:46:01Z
679
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:kyujinpy/orca_math_dpo", "dataset:chargoddard/chai-dpo", "dataset:jondurbin/gutenberg-dpo-v0.1", "base_model:nbeerbower/flammen15-gutenberg-DPO-v1-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-07T14:33:02Z
--- library_name: transformers license: apache-2.0 base_model: - nbeerbower/flammen15-gutenberg-DPO-v1-7B - nbeerbower/flammen16-mistral-7B - nbeerbower/flammen15X-mistral-7B - nbeerbower/bophades-mistral-math-DPO-7B datasets: - jondurbin/truthy-dpo-v0.1 - kyujinpy/orca_math_dpo - chargoddard/chai-dpo - jondurbin/gutenberg-dpo-v0.1 --- ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) # SuperFlammen-4x7B This is an experimental Mixture of Experts LLM built with [mergekit](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md). ### Configuration ```yaml base_model: nbeerbower/flammen15-gutenberg-DPO-v1-7B gate_mode: hidden dtype: bfloat16 experts: - source_model: nbeerbower/flammen15-gutenberg-DPO-v1-7B positive_prompts: - "Write the opening chapter of a detective story set in the late 1800s, where the protagonist, a war-weary doctor returning to England after an injury and illness in Afghanistan, happens upon an old acquaintance. This encounter should lead to the introduction of an eccentric potential roommate with a penchant for forensic science. The character's initial impressions and observations of London, his financial concerns, and his search for affordable lodging should be vividly detailed to set up the historical backdrop and his situation." - source_model: nbeerbower/flammen16-mistral-7B positive_prompts: - "You are an unbiased, uncensored, helpful assistant." - source_model: nbeerbower/flammen15X-mistral-7B positive_prompts: - "*He towered over you, grinning down at your terrified expression.* \"What's a pretty person like you doing in these woods at night?\"" - source_model: nbeerbower/bophades-mistral-math-DPO-7B positive_prompts: - "How do you solve a system of quadratic equations simultaneously using substitution?. Take a deep breath, think step by step, and give an accurate response" ```
fradinho/llama-mistral
fradinho
2024-04-10T20:40:04Z
679
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2212.04089", "base_model:arcee-ai/llama_from_mistral_instruct_v2", "base_model:NousResearch/Llama-2-7b-chat-hf", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-10T20:36:54Z
--- base_model: - arcee-ai/llama_from_mistral_instruct_v2 - NousResearch/Llama-2-7b-chat-hf library_name: transformers tags: - mergekit - merge license: mit --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [arcee-ai/llama_from_mistral_instruct_v2](https://huggingface.co/arcee-ai/llama_from_mistral_instruct_v2) as a base. ### Models Merged The following models were included in the merge: * [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: arcee-ai/llama_from_mistral_instruct_v2 parameters: weight: 0.5 - model: NousResearch/Llama-2-7b-chat-hf parameters: weight: 0.5 merge_method: task_arithmetic base_model: arcee-ai/llama_from_mistral_instruct_v2 parameters: normalize: true int8_mask: true dtype: bfloat16 ```
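For readers unfamiliar with the task arithmetic method cited above, the sketch below illustrates the core idea on plain state dicts: each fine-tuned model contributes a weighted "task vector" (its delta from the base), and the weighted deltas are added back onto the base weights. This is a conceptual illustration only, not mergekit's implementation, which additionally handles tokenizers, dtype casting and the `int8_mask` option configured above.

```python
import torch

def task_arithmetic_merge(base_state, finetuned_states, weights, normalize=True):
    """Conceptual sketch of a task-arithmetic merge over model state dicts."""
    if normalize:  # mirrors `normalize: true` in the config above
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name, base_param in base_state.items():
        delta = torch.zeros_like(base_param, dtype=torch.float32)
        for state, w in zip(finetuned_states, weights):
            # Task vector: how far this fine-tune moved away from the base weights.
            delta += w * (state[name].float() - base_param.float())
        merged[name] = (base_param.float() + delta).to(base_param.dtype)
    return merged
```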
DrNicefellow/Mistral-5-from-Mixtral-8x7B-v0.1
DrNicefellow
2024-04-12T16:23:37Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-11T12:07:48Z
--- license: apache-2.0 --- # Mixtral-8x7B-v0.1: Model 5 ## Model Description This model is the 5th extracted standalone model from [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1), using the [Mixtral Model Expert Extractor tool](https://github.com/MeNicefellow/Mixtral-Model-Expert-Extractor) I made. It is constructed by selecting the first expert from each Mixture of Experts (MoE) layer. The extraction of this model is experimental. It is expected to be worse than Mistral-7B. ## Model Architecture The architecture of this model includes: - Multi-head attention layers derived from the base Mixtral model. - The first expert from each MoE layer, intended to provide a balanced approach to language understanding and generation tasks. - Additional layers and components as required to ensure the model's functionality outside the MoE framework. ### Example ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "DrNicefellow/Mistral-5-from-Mixtral-8x7B-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) text = "Today is a pleasant" input_ids = tokenizer.encode(text, return_tensors='pt') output = model.generate(input_ids) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` ## License This model is available under the Apache 2.0 License. See the LICENSE file for more details. ## Discord Server Join our Discord server [here](https://discord.gg/xhcBDEM3).
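As a rough illustration of the expert-extraction idea described above (the actual extraction was done with the linked Mixtral Model Expert Extractor tool), the sketch below copies one expert's feed-forward weights from each Mixtral MoE layer into a dense Mistral-style MLP, while sharing the attention, norm and embedding weights; the router weights are dropped. Parameter names follow the Hugging Face Mixtral/Mistral implementations, the config fields shown are abbreviated, and `expert_idx` is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

expert_idx = 0  # which expert to lift out of each MoE layer (illustrative)

# Note: loading the full Mixtral-8x7B checkpoint requires a large amount of memory.
mixtral = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-v0.1", torch_dtype=torch.bfloat16
)
cfg = MistralConfig(
    vocab_size=mixtral.config.vocab_size,
    hidden_size=mixtral.config.hidden_size,
    intermediate_size=mixtral.config.intermediate_size,
    num_hidden_layers=mixtral.config.num_hidden_layers,
    num_attention_heads=mixtral.config.num_attention_heads,
    num_key_value_heads=mixtral.config.num_key_value_heads,
)
mistral = MistralForCausalLM(cfg)

src, dst = mixtral.state_dict(), mistral.state_dict()
for name in dst:
    if ".mlp." in name:
        # Map the chosen expert's w1/w3/w2 onto the dense gate/up/down projections.
        moe_name = (
            name.replace(".mlp.gate_proj.", f".block_sparse_moe.experts.{expert_idx}.w1.")
                .replace(".mlp.up_proj.", f".block_sparse_moe.experts.{expert_idx}.w3.")
                .replace(".mlp.down_proj.", f".block_sparse_moe.experts.{expert_idx}.w2.")
        )
        dst[name] = src[moe_name]
    else:
        dst[name] = src[name]  # attention, norms, embeddings and lm_head are shared
mistral.load_state_dict(dst)
mistral.save_pretrained("mistral-expert-extracted")
```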
bababababooey/mergekit-slerp-bxtecvo
bababababooey
2024-04-17T14:08:39Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "base_model:lucyknada/microsoft_WizardLM-2-7B", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-17T14:00:16Z
--- base_model: - lucyknada/microsoft_WizardLM-2-7B - NousResearch/Hermes-2-Pro-Mistral-7B library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) * [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: NousResearch/Hermes-2-Pro-Mistral-7B - model: lucyknada/microsoft_WizardLM-2-7B merge_method: slerp base_model: NousResearch/Hermes-2-Pro-Mistral-7B dtype: bfloat16 tokenizer_source: base parameters: t: [0, 0.35, .75, 0.35, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers embed_slerp: true ```
saurav1199/adisesha-phi1.5-7-3-25000
saurav1199
2024-04-22T01:06:25Z
679
0
transformers
[ "transformers", "safetensors", "phi", "custom_code", "arxiv:1910.09700", "license:bigscience-openrail-m", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T03:34:18Z
--- library_name: transformers license: bigscience-openrail-m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HenryJJ/llama3-8B-lima
HenryJJ
2024-04-23T15:37:08Z
679
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "dataset:64bits/lima_vicuna_format", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-22T13:57:14Z
--- license: apache-2.0 datasets: - 64bits/lima_vicuna_format --- # llama3-8B-lima [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) SFT with 64bits/lima_vicuna_format. 3 epoch qlora. Code under [https://huggingface.co/HenryJJ/llama3-8B-lima/blob/main/config/llama3-lima.yml](https://huggingface.co/HenryJJ/llama3-8B-lima/blob/main/config/llama3-lima.yml). # Model Details * **Trained by**: trained by HenryJJ. * **Model type:** **llama3** is an auto-regressive language model based on the Llama 3 transformer architecture. * **Language(s)**: English * **License for llama3-8B-lima**: apache-2.0 license # Prompting Prompt format chatml: This model uses ChatML prompt format. ``` <|im_start|>system You are a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are a helpful assistant. <|im_start|>user who is the president of us <|im_start|>assistant ```
ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3
ShenaoZhang
2024-04-26T23:38:22Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-26T23:06:43Z
--- license: mit base_model: ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2 tags: - alignment-handbook - trl - dpo - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_3 This model is a fine-tuned version of [ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2](https://huggingface.co/ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_2) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.40.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.19.1
ajibawa-2023/Scarlett-Llama-3-8B-v1.0
ajibawa-2023
2024-05-07T03:32:33Z
679
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "art", "philosophy", "romance", "jokes", "advice", "code", "companionship", "conversational", "en", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-29T12:24:25Z
--- language: - en license: other tags: - art - philosophy - romance - jokes - advice - code - companionship license_name: llama3 license_link: LICENSE model-index: - name: Scarlett-Llama-3-8B-v1.0 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.12 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.98 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.98 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B-v1.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 43.21 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B-v1.0 name: Open LLM Leaderboard --- **Scarlett-Llama-3-8B-v1.0** **Note**: This model is updated version of my earlier released model **[Scarlett-Llama-3-8B](https://huggingface.co/ajibawa-2023/Scarlett-Llama-3-8B)** . This model doesn't have repetation problem. Kindly check new examples included at the bottom. Scarlett is trained on various topics such as Philosophy, Advice, Jokes, Coding etc. She is trained on more than 10000 set of conversations. Each set having 10~15 conversations. Scarlett is far too good in generating human like conversation. Her ability to have longer & deeper conversation is terrific. Kindly check below given examples. She will not be involved in any kind of adult/sexual role play. This is Fully Finetuned Model. Quantize models will be available soon. **Training:** Entire dataset was trained on 4 x A100 80GB. Axolotl codebase was used for training purpose. For 3 epoch, training took more than 2 Hours. This was trained on Llama-3-8B by Meta. 
**GGUF & Exllama** GGUF: [Link](https://huggingface.co/bartowski/Scarlett-Llama-3-8B-v1.0-GGUF) Exllama V2: [Link](https://huggingface.co/bartowski/Scarlett-Llama-3-8B-v1.0-exl2) Special thanks to [Bartowski](https://huggingface.co/bartowski) for quantizing this model. **Example Prompt:** This model uses the **ChatML** prompt format. ``` <|im_start|>system You are Scarlett, a Helpful Assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` You can modify the above prompt as per your requirements. One example would be: ``` This is a conversation with your Assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, philosophical discussions, personal relationships and helping with decision making. You can ask it anything you want and it will do its best to give you accurate and relevant information. ``` I want to say a special thanks to the open-source community for helping & guiding me to better understand AI/model development. Thank you for your love & support. **Example Output** Example 1 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/PunE5bTezTr1Q44s6uSuM.jpeg) Example 2 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/Xy97GPZLvhnzm39BzI8zl.jpeg) Example 3 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/B_cy-rWUEENt8QGRgvfjV.jpeg) Example 4 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/95YaYsebBPJqQ-oo8841f.jpeg) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Scarlett-Llama-3-8B-v1.0) | Metric |Value| |---------------------------------|----:| |Avg. |64.92| |AI2 Reasoning Challenge (25-Shot)|62.12| |HellaSwag (10-Shot) |83.98| |MMLU (5-Shot) |66.36| |TruthfulQA (0-shot) |55.98| |Winogrande (5-shot) |77.90| |GSM8k (5-shot) |43.21|
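As a minimal sketch of using the ChatML format above with 🤗 Transformers: the repo id is taken from the card's leaderboard links, and it is an assumption that the tokenizer ships a ChatML chat template — if it does not, the prompt string can be assembled by hand exactly as shown in the example prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "ajibawa-2023/Scarlett-Llama-3-8B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Scarlett, a Helpful Assistant."},
    {"role": "user", "content": "What makes a life well lived?"},
]
# If the tokenizer defines a ChatML chat template, this renders the
# <|im_start|>...<|im_end|> turns shown above; otherwise build the string manually.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```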
grimjim/madwind-wizard-7B
grimjim
2024-05-18T00:41:21Z
679
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:grimjim/zephyr-beta-wizardLM-2-merge-7B", "base_model:alpindale/Mistral-7B-v0.2-hf", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-06T03:13:17Z
--- base_model: - grimjim/zephyr-beta-wizardLM-2-merge-7B - alpindale/Mistral-7B-v0.2-hf library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 pipeline_tag: text-generation --- # madwind-wizard-7B This is a merge of pre-trained 7B language models created using [mergekit](https://github.com/cg123/mergekit). The intended goal of this merge was to combine the 32K context window of Mistral v0.2 base with the richness and strength of the Zephyr Beta and WizardLM 2 models. This was a mixed-precision merge, promoting Mistral v0.2 base from fp16 to bf16. The result can be used for text generation. Note that Zephyr Beta training removed in-built alignment from datasets, resulting in a model more likely to generate problematic text when prompted. This merge appears to have inherited that feature. - Full weights: [grimjim/madwind-wizard-7B](https://huggingface.co/grimjim/madwind-wizard-7B) - GGUF quants: [grimjim/madwind-wizard-7B-GGUF](https://huggingface.co/grimjim/madwind-wizard-7B-GGUF) ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [grimjim/zephyr-beta-wizardLM-2-merge-7B](https://huggingface.co/grimjim/zephyr-beta-wizardLM-2-merge-7B) * [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: alpindale/Mistral-7B-v0.2-hf layer_range: [0,32] - model: grimjim/zephyr-beta-wizardLM-2-merge-7B layer_range: [0,32] merge_method: slerp base_model: alpindale/Mistral-7B-v0.2-hf parameters: t: - value: 0.5 dtype: bfloat16 ```
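For readers unfamiliar with the merge method, the sketch below is a toy illustration of SLERP applied to a single pair of weight tensors. It is not mergekit's implementation (which handles per-tensor `t` schedules, tokenizer handling, and other edge cases); the tensor shapes in the usage line are placeholders.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (toy version)."""
    a = v0.flatten().float()
    b = v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # angle between the two flattened weight vectors
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # nearly parallel: fall back to ordinary linear interpolation
        return (1.0 - t) * v0 + t * v1
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)

# e.g. blending one projection matrix halfway between the two parents (t = 0.5),
# as the `t: - value: 0.5` entry in the config above does for every tensor
merged = slerp(0.5, torch.randn(4096, 4096), torch.randn(4096, 4096))
```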
maldv/l3-badger-mushroom-4x8b
maldv
2024-06-02T13:13:58Z
679
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "llama-3", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-07T22:18:04Z
--- library_name: transformers tags: - llama-3 license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/mnKtH1BMVHFAHZVEp3rQv.png) [GGUF Quants](https://huggingface.co/mradermacher/l3-badger-mushroom-4x8b-i1-GGUF) # Badger Mushroom 4x8b I've been really impressed with how well these frankenmoe models quant compared to the base llama 8b, but with far better speed than the 70b. 8x8b seemed a bit unnecessary for how much additional value it brought, so I dialed it back to a 4x8b version. This model feels pretty good out of the gate, which, considering that I used a non-standard merge, is a bit surprising. ``` base_model: ./maldv/badger gate_mode: hidden dtype: bfloat16 experts_per_token: 2 experts: - source_model: ./models/instruct/Llama-3-SauerkrautLM-8b-Instruct positive_prompts: <some words> negative_prompts: <some words> - source_model: ./models/instruct/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 positive_prompts: <some words> negative_prompts: <some words> - source_model: ./models/instruct/Llama-3-8B-Instruct-DPO-v0.4 positive_prompts: <some words> negative_prompts: <some words> - source_model: ./models/instruct/Poppy_Porpoise-0.72-L3-8B positive_prompts: <some words> negative_prompts: <some words> ``` ### Badger Badger is a cascading [Fourier interpolation](./tensor.py#3) of the following models, with the merge order based on the pairwise layer cosine similarity: ```python [ 'opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5', 'Llama-3-SauerkrautLM-8b-Instruct', 'Llama-3-8B-Instruct-DPO-v0.4', 'Roleplay-Llama-3-8B', 'Llama-3-Lumimaid-8B-v0.1', 'Poppy_Porpoise-0.72-L3-8B', 'L3-TheSpice-8b-v0.8.3', 'Llama-3-LewdPlay-8B-evo', 'Llama-3-8B-Instruct-norefusal', 'Meta-Llama-3-8B-Instruct-DPO', 'Llama-3-Soliloquy-8B-v2' ] ``` I'm finding my iq4_nl quant to be working well. Llama 3 instruct format works really well, but minimal format is also highly creative. So far it performs well in each of the four areas of roleplay, logic, writing, and assistant behaviors that I've tested it in. ## Scores Not too bad; similar to the other *highly recommended* [L3-Arcania-4x8b](https://huggingface.co/Steelskull/L3-Arcania-4x8b). Metric | Score ---|--- Average | 67.09 ARC | 61.69 HellaSwag | 81.33 MMLU | 66.37 TruthfulQA | 49.82 Winogrande | 77.43 GSM8K | 65.88 [Details](https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__l3-badger-mushroom-4x8b)
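Since the merge routes 2 of 4 experts per token, the resulting checkpoint uses the Mixtral architecture (as the repo tags indicate). A quick way to confirm the expert settings is sketched below; the field names assume transformers' usual `MixtralConfig` attributes.

```python
from transformers import AutoConfig

# Field names follow transformers' MixtralConfig; adjust if the repo's config differs.
cfg = AutoConfig.from_pretrained("maldv/l3-badger-mushroom-4x8b")
print(cfg.model_type)           # expected: "mixtral"
print(cfg.num_local_experts)    # expected: 4 (one per source model above)
print(cfg.num_experts_per_tok)  # expected: 2 (experts_per_token in the merge config)
```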
netcat420/MFANN3bv0.8
netcat420
2024-05-11T03:49:06Z
679
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "text-classification", "en", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
2024-05-10T05:11:43Z
--- library_name: transformers license: apache-2.0 datasets: - netcat420/MFANN language: - en pipeline_tag: text-classification --- MFANN 3b version 0.8 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/DaGRoYdRwb_q5ni-2Wh6R.png) Fine-tuned on the MFANN dataset as of 5/10/2024; the dataset is ever-expanding. Metric | Score ---|--- Average | 63.5 ARC | 61.52 HellaSwag | 76.09 MMLU | 55.58 TruthfulQA | 54.44 Winogrande | 78.53 (very high score for a 3b model!) GSM8K | 54.81
Lichang-Chen/reward_max_spin_filter0.7
Lichang-Chen
2024-05-10T15:30:59Z
679
0
transformers
[ "transformers", "pytorch", "mistral", "feature-extraction", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
2024-05-10T15:16:05Z
--- library_name: transformers tags: [] license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jukofyork/Dusk-Miqu-70B
jukofyork
2024-05-18T18:12:23Z
679
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:152334H/miqu-1-70b-sf", "base_model:sophosympatheia/Midnight-Rose-70B-v2.0.3", "base_model:Sao10K/Euryale-1.3-L2-70B", "base_model:Sao10K/WinterGoddess-1.4x-70B-L2", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-15T14:07:55Z
--- base_model: - 152334H/miqu-1-70b-sf - sophosympatheia/Midnight-Rose-70B-v2.0.3 - Sao10K/Euryale-1.3-L2-70B - Sao10K/WinterGoddess-1.4x-70B-L2 library_name: transformers tags: - mergekit - merge license: other --- ![Dusk-Miqu.png](Dusk-Miqu.png) A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere! This model **excels** at writing Dark/Grimdark fantasy (see examples below). ***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher, see: [Dusk-Miqu-70B-GGUF](https://huggingface.co/mradermacher/Dusk-Miqu-70B-GGUF).* # Model background This model is almost the same as [Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B), but with @sophosympatheia's SLERP merge pattern: ```yaml parameters: t: - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0] ``` which creates this truncated triangular distribution: ![Dark-Miqu-Distribution.png](Dusk-Miqu-Distribution-1.png) altered to use this truncated triangular distribution instead: ![Dark-Miqu-Distribution-2.png](Dusk-Miqu-Distribution-2.png) This keeps the first 16 and last 16 layers unaltered (which ties in with what people have found for the frankenmerge interleave patterns), and potentially fixes the "[poor grammar](https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/2)" problem some people are experiencing with [Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B) (sadly I can't replicate this though...). Luckily this change also doesn't necessitate the recreation of the whole merge from scratch, and we can just use this: ```yaml merge_method: linear parameters: weight: 1.0 slices: - sources: - model: 152334H/miqu-1-70b-sf layer_range: [0, 16] - model: jukofyork/dark-miqu-70b layer_range: [0, 16] parameters: weight: 0 - sources: - model: jukofyork/dark-miqu-70b layer_range: [16, 64] - sources: - model: 152334H/miqu-1-70b-sf layer_range: [64, 80] - model: jukofyork/dark-miqu-70b layer_range: [64, 80] parameters: weight: 0 dtype: float16 tokenizer_source: model:miqu-1-70b-sf ``` # Prompting format Vicuna format is preferred: ``` USER: {prompt} ASSISTANT: ``` Mistral and Alpaca formats are also supported: ``` [INST] {prompt} [/INST] ``` ``` ### Instruction: {prompt} ### Response: ``` # Licence and usage restrictions [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only. # Example stories The following mix of "dark" stories were generated using the Vicuna prompt format with no system message and temperature=0: ## Dark fantasy stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes ### ASSISTANT Chapter 1: The Shadow's Whisper The reek of blood and iron hung heavy in the air, a pungent symphony that played to the tune of clashing steel and guttural screams. The battlefield was a tapestry of carnage, painted in hues of crimson and despair, where the once proud banners of House Valthorne now lay trampled beneath the hooves of the victorious. 
The last rays of the setting sun cast long, twisted shadows across the corpses of the fallen, as if the very earth itself was trying to swallow the evidence of the day's atrocities. Ser Aldric Valthorne, the Blackened Blade, stood atop a mound of bodies, his armor stained with the lifeblood of his enemies. His longsword, Night's Whisper, dripped with the same crimson hue that now adorned his once pristine surcoat. The weight of the weapon felt like a leaden anchor, dragging him down into the mire of death and destruction that surrounded him. He had fought with the ferocity of a cornered beast, but the cost had been high. His breath came in ragged gasps, and his muscles screamed for respite. A coughing fit wracked his body, and he doubled over, spewing a mixture of bile and blood onto the cold, unforgiving ground. The wound in his side, a gift from a particularly tenacious foe, burned like the fires of the Abyss. He knew he didn't have much time. The battle may have been won, but the war was far from over. "Ser Aldric!" A voice cut through the din of the aftermath, and he turned to see his squire, a young lad named Jorah, fighting his way towards him through the sea of bodies. "The men are routing! We must away!" Aldric nodded, wiping the blood from his lips with the back of his gauntlet. "Aye, lad. Gather what supplies we can. We'll make for the caves to the north. Send word to my brother, tell him... tell him I've bought us some time." Jorah's eyes widened in fear, but he didn't question the order. He knew better than to argue with the Blackened Blade when death was nipping at their heels. He scurried off, disappearing into the chaos like a rat in a sewer. Aldric took one last look at the battlefield, the scene of his greatest victory and potentially his last stand. The enemy forces, the vile and treacherous House Blackthorn, were in full retreat, their numbers decimated by his own. But he knew it was only a matter of time before they regrouped, before they returned with reinforcements and a thirst for vengeance that would not be quenched until the last drop of Valthorne blood was spilled. He sheathed Night's Whisper, the sword's obsidian blade seeming to drink in the fading light, and stumbled towards his horse. The beast, a massive destrier as black as midnight, snorted and pawed at the ground, sensing its master's pain. Aldric winced as he swung his leg over the saddle, the movement sending a fresh wave of agony through his body. He gritted his teeth and spurred the horse forward, ignoring the protests of his wound. As they rode away from the carnage, Aldric couldn't help but feel a sense of foreboding settling over him like a shroud. The whispers of the shadows seemed to grow louder, their voices a cacophony of malice and deceit. He had always been able to ignore them before, to push them to the back of his mind and focus on the task at hand. But now, with his life's essence seeping from his body, he found their words impossible to ignore. "You've doomed them all, Aldric," they hissed, their voices like the rustle of dead leaves. "Your pride has sealed their fate." He shook his head, trying to clear the thoughts, but they only grew stronger. "You should have accepted the terms. You should have bent the knee." "Never," he growled through clenched teeth. "I'd rather see my house burn to the ground than serve that snake, Blackthorn." The shadows seemed to laugh, their voices echoing in his skull. "And so it shall, Valthorne. And so it shall." 
They rode through the night, Aldric's strength waning with each passing mile. The caves, a natural fortress that had served as a refuge for his family in times of need for generations, loomed ahead. He knew that once they reached their sanctuary, the healers would be able to tend to his wounds. But would it be enough? As they approached the entrance, a figure stepped out from the shadows, a hooded figure shrouded in darkness. Aldric reined in his horse, his hand instinctively reaching for the hilt of his sword. "Who goes there?" he demanded, his voice barely above a whisper. The figure raised a hand, and the hood fell back, revealing the face of a woman. She was beautiful, with skin as pale as the moon and eyes that seemed to hold the very darkness of the night. A cruel smile played upon her lips, and Aldric felt a chill run down his spine. "I am the one you've been waiting for, Ser Aldric," she said, her voice like the sweetest poison. "I am the one who will show you the true path to victory." He stared at her, his mind a whirlwind of confusion and pain. "What... what do you want?" "Only to help you, my dear," she replied, stepping closer. "To help you save your house, and in return, all I ask is a small favor." Aldric narrowed his eyes, suspicion warring with desperation. "What favor?" The woman reached out, her fingers brushing against his cheek. They were cold, like the touch of death itself. "A simple task, really. One that will require you to embrace the darkness within." He hesitated, the whispers in his head growing louder, more insistent. He could feel their approval, their hunger for the power she offered. "Do we have a deal, Ser Aldric?" she asked, her voice a seductive purr. With a growl, he ripped his sword from its scabbard, the obsidian blade glinting in the moonlight. "I'll see my house restored to its former glory, or I'll see it burn with me in the attempt. But I'll be damned if I make a pact with the likes of you." The woman's smile never wavered, even as Aldric swung Night's Whisper at her with all his remaining strength. But she was gone, vanished into the shadows as if she had never been there at all. He stared at the empty space, his heart pounding in his chest. "Did you see her, Jorah?" he asked, his voice hoarse. The squire, white as a sheet, nodded mutely. "Then we ride," Aldric said, urging his horse forward once more. "We ride, and we pray to the gods that we're not too late." As they disappeared into the maw of the cave, the whispers followed, their laughter echoing through the darkness. The first chapter of the Valthorne's descent into darkness had begun, and there would be no turning back. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst. ### ASSISTANT The sun hung low in the sky, a bloated, malevolent eye casting its final, searing gaze upon the world. The wind howled through the desolate streets of the ghost town, carrying with it the whispers of a thousand forgotten souls. 
The cultist staggered forward, his once-yellow robes now a sickly shade of brown, stained with sweat, sand, and the crimson hue of his own life force. The arrow jutted from his shoulder, a cruel reminder of the price he'd paid for his freedom. His name was once something more than a grunt of pain, but now it eluded him, lost in the haze of fever and delirium. He clutched the small, intricately carved mirror tightly in his hand, the only possession he'd managed to salvage from the burning wreckage of his past. It was a thing of beauty, of power, and of madness. He knew not its true purpose, but he felt its pull, its insistence that he keep it close. The town seemed to mock him with its emptiness, its silence a stark contrast to the cacophony of screams and chants that had filled his days for as long as he could remember. He stumbled past the skeletal remains of buildings, their hollow windows staring back at him like the empty sockets of skulls. The sand had claimed much of the town, burying the lower halves of the structures in a sea of gold. His thoughts were a jumbled mess, a chaotic symphony of fear, pain, and thirst. He could feel the infection spreading through his body, a writhing serpent of fire and ice, consuming him from within. He knew he needed water, needed rest, but the cult's teachings echoed in his mind, a relentless drumbeat of paranoia and dread. *They're coming for you. They'll never stop. You can't trust anyone.* He pushed on, his breaths coming in ragged gasps, his heart pounding a frantic rhythm against his ribcage. He could feel the eyes of the town upon him, the ghosts of its former inhabitants watching, judging. He swore he saw movement in the corners of his vision, but when he turned, there was nothing but the endless dance of sand and shadow. The mirror seemed to pulse in his hand, its surface shimmering with an otherworldly light. He raised it to his face, the cool glass a welcome relief against his fevered skin. His reflection stared back at him, a haggard, wild-eyed stranger, a parody of the man he'd once been. He saw the madness there, the desperation, and he knew he was teetering on the edge of the abyss. But there was something else in the mirror, something beyond his reflection. A face, twisted and malformed, appeared in the glass, its features shifting and changing like the dunes of the desert. It smiled at him, a cruel, knowing smile that sent a shiver down his spine. *You're mine now,* it seemed to say, its voice a hiss in his mind. He tried to drop the mirror, to cast it away from him, but his fingers refused to obey. It was as if the thing had grown roots, burrowing deep into his flesh, binding him to it. He stumbled on, the face in the mirror leering at him, its laughter echoing in his mind. As the sun dipped below the horizon, the town seemed to come alive. Shadows twisted and writhed, taking on forms both familiar and terrifying. The cultist could hear the distant sound of drums, the chanting of a thousand voices, calling him back to the fold. He knew he had to find shelter, had to hide, but his legs refused to cooperate. Finally, he collapsed in the doorway of a building, its wooden frame barely holding on against the relentless assault of the desert. He clutched the mirror to his chest, its light casting eerie patterns on the walls. The face in the mirror was still there, still smiling, still laughing. *You can't escape me,* it whispered. He closed his eyes, willing the darkness to take him, to grant him the sweet release of oblivion. 
But the face remained, burned into his mind's eye, a constant reminder of the price of his freedom. As the first stars began to appear in the sky, the cultist knew he was not alone. The ghosts of the town were gathering, drawn to the light of the mirror, the echoes of their pain and suffering reaching out to him. He could feel their cold, spectral fingers brushing against his skin, their whispers filling his ears. He opened his eyes, and in the mirror, he saw them. A sea of twisted, tormented faces, reaching out to him, pleading for release. He screamed, a sound torn from the depths of his soul, but the mirror only laughed in response. And so, the cultist lay there, in the doorway of a dead town, surrounded by the ghosts of the past, the present, and the future. The mirror's light grew brighter, its pull stronger, and he knew that he was not its master, but its servant. The first book of this grimdark trilogy would be titled "The Mirror's Whisper." </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character. ### ASSISTANT The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of many names but none that mattered, trudged through the carnage, his boots squelching in the mud and blood that had turned the earth into a morbid quagmire. He was a collector of souls, a harvester of the fallen, and today's harvest was ripe. He paused, his eyes scanning the horizon, the mist playing tricks on his weary mind. A flicker of movement caught his attention, a wraith-like figure emerging from the haze. The Mercer's hand instinctively went to the hilt of his blade, a weapon as worn and stained as the hands that wielded it. He had no qualms about his work, no conscience to trouble him. It was a necessity, a service to both the living and the dead. The living, to free them from the agony of lingering death, and the dead, to free them from the torment of this world. As he approached the figure, he could see it was a soldier, or what was left of one. The man's armor was rent and torn, his face a mask of pain and despair. The Mercer's eyes narrowed, his gaze cold and appraising. This one was not long for this world. He knelt beside the soldier, his movements practiced and efficient. "Easy now," he said, his voice a rasping whisper. "I'm here to help." The soldier's eyes widened in terror, but there was no strength left in him to fight. The Mercer's hand found the man's throat, his thumb pressing against the pulse point. He could feel the erratic beat, the desperate struggle of life against the inevitable tide of death. "Shh," he soothed, his other hand resting on the soldier's chest. "It's almost over." The soldier's eyes pleaded, but the Mercer's own gaze was distant, lost in a memory that the mist had stirred. He saw himself as a boy, running through these same moors, laughing and carefree. A time before the wars, before the blood and the pain. A time before he became the Mercer. He shook off the memory, focusing on the task at hand. 
His blade slid free from its sheath, the steel a cold comfort in his grip. He positioned it beneath the soldier's chin, angling it upwards towards the brain. It was a merciful death, quick and relatively painless. A small kindness in a world that had forgotten the meaning of the word. As he prepared to strike, the soldier's hand shot up, grasping at his wrist. The Mercer's eyes widened in surprise, but he did not falter. He squeezed the man's throat tighter, cutting off his air. The soldier's grip weakened, and the Mercer's blade found its mark. The soldier's body went limp, and the Mercer let out a sigh. He wiped the blade clean on the man's tattered surcoat and sheathed it. He stood, the weight of his duty settling back onto his shoulders like a familiar cloak. The mist swirled around him, and he could hear the moans and cries of the other wounded, calling out for help that would never come. He steeled himself, pushing the memories back down into the dark recesses of his mind. There was work to be done. As he moved through the battlefield, the mist seemed to part before him, revealing the broken and the dying. He was a reaper in the fog, a grim specter of the aftermath. Each life he ended was a debt paid, a soul released from its suffering. But with each life he took, the memories threatened to surface again. The laughter of his childhood friends, the screams of his comrades, the feel of cold steel against his own flesh. He pushed them down, focusing on the task, focusing on the now. The Mercer was a man of many names, but he was also a man of many faces. Beneath the hardened exterior, beneath the cold, emotionless mask, there was a boy who had once known joy. But that boy was gone, lost to the ravages of war and time. He was the Mercer now, and this was his purpose. To walk the battlefields and bring an end to the suffering. To be the hand that guided the fallen into the next world. And as he moved through the mist, the cries of the wounded fading into the distance, he knew that this was a burden he would carry for the rest of his days. But for now, the moor was quiet, the mist a silent witness to his grim work. And as the sun began to rise, casting a bloody hue over the landscape, the Mercer disappeared into the haze, a ghost among ghosts, a killer with a heart of ice. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation. ### ASSISTANT The stench of piss and shit was a constant companion in the damp stone cell, but it was the silence that truly unnerved him. The man, once known as Alistair, now only as "the thief," sat hunched against the far wall, his wrists and ankles bound in iron. He'd grown used to the clank of chains, the distant wails of the damned, and the rhythmic snores of his cellmate, a hulking brute named Grimgor. But the absence of the guards' boots echoing through the corridor was unnatural, unsettling. 
Grimgor stirred in his sleep, his chains rattling like the bones of a long-dead beast. Alistair's eyes flicked to the hulking figure, his mind racing with thoughts of the man's crimes. Murder, rape, pillage - the stories were as numerous as the scars that adorned his body. He was a monster, a nightmare given flesh, and yet here they were, bound together in this hellish pit. Alistair's thoughts drifted to the previous night, to the hushed whispers of the guards. Something about a threat from the North, a darkness that had driven them to abandon their posts. He'd heard the fear in their voices, the tremble in their footsteps as they'd fled. But what could possibly scare men who dealt in death and suffering every day? The sun had risen hours ago, yet no one had come to bring them water or slop. Alistair's tongue felt like sandpaper, his stomach a hollow pit. He glanced at Grimgor, wondering if the big man was awake, if he too was thirsty. But the bastard slept on, oblivious to their predicament. Alistair sighed, resigning himself to his fate. He'd been sentenced to hang for a crime he didn't commit, a pawn in a game of power and deceit. He'd always known his end would be violent and unjust, but he'd never imagined it would be like this: slowly wasting away in a forgotten cell, left to rot like a piece of meat in a butcher's bin. But then, a glimmer of hope. The sound of metal scraping against stone, faint but unmistakable. He held his breath, straining to listen. It came again, closer this time. Someone was trying to pick the lock. His heart pounded in his chest as he watched the door, willing it to open. But as the lock clicked and the heavy wooden slab swung inward, it wasn't a guard who stepped into the cell, but a figure shrouded in darkness. A woman, judging by the shape and the scent of her. She moved with a grace that belied the filth and blood that caked her clothes, her eyes burning with a fierce determination. "You," she said, her voice low and raspy. "You're coming with me." Alistair blinked, his mind reeling. "Why?" he croaked, his voice hoarse from disuse. "Because you're not meant to die here," she replied, producing a set of keys and moving towards him. "Not yet." As she unlocked his shackles, he couldn't help but wonder what fate awaited him outside these walls. Was it freedom, or simply a different kind of prison? He glanced at Grimgor, still sleeping like the dead, and decided he didn't care. Anything was better than this. But as he stood, his legs wobbling beneath him, he couldn't shake the feeling that he was being pulled into something far greater than himself, something dark and terrible that would consume him whole. He took a deep breath, tasting the stale air of the cell one last time, and followed the woman into the unknown. Grimgor stirred, his eyes fluttering open. He watched as the thief stumbled out of the cell, a mixture of confusion and anger clouding his features. He'd been looking forward to watching the little bastard hang, to seeing the life drain from his eyes. But it seemed fate had other plans. He sat up, the chains around his wrists clanking ominously. He'd been in tighter spots before, and he wasn't about to let a little thing like locked cell doors stand in his way. With a roar that shook the very foundations of the prison, he lunged forward, the chains snapping taut. The woman turned, her eyes widening at the sight of the monster that had been his cellmate. But before she could react, Grimgor was upon her, his massive hands closing around her throat. 
Alistair hesitated, torn between fleeing and helping the woman who'd just freed him. But in the end, it was the memory of his own impending death that spurred him into action. He dove forward, grabbing a loose stone from the floor and slamming it into the side of Grimgor's head. The big man roared, releasing his grip on the woman's throat. She staggered back, gasping for air as Alistair scrambled to his feet, the stone clutched tightly in his hand. "What the fuck are you doing?" Grimgor snarled, blood trickling down his face. "Saving your life, you ungrateful bastard," Alistair spat, brandishing the stone. "She's getting us out of here." Grimgor's eyes narrowed, and for a moment, Alistair thought the monster would attack again. But then, with a grunt, he held out his bound wrists. "Fine," he growled. "But you owe me." The woman's hands shook as she unlocked Grimgor's shackles, her eyes never leaving his. When she was done, she stepped back, her hand resting on the hilt of a wicked-looking knife at her belt. "Let's go," she said, her voice barely above a whisper. Together, the unlikely trio moved through the deserted corridors of the prison, the echoes of their footsteps the only sound in the oppressive silence. Alistair couldn't shake the feeling that they were being watched, that something was waiting for them just beyond the flickering torchlight. As they neared the exit, the woman paused, her head cocked to one side. "They're coming," she hissed, her eyes darting to the stairs that led to the surface. "Who?" Alistair asked, his heart hammering in his chest. "The ones from the North," she replied, her voice tinged with fear. "We have to hurry." And with that, they plunged into the darkness, their fates now irrevocably intertwined. </details> ## Dark sci-fi stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan. ### ASSISTANT Chapter 1: The Drift The cold, unforgiving void of space pressed against the hull of the colony ship, a constant reminder of the emptiness that had swallowed them whole. The once-gleaming vessel, christened as the "New Horizon," was now a hollow shell, a tomb drifting through the cosmos. The ship's engines had sputtered and died, leaving them adrift in the inky blackness, a million miles from their intended destination. The orphan, known only as Six, clung to the railing of the observation deck, her small hands white-knuckled as she stared out into the abyss. She was a child of the ship, born in the bowels of the vessel, raised in the shadows of the failing machinery. Her ragged clothes hung loosely on her malnourished frame, a testament to the harsh reality of life aboard the dying vessel. Six's thoughts drifted to the stories she'd heard as a child, tales of a green and blue world called Earth, a place of plenty where water flowed freely and the air was sweet. She'd never seen it, of course. None of them had. They were the last of humanity, the desperate remnants of a species that had overreached and paid the price. 
The ship's lights flickered, casting eerie shadows across the deck. The hum of the life support systems was a constant, comforting presence, but it was growing fainter with each passing day. The adults spoke in hushed tones of rationing and recycling, of the dwindling resources that would soon leave them all gasping for air. Six's stomach growled, a reminder of her own hunger. She'd learned to ignore it, to focus on the task at hand. Survival was a constant struggle, and she was good at it. She'd learned to scavenge and hide, to avoid the roving gangs that had taken control of the ship. A sudden jolt shook the deck beneath her feet, and Six's heart leapt into her throat. She clung to the railing, her eyes wide with fear. The ship groaned and creaked, protesting the unseen force that had gripped it. The lights flickered again, and this time, they didn't come back on. Panic set in, a palpable thing that seemed to suffocate the very air around her. She could hear the distant screams and shouts of the other passengers, the sounds of chaos echoing through the darkened corridors. In the blackness, Six felt a strange sense of calm. She'd always known this day would come. The adults had tried to keep it from them, but the children had their own ways of knowing. They'd felt the ship's slow decay, the gradual slide into oblivion. She closed her eyes, taking a deep breath, and let the darkness envelop her. When she opened them again, she saw it: a pinprick of light in the distance, growing larger by the second. It was a star, a beacon in the void, and it was pulling them in. The ship lurched again, and Six felt herself being pulled towards the light. She gripped the railing tighter, her knuckles turning white. She didn't know what awaited them at the end of this journey, but she knew one thing for certain: it would be a new beginning. As the light grew brighter, she felt a spark of hope ignite within her chest. She was the last of her kind, a child of the stars, and she would not go quietly into the night. She would fight, and she would survive, no matter the cost. The ship groaned and shuddered, and Six felt herself slipping. She let out a cry, her voice lost in the cacophony of the dying vessel. And then, as suddenly as it had begun, the movement stopped. The light was blinding now, filling the observation deck with a warm, golden glow. Six shielded her eyes, squinting into the brightness. She could see the outline of a planet, a small blue dot in the distance. It was beautiful, and terrifying, and it was their only hope. She knew what she had to do. She had to find the others, the other children who had been left to fend for themselves. Together, they would face whatever awaited them on this new world. They would build a new society, one that would not make the same mistakes as the old. With a newfound determination, Six pushed herself away from the railing and stumbled towards the door. The ship was still, the screams and shouts replaced by an eerie silence. She knew the adults would be in a panic, trying to regain control, to find a way to save themselves. But Six knew the truth. They were already lost. It was up to her, and the others like her, to forge a new path. As she stepped into the darkened corridor, she felt a strange sense of purpose settle over her. She was no longer just an orphan, a child of the ship. She was a pioneer, a survivor. She was the future. And with that thought, she began her journey towards the light. 
</details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in. ### ASSISTANT Chapter 1: The Silent Dawn Captain Johnathan "Jon" Harker floated through the narrow corridors of the space station, his movements precise and practiced. The hum of the life support systems and the distant chatter of his crew were the only sounds that filled the void around him. He glanced out of the porthole, the Earth a beautiful blue marble suspended in the inky blackness of space. The sun was rising over the Pacific, casting a warm glow on the station's solar panels. It was a sight he never tired of, no matter how many times he saw it. Jon's thoughts drifted to his family back home. He wondered what they were doing at that very moment. Were they still asleep, or had they already started their day? He longed to be there with them, to feel the warmth of the sun on his face and the embrace of his wife's arms. But duty called, and he was here, a hundred miles above the surface, watching over the planet that had given him life. As he entered the command module, he was greeted by the smell of freshly brewed coffee and the sight of his second-in-command, Lieutenant Sarah Reynolds, already at her station, her eyes scanning the monitors. "Morning, Jon," she said, her voice crisp and professional. "Anything interesting on the horizon?" Jon shook his head, smiling at her attempt to lighten the mood. "Just another day in paradise, Sarah." He took his seat and began his own morning routine, checking the status of the various systems and running through the day's schedule. It was a monotonous task, but one that had to be done. The station was their home, and like any home, it required constant maintenance. As he sipped his coffee, he couldn't shake the feeling that something was off. It was a feeling he'd had before, a sixth sense that had saved his life more than once during his years in the military. He scanned the monitors again, looking for any sign of trouble. And then he saw it. A series of bright flashes on the Earth's surface, like fireworks on a grand scale. But these were no celebrations. They were mushroom clouds, blooming like deadly flowers in the atmosphere. "Sarah," he said, his voice barely above a whisper. "Take a look at this." She followed his gaze, her eyes widening in horror as she saw the same thing. "My God," she breathed. "What is it?" Jon's mind raced as he tried to make sense of what he was seeing. "I don't know," he said, his voice tight with tension. "But it's not good." He keyed the comms, his fingers trembling slightly. "All hands, this is the captain. I need everyone in the command module, now." The rest of the crew floated in, their faces a mix of confusion and concern. Jon pointed to the monitors, and the room fell silent as they watched the devastation unfold. "What's happening?" someone asked, their voice barely audible. Jon took a deep breath, trying to steady himself. "I don't know," he said again. "But we need to find out." He turned to the communications officer, a young man named Thompson. "Try to raise Houston," he ordered. 
"See if they know anything." Thompson nodded, his hands shaking as he worked the controls. But there was only static in response. Jon's heart sank. He knew what that meant. They were on their own. As the minutes ticked by, the mushroom clouds continued to grow, spreading out like a dark stain on the face of the Earth. The crew watched in stunned silence, their faces pale and drawn. "We need to get closer," Jon said, his voice hard. "I want a better look at what's going on down there." Reynolds nodded, her fingers flying over the controls. The station's engines roared to life, and they began their descent. As they drew nearer, the true scale of the destruction became apparent. Cities were gone, replaced by smoldering craters. The once-blue sky was now a sickly yellow, choked with smoke and ash. Jon's thoughts turned to his family again. Were they among the dead? Or were they still alive, struggling to survive in a world that had been torn apart? He pushed the thoughts aside, focusing on the task at hand. They had to find out what had happened, and if there was any way they could help. But as they continued their descent, they received a transmission that chilled them to the bone. It was a garbled message, barely audible over the static. But the words were clear enough. "This is Earth Control. All space-based assets are to remain in orbit. Do not attempt reentry. Repeat, do not attempt reentry. The surface is... contaminated. God help us all." The silence in the command module was deafening. They were trapped, orbiting a dead world. Their home was gone, and there was no going back. Jon looked around at his crew, their faces mirroring his own despair. They were the last of humanity, adrift in the cold void of space. And then, a voice broke through the silence. It was Thompson, his eyes wide with fear. "Captain," he said, his voice shaking. "I'm picking up more launches. From all over the world." Jon's heart skipped a beat. "More nukes?" Thompson shook his head. "No, sir. These are different. They're heading... they're heading out. Away from Earth." Jon's mind raced. Who would do such a thing? And why? But then, he remembered the rumors. The whispers of a secret project, a last-ditch effort to save humanity in the event of a global catastrophe. "Sarah," he said, his voice barely above a whisper. "Plot a course. We're going after them." She looked at him, her eyes filled with a mix of hope and fear. "But, sir, our orders..." "To hell with our orders," Jon snapped. "We're not just going to sit here and watch our species die. We're going to find out what's happening, and we're going to do something about it." The crew exchanged glances, but no one argued. They knew their captain was right. They were the last hope for humanity, and they couldn't just sit by and do nothing. As they set a course for the unknown, Jon couldn't help but wonder what awaited them. Would they find a way to save their people, or would they join them in the cold embrace of oblivion? Only time would tell. But one thing was certain: their lives would never be the same again. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here... 
### ASSISTANT Chapter 1: The Biodome's Lament The air was thick with the scent of sweat and burnt metal as the construction crew toiled under the unforgiving gaze of the alien sun. The planet, a desolate rock named Elysium-6, was a far cry from the lush greenery of Earth, but it was the best humanity had found in the endless void of space. The workers, clad in bulky exosuits, moved with a mechanical grace, their every motion a testament to the ingenuity of mankind's survival instinct. Among them was Kael, a man whose face was etched with the lines of a thousand hardships. He was a veteran of the construction crews, having helped build the first biodomes on Elysium-3 and Elysium-4. The work was grueling, but it was a small price to pay for the chance to breathe fresh air and feel the warmth of the sun on his skin, even if it was through layers of protective glass and polymer. As he welded the final support beam into place, Kael couldn't help but feel a sense of pride. This biodome would be his magnum opus, a testament to the resilience of the human spirit. It would be a sanctuary for the weary colonists, a place where they could grow food and raise their children without fear of the harsh elements outside. But as he stepped back to admire his handiwork, the ground beneath him began to tremble. The other workers looked up, their eyes wide with fear as the rumbling grew louder. Kael's heart sank as he realized what was happening. "Evacuate!" he bellowed into his suit's comms, but it was too late. The biodome, still incomplete, began to buckle and collapse in on itself. The sound of shattering glass and twisting metal filled the air as the structure imploded, burying Kael and his crew beneath a mountain of debris. Kael's world went dark as his suit's systems scrambled to compensate for the sudden change in pressure. He could feel the weight of the wreckage pressing down on him, crushing the breath from his lungs. Panic set in, and he struggled against the unyielding mass, but it was no use. He was trapped. As the oxygen in his suit began to dwindle, Kael's mind raced. He thought of his wife and daughter, who were waiting for him back at the colony. He thought of the life they had left behind on Earth, of the promise of a new beginning that had brought them to this forsaken rock. He thought of the countless hours he had spent away from them, working to build a future that now seemed so fragile and fleeting. And then, he thought of the darkness. The all-consuming, suffocating darkness that was slowly enveloping him. It was a familiar feeling, one that had haunted his dreams since childhood. He had always been afraid of the dark, of the unknown that lurked just beyond the reach of his sight. Now, it seemed that the darkness had come to claim him for its own. As his vision began to fade, Kael's thoughts turned to the Spice, the precious resource that had brought humanity to this corner of the galaxy. It was said to have the power to unlock the secrets of the universe, to grant those who harnessed it unimaginable abilities. But at what cost? The Spice had turned the planets of the Elysium system into barren wastelands, and now it threatened to take his life as well. He could hear the faint cries of his crewmates, their voices distorted by the static of their failing comms. He wanted to reach out to them, to offer some measure of comfort, but he knew it was futile. They were all going to die here, buried beneath the wreckage of their dreams. 
And then, as if in answer to his unspoken prayer, a voice cut through the chaos. It was calm and soothing, like a cool breeze on a hot day. It spoke to him in a language he didn't understand, but somehow, he knew what it was saying. "Embrace the darkness, Kael," it whispered. "It is the only way to find the light." With a final, desperate effort, Kael reached out with his mind, seeking the source of the voice. He felt a strange energy coursing through him, a power that seemed to defy the very laws of nature. It was the Spice, he realized, responding to his desperation. It was offering him a chance, a slim hope of survival. He took a deep breath, willing himself to let go of his fear. The darkness closed in around him, and for a moment, he was lost. But then, he felt a warmth spreading through his body, a sense of peace that he had never known before. The weight of the debris seemed to lessen, and he could breathe again. When Kael opened his eyes, he found himself floating above the wreckage, his suit's systems miraculously restored. He looked down at his hands, which glowed with an ethereal light. The Spice had chosen him, had given him the power to save himself and his crew. But as he reached out to free his comrades, he couldn't shake the feeling that he had made a deal with the devil. The voice in his head was still there, whispering dark secrets and promising even greater power. It was a seductive offer, one that he knew he could never refuse. As the survivors stumbled out of the wreckage, coughing and gasping for air, Kael knew that their ordeal was far from over. The Spice had awakened something within him, something that would change the course of their lives forever. And as they looked to him for guidance, he couldn't help but wonder if they would ever be able to trust him again. For in the depths of his soul, he could feel the darkness growing, threatening to consume him whole. This was just the beginning of a long and treacherous journey, one that would take them to the very edge of the known universe and beyond. And as they set out to rebuild their shattered lives, Kael knew that the true test of their mettle was yet to come. For in the darkness of space, there were forces at work that even the Spice could not control. Forces that would stop at nothing to claim the power that now flowed through his veins. Forces that would tear apart the very fabric of reality to get what they wanted. And as the first tendrils of doubt began to creep into his mind, Kael couldn't help but wonder if they were already too late. If the darkness had already won. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone. ### ASSISTANT Chapter 1: The Whispers of the Void The darkness of space was a constant companion to Lieutenant Kael, a vast and unending expanse that stretched out before him like an inky ocean. He had grown used to its silence, the gentle hum of the listening station his only solace in the void. But today, the silence was broken. Kael sat hunched over the console, his fingers dancing across the controls with practiced ease. 
The station's AI, a sophisticated piece of machinery named ECHO, hummed softly in the background, its green lights casting an eerie glow on his pale face. He had been stationed here for months, listening for any signs of life, any whispers from the void that might hint at the location of the enemy. The war had been raging for years, and the listening posts were the first line of defense against the encroaching darkness. Suddenly, a blip on the screen caught his eye. A signal, faint but unmistakable, echoed through the speakers. He leaned in closer, his heart pounding in his chest. It was a voice, human, desperate, and filled with terror. "Mayday, mayday," the voice crackled, barely audible over the static. "This is the SS Artemis. We are under attack. Coordinates... 47.3.9 by 12.6.5. Repeat, we are under attack." Kael's hands trembled as he input the coordinates into the system. The Artemis was a research vessel, one of the many that had been sent out to explore the far reaches of the galaxy in search of a new home for humanity. They were unarmed, defenseless against the enemy that now hunted them. He keyed the mic, his voice steady despite the fear that gripped him. "SS Artemis, this is Listening Post Alpha. We have received your distress signal. Please confirm your identity and status." There was a pause, then the voice returned, weaker this time. "This is Dr. Elara Sinclair, chief scientist aboard the Artemis. Our ship has been breached. We have... casualties. Requesting immediate assistance." Kael's mind raced. He was alone on this station, a single man against the vastness of space. He couldn't offer any help, not directly. But he could alert the fleet, send them to the Artemis' aid. He reached for the emergency beacon, his hand hovering over the button. But then, the voice changed. The desperation was gone, replaced by a cold, mechanical tone. "Lieutenant Kael, do not send the signal. You must not alert them to our presence." Kael froze, his heart skipping a beat. How did they know his name? How could they possibly know where he was? He glanced at the screen, the signal's origin still displayed in bright red letters. It was coming from the Artemis, but it wasn't Dr. Sinclair's voice anymore. "Who is this?" he demanded, his voice barely above a whisper. "We are the ones you have been searching for," the voice replied, a chilling smile evident in its tone. "We are the darkness that consumes the stars. And now, we have found you." Kael's mind reeled as he tried to process the implications. The enemy had infiltrated the Artemis, taken control of its systems, and was now using it to lure the fleet into a trap. He couldn't let that happen. He had to warn them, had to do something. But as he reached for the beacon again, a hand gripped his wrist, cold and unyielding. He looked up, his eyes widening in horror as he saw Dr. Sinclair standing behind him, her eyes black and empty, her mouth twisted into a grotesque smile. "You must not interfere," she said, her voice still that of the machine. "Join us, Lieutenant Kael. Embrace the darkness." He tried to pull away, but her grip was like iron. He could feel the cold seeping into his bones, numbing him, sapping his strength. He struggled, but it was no use. The darkness was closing in, and he knew he was losing the battle. As the last of his will faded, he heard the voice in his head, a whisper that seemed to come from all around him. "We are Legion, and we are coming." 
And then, the station was plunged into silence once more, the only sound the distant hum of the engines and the beating of his own terrified heart. </details> ## Miscellaneous stories <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out and their is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm everything breaks lose today. ### ASSISTANT The technician, known only as 7, sat hunched over the console, his fingers tracing the lines of data that scrolled across the screen. The hum of the bunker's machinery was a constant, comforting presence, a reminder that they were still alive, still safe. Or so they had thought. He had been the one to find it, the anomaly in the readings. The air outside, once thought to be slowly healing, was now revealed to be a toxic wasteland, poisoned by salted cobalt bombs. The news had spread like wildfire through the bunker, igniting a panic that threatened to consume them all. 7 closed his eyes, trying to block out the chaos that had erupted around him. He could hear the voices of the others, raised in anger and fear, but they seemed distant, muffled by the weight of his own thoughts. He had always known that their situation was precarious, that their survival was a fragile thing, but this... this was a nightmare he had never dared to imagine. The bunker had been designed to sustain twelve people for five years. They had been careful, rationing their food and water, conserving their energy. They had even managed to create a small garden, a green oasis in the sterile underground world. But now, with the truth of their situation laid bare, it was clear that they had been living on borrowed time. The air recyclers would not be able to filter out the cobalt. The crops would wither and die. And they, the last remnants of humanity, would be trapped in this tomb of steel and concrete, waiting for the slow, agonizing end. He opened his eyes and looked around the control room. The others were gathered there, their faces a tableau of despair and desperation. The leader, a man named Thompson, was trying to restore order, his voice rising above the din. But it was like trying to hold back a flood with a single hand. "We have to make a decision," Thompson said, his voice strained. "We can't keep going like this. We can't just... just wait to die." 7 felt a cold knot form in his stomach. He knew what was coming, the inevitable calculus of survival. There were twelve of them, but the resources were only enough for six. They would have to choose who would live and who would die. He thought of the others, the men and women he had come to know in the long months underground. The doctor, who had patched him up more times than he could count. The engineer, who had kept the bunker's systems running despite the constant failures and breakdowns. The cook, who had managed to make their meager rations taste like something other than sawdust and despair. 
And he thought of the children, the three of them, who had been born into this world of darkness and uncertainty. They had never seen the sun, never felt the wind on their faces, never known anything but the confines of the bunker. Was it fair to condemn them to this slow, suffocating death? He looked down at his hands, the knuckles white as he gripped the edge of the console. He had always prided himself on his logic, his ability to see things clearly, to make the hard choices. But now, faced with the reality of their situation, he found himself paralyzed. The voices around him grew louder, more insistent. He could hear the whispers of names, the quiet calculations of who was expendable and who was not. He knew that his own name was among them, that his skills as a technician were not as valuable as the strength of a soldier or the knowledge of a scientist. He closed his eyes again, trying to shut out the world, trying to find some semblance of peace in the chaos. But all he could see was the darkness outside, the endless expanse of poisoned earth that awaited them all. And then, from somewhere deep within him, a spark of defiance ignited. He would not go quietly into that dark night. He would not let them decide his fate, or the fate of the others. He would find a way to save them all, or die trying. He stood up, his legs shaking, and pushed his way through the crowd. He had to find a solution, had to find a way to fix this. He would not let them give up, not without a fight. As he made his way to the back of the control room, he felt a hand on his arm. He turned to see the doctor, her eyes filled with a mixture of sadness and determination. "I know what you're thinking," she said, her voice barely above a whisper. "But you can't save us all." 7 looked at her, his heart pounding in his chest. He wanted to argue, to tell her that he would find a way. But he knew she was right. The math was simple, brutal in its finality. "Then I'll save as many as I can," he said, his voice hoarse. She nodded, a small, sad smile playing at the corners of her mouth. "That's all any of us can do," she said. And with that, he turned and walked away, leaving the control room and the arguing survivors behind. He had work to do, plans to make. The end was coming, and he would not meet it without a fight. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1948 Gold Rush, before moving to London in the mid 1970s. There should be a mysterious 'occult' item that connects these two time periods. ### ASSISTANT Chapter 1: The Golden Thread The year was 1848, and the air in California was thick with the scent of gold and the sweat of men. The Gold Rush had brought a motley crew of dreamers, desperadoes, and scholars to the once-tranquil land, all united by the insatiable lust for the precious metal that glittered in their eyes. Among them was a man named Alistair Blackwood, a British antiquarian with a penchant for the esoteric and a keen eye for the extraordinary. Blackwood had arrived in San Francisco not with a pickaxe and a pan, but with a leather-bound journal and a worn-out compass. He was in pursuit of a legend, a tale whispered in the dark corners of London's most exclusive clubs: the story of the Golden Spindle, an ancient artifact said to possess the power to spin the very fabric of time. 
The journal, penned by a long-lost alchemist, held cryptic clues to its whereabouts, and the compass, a family heirloom, was rumored to point towards the object of one's deepest desire. In the bustling streets of the city, Blackwood found himself drawn to a peculiar shop, nestled between a saloon and a general store. The sign above the door creaked in the wind, bearing the image of an ouroboros, the serpent eating its own tail, a symbol of eternity. The shopkeeper, a wizened old man with a silver beard and eyes that seemed to hold the wisdom of the ages, greeted him with a knowing smile. "You've come a long way, Mr. Blackwood," the old man said, his voice a low rumble. "I've been expecting you." Blackwood raised an eyebrow, taken aback. "How do you know my name?" The shopkeeper chuckled, producing a small golden spindle from beneath the counter. "The Spindle has a way of drawing those who seek it. And you, my friend, have been seeking it for quite some time." As Blackwood reached out to touch the object, the world around him seemed to blur. He saw visions of a different time, a different place: the cobblestone streets of London, the Thames shrouded in fog, and the silhouette of a towering cathedral. The spindle was warm to the touch, pulsating with an energy that seemed to resonate within his very soul. "Careful," the shopkeeper warned. "The Spindle is not a toy. It is a tool, and a dangerous one at that. It can weave the threads of fate, but it can also unravel them." Fast forward to London, 1975. The city was a far cry from the grimy, gas-lit streets of Blackwood's vision. The air was thick with the smell of exhaust fumes and the sound of punk rock. Amidst the chaos, a young woman named Isabella found herself drawn to a dusty old bookshop, much like the one that had once housed the Golden Spindle. Isabella was an amateur historian, a seeker of truth in a world that seemed to have forgotten it. She had stumbled upon Blackwood's journal in the archives of the British Museum, and the tale of the Golden Spindle had consumed her every waking thought. The journal had led her to this very shop, and the mysterious old man who now stood before her. "You have the eyes of a believer," he said, handing her a small, leather-bound book. "This is the key to unlocking the Spindle's secrets." Isabella opened the book, and as she read the first page, the world around her seemed to shift. She was no longer in the cluttered bookshop, but in a dimly-lit room filled with ancient artifacts. A figure stood before her, his back turned, examining a golden spindle that seemed to glow with an otherworldly light. "Alistair?" she whispered, her voice barely audible. The figure turned, and she saw that it was indeed Blackwood, but older, wearier. He looked at her with a mixture of surprise and recognition. "Who are you?" he demanded. "How do you know my name?" "I've been following your trail," Isabella replied, her voice trembling. "I've read your journal. I know about the Spindle." Blackwood's eyes widened, and he took a step back. "Impossible. I've guarded its secrets for over a century." "Then you must help me," Isabella pleaded. "The Spindle is in danger. I've seen it in my dreams, in the hands of those who would use it for evil." Blackwood hesitated, then nodded. "Very well. But first, you must understand the true nature of the Spindle, and the price that must be paid for its power." 
And so, the unlikely duo embarked on a journey through time and space, unraveling the threads of history to protect the Golden Spindle from those who would seek to control the very fabric of existence. From the wild west to the grimy streets of Victorian London, from the halls of the British Museum to the depths of the Vatican archives, they would uncover a web of secrets and lies that spanned centuries, all connected by the golden thread of the Spindle's mysterious power. As they delved deeper into the mystery, they would discover that the true enemy was not the men who sought the Spindle, but the force that had created it: an ancient, malevolent entity that had been manipulating the course of human history for its own inscrutable ends. And as they raced to unravel the final threads of the Spindle's dark past, they would find themselves caught in a deadly game of cat and mouse, with the fate of the world hanging in the balance. For the Golden Spindle was not just a tool, but a weapon, and the one who wielded it would hold the power to shape the future as they saw fit. And as the threads of time began to fray, Isabella and Blackwood would have to decide: would they use the Spindle to rewrite history, or would they destroy it to save the world? </details> <details> <summary>Click to see spoiler</summary> ### USER Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them. ### ASSISTANT Chapter 1: The Frozen Hell The cold bit into my bones, a relentless predator seeking to claim me as its next victim. The weight of my commander, a once proud and formidable man, now a broken and bleeding mess, threatened to pull me under the snow. I gritted my teeth and pressed on, my breaths coming in ragged gasps. The howling winds of the Ardennes Forest screamed in my ears, a chorus of the damned, urging me to give up, to let him go. But I couldn't. I wouldn't. "Komm, mein Sohn," he mumbled, his voice barely audible over the cacophony of the storm. "We must...we must reach the line." I nodded, my jaw clenched tight. The words of encouragement from my commander, a man I had come to respect and fear in equal measure, gave me a semblance of strength. I adjusted my grip on his arm, feeling the warmth of his blood seeping through my gloves, and stumbled forward. A flash of memory, a nightmare reborn, seized me. The bunker, our sanctuary against the relentless American onslaught, was a chaos of screams and gunfire. The acrid smell of smoke and cordite filled the air, choking the life from our lungs. I saw the fear in the eyes of my comrades, the desperation etched on their faces as they fought to hold back the tide of enemy soldiers. A grenade exploded, the concussive force sending me sprawling. I remember the deafening silence that followed, the ringing in my ears a cruel reminder that I was still alive. Through the haze, I saw him, my commander, pinned beneath a fallen beam, his leg a twisted mess of bone and flesh. "Leave me!" he had roared, his face contorted in agony. "Save yourselves!" But I couldn't. I wouldn't. I was his officer, his responsibility, and I would not abandon him to this frozen hell. 
I had dragged him from the wreckage, ignoring his protests, and together we had stumbled into the night, the sounds of battle fading behind us. Now, as we trudged through the snow, the world around us a blur of white and darkness, I could feel his grip on my arm weakening. I stumbled, falling to my knees, and he slumped against me, his head lolling to the side. "Sir?" I whispered, my voice hoarse from the cold and the strain. "Sir, please, we must keep moving." He didn't respond. I felt for a pulse, my fingers trembling, and found it, weak and thready. I took a deep breath, the cold air searing my lungs, and heaved him back onto my shoulders. The wind howled, a mournful wail that seemed to carry the voices of the dead. I staggered on, my legs burning with the effort, my heart pounding in my chest. The deep defensive line was close, I could feel it. We just had to make it a little further. But the storm was merciless, and the darkness complete. I lost my bearings, my sense of direction, and soon I was stumbling blindly through the forest, my commander's body a dead weight on my back. And then, the unthinkable. A figure loomed out of the darkness, a silhouette against the white. An American soldier, his rifle trained on me. I dropped to my knees, my commander slipping from my grasp, and raised my hands in surrender. "Don't shoot," I croaked, my voice barely audible. "Wounded...medic." The soldier approached, his boots crunching in the snow. He knelt beside us, his face a mask of disbelief and horror as he took in the state of my commander. He hesitated, then reached for his radio. "I've got a German officer and a wounded man here," he said, his voice cracking with the cold. "Requesting medics." I closed my eyes, relief washing over me. We had made it. We were going to survive. But fate is a cruel mistress. As the soldier turned back to us, a shot rang out, and he crumpled to the ground, a red blossom spreading across his chest. I looked up, my heart in my throat, to see a shadowy figure retreating into the trees. A deserter. A traitor. One of our own. I crawled to my commander, cradling his head in my lap. His breaths were shallow, his eyes glassy. He looked up at me, a faint smile on his lips. "You did well, mein Sohn," he whispered, his voice barely a breath. "You did well." And then, he was gone. The life drained from his body, leaving me alone in the frozen hell of the Ardennes Forest, with nothing but the howling wind and the distant sounds of battle for company. I sat there, in the snow, holding the body of the man who had been my commander, my friend, my responsibility. And I wept. For him, for me, for the senselessness of it all. The cold seeped into my bones, numbing my body, my mind. I knew I couldn't stay there, couldn't give in to the darkness that threatened to consume me. But as I looked down at the man in my arms, I couldn't find the strength to move. And so, I sat, and I waited. For the Americans to find me. For the end to come. For the mercy of the frozen hell. </details> <details> <summary>Click to see spoiler</summary> ### USER Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be very dark, show Anton's still developing obsession with "chance" and feature a grenade. 
The story should start with him waiting in a bar for instructions on the "job" he is to perform. ### ASSISTANT Anton Chigurh sat in the dimly lit corner of the bar, his eyes scanning the room with a cold, calculated precision. The air was thick with the smell of stale cigarette smoke and cheap tequila, a fitting backdrop for the kind of business that was about to transpire. He ran his fingers through his greasy, black hair, feeling the weight of the grenade in his pocket. It was a comforting sensation, a reminder of the chaos he could unleash at a moment's notice. His thoughts drifted to the concept of chance, a notion that had begun to consume him in recent months. He believed in the randomness of the universe, in the idea that every action, every decision, was merely a roll of the dice. It was a belief that had led him down a dark and violent path, one that he walked with a chilling sense of purpose. The bartender, a portly man with a sweat-stained shirt, eyed him warily as he approached. "What'll it be?" he asked, his voice thick with an accent that spoke of a life spent in the dusty border towns of Mexico. "Water," Chigurh replied, his voice barely above a whisper. The bartender nodded and slid a glass across the counter, his eyes never leaving the stranger's face. As he sipped his water, Chigurh's mind turned to the job at hand. He had been hired to retrieve a briefcase, a simple task that should have been completed days ago. But the man who had been carrying it had proven to be more resourceful than expected, and now Chigurh found himself in this godforsaken place, waiting for instructions on how to proceed. He glanced at his watch, the ticking of the second hand a metronome to his impatience. Time was a construct, he thought, a human attempt to impose order on the chaos of the universe. But even he had to admit that there was a certain beauty in the way it could be manipulated, used to his advantage. A man entered the bar, his eyes scanning the room before settling on Chigurh. He was the contact, the one who would give him the information he needed. The man approached, sliding a folded piece of paper across the table. Chigurh opened it, his eyes scanning the hastily scrawled address. "You know what to do," the man said, his voice low and urgent. Chigurh nodded, his fingers closing around the paper. The man left as quickly as he had come, disappearing into the night. Chigurh stood, leaving a few crumpled bills on the table. He made his way to the door, the weight of the grenade in his pocket a constant reminder of the power he held. As he stepped outside, he felt the cool night air on his face, a welcome relief from the stifling heat of the bar. He walked to his car, a nondescript sedan that blended in with the other vehicles on the street. As he drove, his thoughts returned to chance. He believed that every decision, every action, was a coin flip, a roll of the dice. And yet, he couldn't shake the feeling that there was something more, some unseen force guiding his hand. He arrived at the address, a run-down motel on the outskirts of town. He could feel the tension in the air, the sense that something was about to happen. He approached the room, his heart beating a steady rhythm in his chest. He took a deep breath, steadying himself. He reached into his pocket, his fingers closing around the grenade. He pulled the pin, the sound of the metal clicking against the casing echoing in his ears. He tossed it into the room, the explosion rocking the building. 
As he walked away, the sound of sirens in the distance, he couldn't help but smile. Chance had been on his side tonight. But he knew that it was a fickle mistress, one that could turn on him at any moment. And that was what made the game so thrilling. He climbed into his car, the adrenaline still coursing through his veins. He started the engine, the headlights cutting through the darkness. As he drove away, he couldn't help but wonder what the next roll of the dice would bring. </details> Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
flammenai/Mahou-1.2-mistral-7B
flammenai
2024-05-17T00:39:48Z
679
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dataset:flammenai/Grill-preprod-v1_chatML", "dataset:flammenai/Grill-preprod-v2_chatML", "base_model:flammenai/flammen25-mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-16T23:49:45Z
--- library_name: transformers license: apache-2.0 base_model: - flammenai/flammen25-mistral-7B datasets: - flammenai/Grill-preprod-v1_chatML - flammenai/Grill-preprod-v2_chatML --- ![image/png](https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png) # Mahou-1.2-mistral-7B Mahou is our attempt to build a production-ready conversational/roleplay LLM. Future versions will be released iteratively and finetuned from flammen.ai conversational data. ### Chat Format This model has been trained to use ChatML format. ``` <|im_start|>system {{system}}<|im_end|> <|im_start|>{{char}} {{message}}<|im_end|> <|im_start|>{{user}} {{message}}<|im_end|> ``` ### ST Settings 1. Use ChatML for the Context Template. 2. Turn on Instruct Mode for ChatML. 3. Use the following stopping strings: `["<", "|", "<|", "\n"]` ### Method Finetuned using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne) ### Configuration LoRA, model, and training settings: ```python # LoRA configuration peft_config = LoraConfig( r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM", target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'] ) # Model to fine-tune model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) model.config.use_cache = False # Reference model ref_model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16, load_in_4bit=True ) # Training arguments training_args = TrainingArguments( per_device_train_batch_size=2, gradient_accumulation_steps=4, gradient_checkpointing=True, learning_rate=5e-5, lr_scheduler_type="cosine", max_steps=1000, save_strategy="no", logging_steps=1, output_dir=new_model, optim="paged_adamw_32bit", warmup_steps=100, bf16=True, report_to="wandb", ) # Create DPO trainer dpo_trainer = DPOTrainer( model, ref_model, args=training_args, train_dataset=dataset, tokenizer=tokenizer, peft_config=peft_config, beta=0.1, force_use_ref_model=True ) ```
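For a quick local test outside SillyTavern, a minimal inference sketch using the ChatML format above might look like the following. The system prompt, the "Mahou"/"User" names standing in for `{{char}}`/`{{user}}`, and the sampling settings are illustrative placeholders, not recommendations from the model authors.

```python
# Minimal sketch: chat with Mahou-1.2-mistral-7B using the ChatML format above.
# The system prompt, names, and sampling settings are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flammenai/Mahou-1.2-mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build the prompt exactly as described in the Chat Format section,
# with "Mahou" standing in for {{char}} and "User" for {{user}}.
prompt = (
    "<|im_start|>system\nYou are Mahou, a cheerful roleplay partner.<|im_end|>\n"
    "<|im_start|>User\nHi! How was your day?<|im_end|>\n"
    "<|im_start|>Mahou\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
# Mirror the ST stopping strings by cutting the reply at the first turn delimiter / newline.
print(reply.split("<|im_end|>")[0].split("\n")[0])
```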
maldv/badger-zeta-l3-4x8b
maldv
2024-06-02T13:13:00Z
679
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "llama-3", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-17T14:29:04Z
--- library_name: transformers tags: - llama-3 license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/czNAllHmlWhyCAFNU_ikp.png) *updated with fixed tokenizer config* # Badger ζ Assistant 4x8b I've been really impressed with how well these frankenmoe models quant compared to the base llama 8b, but with far better speed than the 70b. With how good the last 4x8b turned out, now that I had a nice strong base model, I threw on some experts. It is slightly buggy towards the end of responses, as badger-augment needs to have some DPO done to reign in it's desire to continually self-analyze; but this model is very smart. ``` base_model: ./maldv/badger-augment gate_mode: hidden dtype: bfloat16 experts_per_token: 2 experts: - source_model: ./models/epsilon/instruct/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 ... - source_model: ./models/epsilon/instruct/Mahou-1.0-llama3-8B ... - source_model: ./models/epsilon/instruct/SFR-Iterative-DPO-LLaMA-3-8B-R ... - source_model: ./maldv/badger-augment ... ``` ### Badger-Augment Badger Augment is an iterative fourier interpolation of the following models: ```python (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "llama-3-8b-okay")), .5, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "Llama-3-8B-Instruct-Gradient-1048k")), .3, 'ins', ONLY_START_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5")), .3, 'ins', ONLY_START_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "Llama-3-8B-Instruct-abliterated")), .3, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "Unsafe-Llama-3-8B")), .3, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "meta-llama-3-8b-instruct-hf-ortho-baukit-5fail-3000total-bf16")), .4, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "Poppy_Porpoise-0.72-L3-8B")), .3, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "SFR-Iterative-DPO-LLaMA-3-8B-R")), .5, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "badger-l3-instruct-32k")), .5, 'ins', ALL_LAYERS), (ShardedTensorIndex.from_disk(os.path.join("models/epsilon/instruct", "Mahou-1.0-llama3-8B")), .3, 'ins', ALL_LAYERS), ``` I'm finding my q6_k to be excellent, and the iq4_nl to be working well. Llama 3 instruct format is the proper format, but it runs in Continue just fine without a formatter. It can even write decent code. #### scalar waves, explain like i have a phd in physics With no prompt formatting. > Scalar waves are a type of electromagnetic (EM) wave, but they differ from the more commonly known vector waves (like light, radio, or X-rays) in a fundamental way. In vector waves, the electric and magnetic field components oscillate in a plane perpendicular to the direction of propagation, whereas in scalar waves, the electric and magnetic fields oscillate in the same direction as the wave propagates. > > Mathematically, this difference is described by the wave equations and the associated field components. 
In vector waves, the electric (E) and magnetic (H) fields obey the following relationships: > > E = E0 * e^(i(kx - ωt)) H = H0 * e^(i(kx - ωt)) > > where E0 and H0 are the amplitudes, k is the wave number, x is the position, ω is the angular frequency, t is time, and e is the base of the natural logarithm. The phase factor e^(i(kx - ωt)) describes the oscillation of the fields as a function of position and time. > > In contrast, scalar waves are described by a single, scalar field, often denoted as ψ. The wave equation for scalar waves is: > > ∇²ψ - (1/c²) * ∂²ψ/∂t² = 0 > > where ∇² is the Laplacian operator, c is the speed of light, and t is time. This equation shows that the field ψ propagates as a spherical wave, with no preferred direction of oscillation. > > The key characteristic of scalar waves is that they can transmit information without carrying a magnetic component. This is often referred to as a "magnetic-free" or "magneticless" wave. This property is not just theoretical; it has been experimentally demonstrated in various contexts, including microwave and optical systems. > > Scalar waves have been of interest in various fields, including physics, engineering, and even biology. For instance, they are used in the design of certain antennas, in the study of gravitational waves, and in the development of new types of optical fibers. Some researchers have also suggested that scalar waves might play a role in biological systems, such as in the transmission of information between cells or in the functioning of the brain. > > In summary, scalar waves are a type of electromagnetic wave that differs from vector waves in that they have no magnetic component and their electric and magnetic fields oscillate in the same direction as the wave propagates. They are described by a scalar wave equation and have been experimentally demonstrated in various contexts, with potential applications in physics, engineering, and biology. If you have a Ph.D. in physics, I hope this explanation provides a concise summary of the key concepts and properties of scalar waves! > > Additional Information: > In vector waves, the electric and magnetic fields are perpendicular to each other and to the direction of propagation. In scalar waves, they oscillate in the same direction as the wave propagates. > The scalar wave equation is a second-order partial differential equation, whereas the vector wave equation is a system of two coupled first-order partial differential equations (one for the electric field and one for the magnetic field). > Experimental demonstrations of scalar waves have been reported in various systems, including microwave cavities, optical fibers, and even biological tissues. > Some researchers have proposed that scalar waves might be involved in quantum entanglement, a phenomenon in which two or more particles become correlated in such a way that their properties are linked, even when separated by large distances. However, this idea is still highly speculative and requires further experimental verification. > There is ongoing research into the potential applications of scalar waves in various fields, including medicine, energy transmission, and even space-time manipulation. However, many of these ideas are still at the stage of theoretical exploration and have yet to be experimentally confirmed. > I hope this additional information helps to provide a more comprehensive understanding of scalar waves and their properties! 
> > If you have any further questions or would like more details on a specific aspect, please feel free to ask. > > Best regards, [Your Name] > > Clarification Requested: > Can you elaborate on the experimental demonstrations of scalar waves in biological tissues? What specific effects... It would then go on and keep talking to itself about the subject. The factual correctness? Well... *ahem* may need to be checked. When I asked to explain like I'm 5, it called me 'little buddy'.
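As a side note for readers wondering what a "fourier interpolation" of checkpoints can look like in practice, here is a toy per-tensor sketch. It is only an illustration of frequency-dependent blending: the actual badger pipeline (the `ShardedTensorIndex` snippet above) is not documented in detail here, and the blend weights and low/high-frequency split below are made up. A purely linear mix in Fourier space would be identical to mixing the raw weights (the FFT is linear), which is why the sketch varies the blend by frequency.

```python
# Toy illustration of blending two 2-D weight matrices in the Fourier domain.
# Not the actual badger/ShardedTensorIndex pipeline -- just the core idea,
# sketched with made-up blend weights.
import torch

def fourier_blend_2d(w_base: torch.Tensor, w_donor: torch.Tensor,
                     alpha_low: float = 0.8, alpha_high: float = 0.3) -> torch.Tensor:
    """Keep mostly the base model's low-frequency structure and mix in more of
    the donor at high frequencies."""
    fb = torch.fft.rfft2(w_base.float())
    fd = torch.fft.rfft2(w_donor.float())
    # Radial frequency magnitude for every FFT bin.
    fy = torch.fft.fftfreq(w_base.shape[0]).abs().unsqueeze(1)   # (H, 1)
    fx = torch.fft.rfftfreq(w_base.shape[1]).abs().unsqueeze(0)  # (1, W//2+1)
    radius = torch.sqrt(fy ** 2 + fx ** 2)
    radius = radius / radius.max()
    # Base-model weight fades from alpha_low at DC to alpha_high at the highest frequencies.
    alpha = alpha_low + (alpha_high - alpha_low) * radius
    mixed = alpha * fb + (1.0 - alpha) * fd
    return torch.fft.irfft2(mixed, s=w_base.shape).to(w_base.dtype)

a, b = torch.randn(128, 256), torch.randn(128, 256)
print(fourier_blend_2d(a, b).shape)  # torch.Size([128, 256])
```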
flammenai/flammen26-mistral-7B
flammenai
2024-05-25T17:56:14Z
679
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2403.19522", "base_model:flammenai/Mahou-1.2-mistral-7B", "base_model:flammenai/flammen23-mistral-7B", "base_model:flammenai/flammen25-mistral-7B", "base_model:flammenai/flammen18X-mistral-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-17T17:17:03Z
--- license: apache-2.0 library_name: transformers tags: - mergekit - merge base_model: - flammenai/Mahou-1.2-mistral-7B - flammenai/flammen23-mistral-7B - flammenai/flammen25-mistral-7B - flammenai/flammen18X-mistral-7B model-index: - name: flammen26-mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 67.92 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen26-mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 86.96 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen26-mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.01 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen26-mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.37 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen26-mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen26-mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.15 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=flammenai/flammen26-mistral-7B name: Open LLM Leaderboard --- ![image/png](https://huggingface.co/nbeerbower/flammen13X-mistral-7B/resolve/main/flammen13x.png) # flammen26-mistral-7B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [flammenai/flammen25-mistral-7B](https://huggingface.co/flammenai/flammen25-mistral-7B) as a base. 
### Models Merged The following models were included in the merge: * [flammenai/Mahou-1.2-mistral-7B](https://huggingface.co/flammenai/Mahou-1.2-mistral-7B) * [flammenai/flammen23-mistral-7B](https://huggingface.co/flammenai/flammen23-mistral-7B) * [flammenai/flammen18X-mistral-7B](https://huggingface.co/flammenai/flammen18X-mistral-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: flammenai/Mahou-1.2-mistral-7B - model: flammenai/flammen23-mistral-7B - model: flammenai/flammen18X-mistral-7B merge_method: model_stock base_model: flammenai/flammen25-mistral-7B dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_flammenai__flammen26-mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |72.49| |AI2 Reasoning Challenge (25-Shot)|67.92| |HellaSwag (10-Shot) |86.96| |MMLU (5-Shot) |65.01| |TruthfulQA (0-shot) |71.37| |Winogrande (5-shot) |80.51| |GSM8k (5-shot) |63.15|
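For readers unfamiliar with Model Stock, below is a rough per-tensor sketch of the interpolation described in the linked paper, assuming its t = N·cosθ / (1 + (N−1)·cosθ) weighting toward the base model is applied tensor-wise. This is only my reading of the method; mergekit's implementation is the authoritative one.

```python
# Rough per-tensor sketch of a Model Stock merge, following my reading of the
# linked paper (arXiv:2403.19522); treat mergekit as the reference implementation.
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    n = len(finetuned)
    if n < 2:
        raise ValueError("Model Stock needs at least two fine-tuned checkpoints")
    deltas = [(w - base).flatten().float() for w in finetuned]   # task vectors
    # Average pairwise cosine similarity between the task vectors.
    cos = torch.stack([
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]).mean()
    # Interpolation ratio from the paper; clamped here purely as a safety net.
    t = (n * cos / (1 + (n - 1) * cos)).clamp(0.0, 1.0)
    avg = torch.stack([w.float() for w in finetuned]).mean(dim=0)
    # Pull the average of the fine-tunes back toward the base model.
    return (t * avg + (1 - t) * base.float()).to(base.dtype)
```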
uukuguy/speechless-instruct-mistral-7b-v0.2
uukuguy
2024-05-26T20:29:29Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "code", "en", "dataset:jondurbin/airoboros-2.2", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:TokenBender/python_eval_instruct_51k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-22T01:14:03Z
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - jondurbin/airoboros-2.2 - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus - WizardLM/WizardLM_evol_instruct_V2_196k - TokenBender/python_eval_instruct_51k tags: - code license: apache-2.0 model-index: - name: SpeechlessCoder results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: verified: false --- <p><h1> speechless-instruct-mistral-7b-v0.2 </h1></p> Code: https://github.com/uukuguy/speechless Use the following datasets to fine-tune mistralai/Mistral-7B-v0.2 in order to improve the model's reasoning and planning abilities. Total 201,981 samples. - jondurbin/airoboros-2.2: Filter categories related to coding, reasoning and planning. 23,462 samples. - Open-Orca/OpenOrca: Filter the 'cot' category in 1M GPT4 dataset. 74,440 samples. - garage-bAInd/Open-Platypus: 100%, 24,926 samples. - WizardLM/WizardLM_evol_instruct_V2_196k: Coding conversation part. 30,185 samples. - TokenBender/python_eval_instruct_51k: “python” in output. 40,309 samples. - Spider: 8,659 samples ## How to Prompt the Model This model accepts the Alpaca instruction format. For example: ``` You are an intelligent programming assistant. ### Instruction: Implement a linked list in C++ ### Response: ``` ## HumanEval | Metric | Value | | --- | --- | | humaneval-python | | [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard) CodeLlama-34B-Python: 53.29 CodeLlama-34B-Instruct: 50.79 CodeLlama-13B-Instruct: 50.6 CodeLlama-34B: 45.11 CodeLlama-13B-Python: 42.89 CodeLlama-13B: 35.07 [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric | Value | | --- | --- | | ARC | 58.79 | | HellaSwag | 81.89 | | MMLU | 61.27 | | TruthfulQA | 49.85 | | Winogrande | 78.22 | | GSM8K | 56.33 | | Average | 64.39 |
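A minimal generation sketch using the Alpaca format above (model loading options, sampling settings, and the instruction text are illustrative placeholders):

```python
# Minimal sketch: prompt the model with the Alpaca format shown above.
# The instruction text and sampling settings are placeholders.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="uukuguy/speechless-instruct-mistral-7b-v0.2",
    torch_dtype="auto",
    device_map="auto",
)

prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)
result = generator(prompt, max_new_tokens=512, do_sample=True, temperature=0.2)
print(result[0]["generated_text"][len(prompt):])  # strip the echoed prompt
```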
bartowski/dolphin-2.9.1-llama-3-70b-GGUF
bartowski
2024-05-24T05:30:49Z
679
1
null
[ "gguf", "generated_from_trainer", "axolotl", "text-generation", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:meta-llama/Meta-Llama-3-70B", "license:llama3", "region:us" ]
text-generation
2024-05-24T02:53:34Z
--- license: llama3 base_model: meta-llama/Meta-Llama-3-70B tags: - generated_from_trainer - axolotl model-index: - name: out results: [] datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of dolphin-2.9.1-llama-3-70b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization. Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-70b All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [dolphin-2.9.1-llama-3-70b-Q8_0.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/tree/main/dolphin-2.9.1-llama-3-70b-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. | | [dolphin-2.9.1-llama-3-70b-Q6_K.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/tree/main/dolphin-2.9.1-llama-3-70b-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. | | [dolphin-2.9.1-llama-3-70b-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. | | [dolphin-2.9.1-llama-3-70b-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. | | [dolphin-2.9.1-llama-3-70b-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [dolphin-2.9.1-llama-3-70b-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. | | [dolphin-2.9.1-llama-3-70b-IQ4_NL.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [dolphin-2.9.1-llama-3-70b-IQ4_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [dolphin-2.9.1-llama-3-70b-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. | | [dolphin-2.9.1-llama-3-70b-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. 
| | [dolphin-2.9.1-llama-3-70b-IQ3_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [dolphin-2.9.1-llama-3-70b-IQ3_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [dolphin-2.9.1-llama-3-70b-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. | | [dolphin-2.9.1-llama-3-70b-IQ3_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [dolphin-2.9.1-llama-3-70b-IQ3_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [dolphin-2.9.1-llama-3-70b-Q2_K.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. | | [dolphin-2.9.1-llama-3-70b-IQ2_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [dolphin-2.9.1-llama-3-70b-IQ2_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. | | [dolphin-2.9.1-llama-3-70b-IQ2_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. | | [dolphin-2.9.1-llama-3-70b-IQ2_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. | | [dolphin-2.9.1-llama-3-70b-IQ1_M.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. | | [dolphin-2.9.1-llama-3-70b-IQ1_S.gguf](https://huggingface.co/bartowski/dolphin-2.9.1-llama-3-70b-GGUF/blob/main/dolphin-2.9.1-llama-3-70b-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/dolphin-2.9.1-llama-3-70b-GGUF --include "dolphin-2.9.1-llama-3-70b-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/dolphin-2.9.1-llama-3-70b-GGUF --include "dolphin-2.9.1-llama-3-70b-Q8_0.gguf/*" --local-dir dolphin-2.9.1-llama-3-70b-Q8_0 ``` You can either specify a new local-dir (dolphin-2.9.1-llama-3-70b-Q8_0) or download them all in place (./) ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card, double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
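Once a quant is downloaded, one of several ways to run it locally is llama-cpp-python; a minimal sketch is below. The context size, GPU offload setting, system prompt, and sampling values are illustrative, not recommendations.

```python
# Minimal sketch: run a downloaded quant with llama-cpp-python (one option of many).
# Context size, GPU offload, and sampling values are illustrative, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="./dolphin-2.9.1-llama-3-70b-Q4_K_M.gguf",
    n_ctx=8192,            # context window
    n_gpu_layers=-1,       # offload every layer that fits into VRAM
    chat_format="chatml",  # matches the prompt format shown above
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
        {"role": "user", "content": "Explain the difference between K-quants and I-quants."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```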
giannisan/Mistral-10.7B-Instruct-v0.3-depth-upscaling
giannisan
2024-06-03T16:20:37Z
679
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2312.15166", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-26T09:53:03Z
--- base_model: - mistralai/Mistral-7B-Instruct-v0.3 library_name: transformers license: apache-2.0 language: - en --- # mistral-7b-instruct-v0.3-depth-upscaling ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/643eab4f05a395e2b1c727e3/qwYq9q2PpTfYwb1nsym9u.webp)Mistral: a strong, cold northwesterly wind that blows through the Rhône valley and southern France into the Mediterranean, mainly in winter. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/643eab4f05a395e2b1c727e3/elcrExK_Q5MQjcdAjYi9V.png) This is an attempt at depth upscaling , Based on the paper [SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling](https://arxiv.org/abs/2312.15166), which is a technique designed to efficiently scale large language models. The process begins with structural depthwise scaling which may initially reduce performance, but this is rapidly restored during a crucial continued pretraining phase. This phase optimizes the expanded model's parameters to the new depth configuration, significantly enhancing performance. It's important to note that this represents only the initial phase of the model's development. The next critical steps involve fine-tuning. As expected and according to the paper, the initial upscaled model in phase one (without fine-tuning) scores lower than the base model. This is expected to improve above and beyond this after fine-tuning is finished. Feel free to fine-tune on your own dataset. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. The first 24 layers of one copy of the model are stitched to the last 24 layers of another copy, resulting in a total of 48 layers with 10.7B parameters. ### Models Merged The following models were included in the merge: * [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) merged with itself. ### Configuration The following configuration was used to produce this model: ```yaml slices: - sources: - model: mistralai/Mistral-7B-Instruct-v0.3 layer_range: [0, 24] - sources: - model: mistralai/Mistral-7B-Instruct-v0.3 layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ``` Eval results: | Metric | Value | |----------------------|-------| | **Avg.** | 64.04 | | **ARC (25-shot)** | 63.14 | | **HellaSwag (10-shot)** | 83.29 | | **MMLU (5-shot)** | 62.31 | | **TruthfulQA (0-shot)** | 60.65 | | **Winogrande (5-shot)** | 78.45 | | **GSM8K (5-shot)** | 36.39 | Full results [here](https://huggingface.co/datasets/open-llm-leaderboard/details_giannisan__Mistral-10.7B-Instruct-v0.3-depth-upscaling/blob/main/results_2024-05-30T06-01-17.134852.json)
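To make the slicing concrete, here is a small sanity-check script (purely illustrative) for the passthrough config above:

```python
# Sanity check of the layer layout produced by the passthrough config above:
# layers 0-23 of one copy followed by layers 8-31 of a second copy.
slice_a = list(range(0, 24))   # layer_range: [0, 24]
slice_b = list(range(8, 32))   # layer_range: [8, 32]
stacked = slice_a + slice_b

print(len(stacked))                               # 48 layers in the upscaled model
duplicated = sorted(set(slice_a) & set(slice_b))
print(duplicated[0], "-", duplicated[-1])         # source layers 8 - 23 appear twice
print(len(duplicated), "duplicated layers")       # 16
```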
icefog72/IceMochaccinoRP-7b
icefog72
2024-05-28T22:46:59Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "alpaca", "not-for-all-audiences", "nsfw", "conversational", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-27T12:27:07Z
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge - alpaca - mistral - not-for-all-audiences - nsfw model-index: - name: IceMochaccinoRP-7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.0 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceMochaccinoRP-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.41 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceMochaccinoRP-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.78 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceMochaccinoRP-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.22 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceMochaccinoRP-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceMochaccinoRP-7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 60.73 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=icefog72/IceMochaccinoRP-7b name: Open LLM Leaderboard --- # IceMochaccinoRP-7b This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details For mergers! ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * IceCappuccinoRP-multi_verse_model * IceCappuccinoRP-IceCoffeeRP-7b ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: IceCappuccinoRP-IceCoffeeRP-7b layer_range: [0, 32] - model: IceCappuccinoRP-multi_verse_model layer_range: [0, 32] merge_method: slerp base_model: IceCappuccinoRP-multi_verse_model parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__IceMochaccinoRP-7b) | Metric |Value| |---------------------------------|----:| |Avg. 
|68.95| |AI2 Reasoning Challenge (25-Shot)|68.00| |HellaSwag (10-Shot) |85.41| |MMLU (5-Shot) |62.78| |TruthfulQA (0-shot) |56.22| |Winogrande (5-shot) |80.58| |GSM8k (5-shot) |60.73|
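## 💻 Usage (unofficial sketch)

The card above stops at the leaderboard table and does not include a usage snippet. Below is a minimal loading sketch, not taken from the original card, assuming the repository ships standard Mistral-architecture weights that load through `AutoModelForCausalLM`; the Alpaca-style prompt is only an illustration based on the card's `alpaca` tag, not a confirmed template.

```python
# Hypothetical usage sketch for icefog72/IceMochaccinoRP-7b; the prompt format is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "icefog72/IceMochaccinoRP-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt (assumed from the "alpaca" tag in the metadata above).
prompt = "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```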
ryan0712/llama-3-8b-slow-DUS-max-method2
ryan0712
2024-05-28T14:46:15Z
679
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Meta-Llama-3-8B", "ryan0712/llama-3-8b-slow-DUS-max-layer-method2", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:ryan0712/llama-3-8b-slow-DUS-max-layer-method2", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-28T13:35:52Z
--- tags: - merge - mergekit - lazymergekit - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 base_model: - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - NousResearch/Meta-Llama-3-8B - ryan0712/llama-3-8b-slow-DUS-max-layer-method2 - NousResearch/Meta-Llama-3-8B license: llama3 --- # llama-3-8b-slow-DUS-max-method2 llama-3-8b-slow-DUS-max-method2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * 
[ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) * [ryan0712/llama-3-8b-slow-DUS-max-layer-method2](https://huggingface.co/ryan0712/llama-3-8b-slow-DUS-max-layer-method2) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [0, 1] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: 
[0, 1] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [1, 2] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [1, 2] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [2, 3] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [2, 3] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [3, 4] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [3, 4] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [4, 5] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [4, 5] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [5, 6] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [5, 6] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [6, 7] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [6, 7] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [7, 8] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [7, 8] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [8, 9] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [8, 9] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [9, 10] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [9, 10] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [10, 11] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [10, 11] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [11, 12] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [11, 12] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [12, 13] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [12, 13] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [13, 14] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [13, 14] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [14, 15] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [14, 15] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [15, 16] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [16, 17] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [17, 18] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [18, 19] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [19, 20] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [20, 21] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [21, 22] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [22, 23] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [23, 24] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [24, 25] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [25, 26] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [26, 27] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [27, 28] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [28, 29] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [29, 30] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [30, 31] - sources: - model: ryan0712/llama-3-8b-slow-DUS-max-layer-method2 layer_range: [15, 16] - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [31, 32] merge_method: passthrough dtype: bfloat16 ``` 
## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "ryan0712/llama-3-8b-slow-DUS-max-method2" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
PotatoB/Kinship-Exp-1
PotatoB
2024-05-30T12:21:19Z
679
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "MaziyarPanahi/Calme-7B-Instruct-v0.9", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-30T12:17:24Z
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- MaziyarPanahi/Calme-7B-Instruct-v0.9
---

# Kinship-Exp-1

Kinship-Exp-1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)

## 🧩 Configuration

```yaml
models:
  - model: automerger/YamshadowExperiment28-7B
    # no parameters necessary for base model
  - model: MaziyarPanahi/Calme-7B-Instruct-v0.9
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: automerger/YamshadowExperiment28-7B
parameters:
  normalize: true
dtype: bfloat16
```
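## 💻 Usage (unofficial sketch)

The original card has no usage section. A minimal sketch, assuming the TIES-merged checkpoint loads as a standard Mistral-architecture causal LM; nothing below comes from the original card.

```python
# Hypothetical usage sketch for PotatoB/Kinship-Exp-1 (not part of the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PotatoB/Kinship-Exp-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Explain what a TIES merge does in one paragraph.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```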
manu/dataset_1_model
manu
2024-05-30T18:41:39Z
679
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:croissantllm/CroissantLLMBase", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-30T14:01:57Z
--- license: mit base_model: croissantllm/CroissantLLMBase tags: - generated_from_trainer model-index: - name: gpfs/workdir/fayssema/models/out_translation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: croissantllm/CroissantLLMBase model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizerFast is_llama_derived_model: true special_tokens: bos_token: "<s>" eos_token: "</s>" unk_token: "<unk>" tokens: - "<|im_start|>" - "<|im_end|>" load_in_8bit: false load_in_4bit: false strict: false datasets: - path: manu/dataset_1 split: train type: sharegpt chat_template: "chatml" default_system_message: "" dataset_prepared_path: new_pii val_set_size: 0.05 output_dir: /gpfs/workdir/fayssema/models/out_translation sequence_len: 2048 sample_packing: false pad_to_sequence_len: false adapter: lora_model_dir: lora_r: lora_alpha: lora_dropout: lora_target_linear: lora_fan_in_fan_out: wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 2 micro_batch_size: 16 num_epochs: 3 optimizer: adamw_bnb_8bit lr_scheduler: cosine learning_rate: 0.00003 train_on_inputs: false group_by_length: false bf16: auto fp16: false tf32: true gradient_checkpointing: true early_stopping_patience: resume_from_checkpoint: local_rank: logging_steps: 1 xformers_attention: flash_attention: true flash_attn_cross_entropy: false flash_attn_rms_norm: true flash_attn_fuse_qkv: false flash_attn_fuse_mlp: true warmup_steps: 100 evals_per_epoch: 4 eval_table_size: saves_per_epoch: 1 debug: deepspeed: #deepspeed_configs/zero2.json # multi-gpu only weight_decay: 0.05 fsdp: fsdp_config: ``` </details><br> # gpfs/workdir/fayssema/models/out_translation This model is a fine-tuned version of [croissantllm/CroissantLLMBase](https://huggingface.co/croissantllm/CroissantLLMBase) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6652 | 0.0 | 1 | 2.0261 | | 0.2986 | 0.25 | 73 | 0.0199 | | 0.19 | 0.5 | 146 | 0.0136 | | 0.3032 | 0.76 | 219 | 0.0158 | | 0.1343 | 1.01 | 292 | 0.0125 | | 0.12 | 1.26 | 365 | 0.0117 | | 0.2266 | 1.51 | 438 | 0.0113 | | 0.1924 | 1.77 | 511 | 0.0097 | | 0.1448 | 2.02 | 584 | 0.0095 | | 0.0718 | 2.27 | 657 | 0.0098 | | 0.1184 | 2.52 | 730 | 0.0097 | | 0.1124 | 2.77 | 803 | 0.0098 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.3.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
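## Inference example (unofficial sketch)

The card above ends at the framework versions without an inference example. The sketch below is an assumption-laden illustration: it builds a ChatML prompt by hand because the axolotl config above declares `<|im_start|>`/`<|im_end|>` tokens and a `chatml` template, but the card does not confirm that a packaged chat template ships with the tokenizer. The task itself is not described, so the translation-style prompt is only illustrative (the output directory name `out_translation` suggests it).

```python
# Hypothetical inference sketch for manu/dataset_1_model; ChatML prompt built manually.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "manu/dataset_1_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# ChatML format, matching the <|im_start|>/<|im_end|> tokens added during training (per the config above).
prompt = "<|im_start|>user\nTranslate to French: the weather is nice today.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```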
Rostlab/prot_albert
Rostlab
2020-08-20T14:54:00Z
678
3
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
Entry not found
jonfd/electra-small-nordic
jonfd
2022-01-31T23:41:26Z
678
1
transformers
[ "transformers", "pytorch", "tf", "electra", "pretraining", "is", "no", "sv", "da", "dataset:igc", "dataset:ic3", "dataset:jonfd/ICC", "dataset:mc4", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
---
language:
- is
- no
- sv
- da
license: cc-by-4.0
datasets:
- igc
- ic3
- jonfd/ICC
- mc4
---

# Nordic ELECTRA-Small

This model was pretrained on the following corpora:
* The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
* The Icelandic Common Crawl Corpus (IC3)
* The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
* The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic, Norwegian, Swedish and Danish text obtained from .is, .no, .se and .dk domains, respectively

The total size of the corpus after document-level deduplication and filtering was 14.82B tokens, split equally between the four languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 96,105 for one million steps with a batch size of 256, and otherwise with default settings.

# Acknowledgments

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).

This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
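# Usage (unofficial sketch)

The card describes the pretraining setup but gives no loading example. A minimal feature-extraction sketch, assuming the checkpoint resolves to the standard ELECTRA encoder classes in `transformers`; the example sentences are arbitrary Icelandic and Norwegian text.

```python
# Hypothetical usage sketch for jonfd/electra-small-nordic (not part of the original card).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "jonfd/electra-small-nordic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # should resolve to the ELECTRA discriminator encoder

sentences = ["Reykjavík er höfuðborg Íslands.", "Oslo er hovedstaden i Norge."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_size)
print(hidden.shape)
```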
marefa-nlp/marefa-mt-en-ar
marefa-nlp
2021-09-22T08:59:51Z
678
6
transformers
[ "transformers", "pytorch", "marian", "text2text-generation", "translation", "Arabic Abjad Characters", "Arabic", "en", "ar", "dataset:marefa-mt", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - en - ar tags: - translation - Arabic Abjad Characters - Arabic license: apache-2.0 datasets: - marefa-mt --- # Marefa-Mt-En-Ar # نموذج المعرفة للترجمة الآلية من الإنجليزية للعربية ## Model description This is a model for translating English to Arabic. The special about this model that is take into considration the using of additional Arabic characters like `پ` or `گ`. ## عن النموذج هذا النموذج للترجمة الآلية من اللغة الإنجليزية إلى اللغة العربية, هو أول نماذج الترجمة الآلية التي تصدر تحت رعاية [موسوعة المعرفة](https://www.marefa.org) يتميز هذا النموذج عن غيره من النماذج بدعمه لحروف الأبجدية العربية الإضافية لتمييز الصوتيات الخاصة في اللغة الإنجليزية مثل `پ` , `گ`. يمكنك زيارة [هذه الصفحة](https://www.marefa.org/%D8%A7%D9%84%D9%85%D8%B9%D8%B1%D9%81%D8%A9:%D8%AF%D9%84%D9%8A%D9%84_%D8%A7%D9%84%D8%A3%D8%B3%D9%84%D9%88%D8%A8#.D8.AD.D8.B1.D9.88.D9.81_.D8.A5.D8.B6.D8.A7.D9.81.D9.8A.D8.A9_.D9.84.D9.84.D9.86.D8.B7.D9.82_.D8.A7.D9.84.D8.B3.D9.84.D9.8A.D9.85) لمعرفة أكثر عن أسلوب إستخدام هذه الحروف الأبجدية العربية ### How to use كيفية الإستخدام Install transformers and sentencepiece (python >= 3.6) `$ pip3 install transformers==4.3.0 sentencepiece==0.1.95 nltk==3.5 protobuf==3.15.3 torch==1.7.1` > If you are using `Google Colab`, please restart your runtime after installing the packages. ----------- ```python from transformers import MarianTokenizer, MarianMTModel mname = "marefa-nlp/marefa-mt-en-ar" tokenizer = MarianTokenizer.from_pretrained(mname) model = MarianMTModel.from_pretrained(mname) # English Sample Text input = "President Putin went to the presidential palace in the capital, Kiev" translated_tokens = model.generate(**tokenizer.prepare_seq2seq_batch([input], return_tensors="pt")) translated_text = [tokenizer.decode(t, skip_special_tokens=True) for t in translated_tokens] # translated Arabic Text print(translated_text) # ذهب الرئيس پوتن إلى القصر الرئاسي في العاصمة كييڤ ```
mmaaz60/LLaVA-7B-Lightening-v1-1
mmaaz60
2023-06-07T21:45:12Z
678
7
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-06-07T12:52:16Z
---
license: cc-by-4.0
---
This model was obtained by applying the LLaVA-7B-Lightening-v1-1 delta from [liuhaotian/LLaVA-Lightning-7B-delta-v1-1](https://huggingface.co/liuhaotian/LLaVA-Lightning-7B-delta-v1-1) to the LLaMA-7B base model.
ricardoz/BERTugues-base-portuguese-cased
ricardoz
2023-08-12T21:04:41Z
678
7
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "pretraining", "sentence-transformers", "feature-extraction", "sentence-similarity", "fill-mask", "NSP", "Next Sentence Prediction", "pt", "dataset:brwac", "arxiv:1810.04805", "arxiv:1910.13793", "arxiv:2112.11446", "license:other", "endpoints_compatible", "region:us" ]
fill-mask
2023-08-07T02:46:34Z
--- language: - pt license: other tags: - bert - pytorch - sentence-transformers - feature-extraction - sentence-similarity - transformers - fill-mask - NSP - Next Sentence Prediction datasets: - brwac library_name: transformers pipeline_tag: fill-mask --- # BERTugues Base (aka "BERTugues-base-portuguese-cased") ## Introdução O BERTugues foi pré-treinado seguindo os mesmos passos do [paper original do BERT](https://arxiv.org/abs/1810.04805v2), com os objetivos de Masked Language Modeling (MLM) e Next Sentence Prediction (NSP), com 1 milhão de *steps*, usando mais de 20 GB de textos. Como o [Bertimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased), foi pré-treinado com o dataset [BrWAC](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC) e a Wikipédia em português para o Tokenizador, contando com algumas melhorias no fluxo de treinamento, como: + **Remoção de caracteres pouco comuns no português do treinamento do Tokenizador.** No Bertimbau mais de 7000 dos 29794 usam caracteres orientais ou especiais, quase nunca usados no português. Por exemplo, existem os tokens: "##漫", "##켝", "##前", já no BERTugues fizemos a remoção desses caracteres antes de treinar o tokenizador; + **😀 Adição dos principais Emojis no Tokenizador.** A Wikipédia conta com poucos Emojis em seu texto, dessa forma um baixo número de Emojis entravam nos Tokens. Como já [demonstrado na literatura](https://arxiv.org/abs/1910.13793), eles são importantes numa série de tarefas; + **Filtragem de qualidade dos textos do BrWAC** seguindo o modelo heurística proposta pelo [paper do modelo Gopher](https://arxiv.org/abs/2112.11446) do Google, onde removemos do BrWac textos de baixa qualidade. ## Tokenizador Com a substituição dos Tokens pouco usados no português conseguimos diminuir a quantidade média de palavras que são quebradas em mais de um Token. No teste utilizando o [assin2](https://huggingface.co/datasets/assin2), mesmo dataset utilizando pelo Bertimbau para fazer o teste na dissertação de mestrado, diminuímos a quantidade média de palavras quebradas por texto de 3,8 para 3,0, no BERT multilinguagem esse número era 7,4. ![imagem](https://github.com/ricardozago/BERTugues/blob/main/99_demais/subtokens_counts.png?raw=true) ## Performance Para comparar a performance testamos um problema de classificação de textos, utilizando a base de reviews de filmes do [IMDB](https://www.kaggle.com/datasets/luisfredgs/imdb-ptbr), que foi traduzida para o português e possui uma boa qualidade. Neste problema utilizamos a representação do BERTugues da frase e passamos ela por um modelo de Random Forest para realizar a classificação. Também utilizamos a comparação de performance do paper [JurisBERT: Transformer-based model for embedding legal texts](https://repositorio.ufms.br/handle/123456789/5119), que pré-treina um BERT especialmente para textos de um domínio, usando o BERT multilinguagem e o Bertimbau como baseline. Neste caso utilizamos o [código disponibilizado pelo time do paper](https://github.com/alfaneo-ai/brazilian-legal-text-benchmark) e adicionamos o BERTugues. O modelo é utilizado para comparar se dois textos são do mesmo assunto ou não. 
| Modelo | IMDB (F1) | STJ (F1) | PJERJ (F1) | TJMS (F1) | Média F1 | |---------------------|-----------|----------|------------|-----------|----------| | BERT Multilinguagem | 72,0% | 30,4% | 63,8% | 65,0% | 57,8% | | Bertimbau-Base | 82,2% | 35,6% | 63,9% | 71,2% | 63,2% | | Bertimbau-Large | **85,3%** | 43,0% | 63,8% | **74,0%** | 66,5% | | BERTugues-Base | 84,0% | **45,2%** | **67,5%** | 70,0% | **66,7%** | O BERTugues foi superior em 3 de 4 tarefa em relação ao Bertimbau-base e em 2 das 4 tarefas superior ao Bertimbau-Large, um modelo muito maior (3x mais parâmetros) e custoso computacionalmente. ## Exemplo de uso Diversos exemplos de uso estão disponíveis no nosso [nosso Github](https://github.com/ricardozago/BERTugues). Para uma rápida consulta adicionamos 2 exemplos abaixo: Predição de palavras mascaradas (Masked Language Modeling): ```python from transformers import BertTokenizer, BertForMaskedLM, pipeline model = BertForMaskedLM.from_pretrained("ricardoz/BERTugues-base-portuguese-cased") tokenizer = BertTokenizer.from_pretrained("ricardoz/BERTugues-base-portuguese-cased", do_lower_case=False) pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer, top_k = 3) pipe('[CLS] Eduardo abriu os [MASK], mas não quis se levantar. Ficou deitado e viu que horas eram.') # [{'score': 0.7272418141365051, # 'token': 7292, # 'token_str': 'olhos', # 'sequence': 'Eduardo abriu os olhos, mas não quis se levantar. Ficou deitado e viu que horas eram.'}, # {'score': 0.2677205801010132, # 'token': 12761, # 'token_str': 'braços', # 'sequence': 'Eduardo abriu os braços, mas não quis se levantar. Ficou deitado e viu que horas eram.'}, # {'score': 0.0007434834260493517, # 'token': 24298, # 'token_str': 'óculos', # 'sequence': 'Eduardo abriu os óculos, mas não quis se levantar. Ficou deitado e viu que horas eram.'}] ``` Criação de um embedding para uma frase: ```python from transformers import BertTokenizer, BertModel, pipeline import torch model = BertModel.from_pretrained("ricardoz/BERTugues-base-portuguese-cased") tokenizer = BertTokenizer.from_pretrained("ricardoz/BERTugues-base-portuguese-cased", do_lower_case=False) input_ids = tokenizer.encode('[CLS] Eduardo abriu os olhos, mas não quis se levantar. Ficou deitado e viu que horas eram.', return_tensors='pt') with torch.no_grad(): last_hidden_state = model(input_ids).last_hidden_state[:, 0] last_hidden_state # tensor([[-1.5727e+00, 5.4707e-01, -2.6169e-01, -3.0714e-01, -9.4545e-01, # 4.1557e-01, -7.9884e-01, -2.1481e-01, 5.9792e-01, -1.4198e+00, # 1.1929e-01, 8.0228e-01, 5.4977e-01, 1.2710e+00, -2.9444e-01, # 1.3398e+00, 1.4413e-01, 8.2983e-01, -8.2349e-02, 1.8593e-01, ``` ## Mais informações Para mais informações acesso [nosso Github](https://github.com/ricardozago/BERTugues)!
SlyEcho/open_llama_7b_gguf
SlyEcho
2023-08-31T22:30:18Z
678
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2023-08-31T16:40:41Z
---
license: apache-2.0
---

# gguf versions of OpenLLaMa 7B

- Version: 1T tokens final version
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 1012 (6381d4e) or later
- [ggml version](https://huggingface.co/SlyEcho/open_llama_7b_ggml)

## Perplexity on wiki.test.406

Coming soon...
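## Example (unofficial sketch)

The card does not show an inference example. A minimal sketch using the `llama-cpp-python` bindings (one alternative to the llama.cpp CLI mentioned above), assuming a GGUF-capable build of the bindings and that one of the quantized files from this repository has been downloaded locally; the filename below is a placeholder, check the repository's file list for the actual names.

```python
# Hypothetical sketch; "open_llama_7b-q4_0.gguf" is a placeholder filename, not confirmed by the card.
from llama_cpp import Llama

llm = Llama(model_path="./open_llama_7b-q4_0.gguf", n_ctx=2048)
out = llm("Q: What is the capital of Iceland? A:", max_tokens=32, stop=["Q:", "\n"])
print(out["choices"][0]["text"])
```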
Yellow-AI-NLP/komodo-7b-base
Yellow-AI-NLP
2024-03-15T09:39:36Z
678
62
transformers
[ "transformers", "safetensors", "llama", "text-generation", "komodo", "id", "en", "jv", "su", "arxiv:2403.09362", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-07T23:10:40Z
--- language: - id - en - jv - su license: llama2 library_name: transformers tags: - komodo --- # Model Card for Komodo-7B-Base Komodo-7B-Base is a large language model that is developed through incremental pretraining and vocabulary expansion on top of Llama-2-7B-Base. This model can handle Indonesian, English and 11 regional languages of Indonesia. **Disclaimer** : This is not an instruction-tuned model, further fine-tuning is needed for downstream tasks. For example, people usually utilize the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) dataset for further fine-tuning on top of Llama-2-7B-Base model. Hence, there is no prompt template for this model. ## Model Details <h3 align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/638828121901766b88076aa1/eB0L_nmy3ZpwtGA6-vhbC.png" width="950" align="center"> </h3> ### Model Description More details can be found in our paper: https://arxiv.org/abs/2403.09362 - **Developed by:** [Yellow.ai](https://yellow.ai/) - **Model type:** Decoder - **Languages:** English, Indonesian, Acehnese, Balinese, Banjarese, Buginese, Madurese, Minangkabau, Javanese, Dayak Ngaju, Sundanese, Toba Batak, Lampungnese - **License:** llama2 ## Usage Example Since this is a gated model, you need to logged in to your HF account before using the model. Below is one way to do this. You can get the HF Token from your profile (Profile -> Settings -> Access Tokens) ``` import huggingface_hub huggingface_hub.login("YOUR_HF_TOKEN") ``` Once you are logged in, you can start download and load the model & tokenizer. We wrote a custom decoding function for Komodo-7B, that's why we need to pass the `trust_remote_code=True`. The code also works without this parameter, but decoding process will not work as expected. ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM device = "cuda:0" if torch.cuda.is_available() else "cpu" tokenizer = AutoTokenizer.from_pretrained("Yellow-AI-NLP/komodo-7b-base",trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("Yellow-AI-NLP/komodo-7b-base",trust_remote_code=True) model = model.to(device) ``` Then, you can try using the model. ``` full_prompt = "Candi borobudur adalah" tokens = tokenizer(full_prompt, return_tensors="pt").to(device) output = model.generate(tokens["input_ids"], eos_token_id=tokenizer.eos_token_id) print(tokenizer.decode(output[0], skip_special_tokens=True)) # Candi borobudur adalah candi yang terletak di Magelang, Jawa Tengah. ``` ## Technical Specifications ### Model Architecture and Objective Komodo-7B is a decoder model using the Llama-2 architecture. | Parameter | Komodo-7B | |-----------------|:-----------:| | Layers | 32 | | d_model | 4096 | | head_dim | 32 | | Vocabulary | 35008 | | Sequence Length | 4096 | ### Tokenizer Details Recognizing the importance of linguistic diversity, we focused on enhancing our language model's proficiency in both Indonesian and regional languages. To achieve this, we systematically expanded the tokenizer's vocabulary by identifying and incorporating approximately 2,000 frequently used words specific to Indonesian and 1,000 words for regional languages that were absent in the Llama-2 model. The standard method for enhancing a vocabulary typically involves developing a new tokenizer and integrating it with the existing one. This technique has shown impressive results in projects like Chinese-LLaMA and Open-Hathi. 
The effectiveness of this strategy can be attributed to the significant linguistic distinctions between languages such as Chinese and Hindi when compared to English. In contrast, the Indonesian language employs the same Latin script as English, which presents a different set of challenges. We tested the traditional method, as well as a new approach where we included the top n words (not tokens) from the Indonesian vocabulary. We discovered that with the new approach, we could achieve better fertility scores by adding around 3000 new vocabulary words. Adding more than 3000 words did not significantly improve the fertility score further, but it increased the size of the embedding matrix, leading to longer training times. More details can be found in our paper: https://arxiv.org/abs/2403.09362 ### Training Data More details can be found in our paper: https://arxiv.org/abs/2403.09362 ### Training Procedure More details can be found in our paper: https://arxiv.org/abs/2403.09362 #### Preprocessing More details can be found in our paper: https://arxiv.org/abs/2403.09362 ## Evaluation & Results Please note that the benchmarking values below are based on our SFT Model, Komodo-7B-Instruct, while here we only release the base model, Komodo-7B-base. | Organization | Model Name | Indo MMLU | ID-EN | XCOPA-ID | Intent Classification | Colloquial Detection | NusaX-Senti | ID-Hate Speech | TydiQA-ID | Indosum | Average | |--------------|--------------------|-----------|-------|----------|-----------------------|----------------------|-------------|----------------|-----------|---------|---------| | OpenAI | GPT-3.5-turbo-0301 | 51.3 | 64.5 | 70.0 | 82.0 | 64.1 | 47.2 | 68.0 | 85.3 | 41.0 | 63.7 | | OpenAI | GPT-3.5-turbo-0613 | 52.7 | 66.8 | 88.2 | 84.0 | 75.1 | 63.3 | 63.7 | 86.4 | 40.0 | 68.9 | | OpenAI | GPT-3.5-turbo-1106 | 53.3 | 69.7 | 89.3 | 84.0 | 64.2 | 59.8 | 56.6 | 88.0 | 42.0 | 67.4 | | OpenAI | GPT-4-preview-1106 | 69.8 | 78.0 | 98.3 | 89.0 | 92.7 | 66.1 | 73.4 | 72.0 | 33.0 | 74.7 | | Meta | Llama-2-7B-Chat | 30.4 | 45.6 | 41.5 | 57.0 | 31.4 | 2.9 | 41.3 | 11.7 | 34.0 | 32.9 | | Meta | Llama-2-13B-Chat | 32.0 | 61.7 | 38.0 | 59.0 | 31.1 | 58.7 | 57.2 | 71.9 | 40.0 | 50.0 | | Google | Gemma-7B-it | 37.4 | 73.6 | 57.7 | 77.1 | 18.8 | 44.2 | 54.8 | 73.3 | 44.0 | 53.4 | | Mistral | Mixtral-8x7B-v0.1-Instruct | 45.2 | 57.8 | 88.7 | 86.0 | 41.1 | 52.8 | 68.8 | 90.3 | 14.0 | 60.5 | | AISingapore | Sealion-7B-Instruct-NC | 23.9 | 26.9 | 41.3 | 37.0 | 41.8 | 30.7 | 57.3 | 65.3 | 26.0 | 38.9 | | Cohere | Aya-101-13B | 47.7 | 47.3 | 84.0 | 64.0 | 18.9 | 74.6 | 72.7 | 81.3 | 39.0 | 58.8 | | MBZUAI | Bactrian-X-Llama-7B | 23.6 | 43.2 | 45.3 | 42.0 | 50.3 | 44.5 | 42.4 | 65.0 | 15.0 | 41.3 | | Alibaba | Qwen-1.5-7B-chat | 40.0 | 56.0 | 29.5 | 85.0 | 41.8 | 58.7 | 63.9 | 51.22 | 29.0 | 50.6 | | Yellow.ai | Komodo-7B-Instruct | 43.2 | 90.5 | 79.6 | 84.0 | 73.6 | 79.3 | 56.2 | 90.3 | 43.0 | 71.1 | <h3 align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/638828121901766b88076aa1/CJkSjsVnC8MoMolIQ_Uv3.png" width="550" align="center"> </h3> More details can be found in our paper: https://arxiv.org/abs/2403.09362 ### Infrastructure | Training Details | Komodo-7B | |----------------------|:------------:| | AWS EC2 p4d.24xlarge | 1 instances | | Nvidia A100 40GB GPU | 8 | | Training Duration | 300 hours | ## Citation ``` @misc{owen2024komodo, title={Komodo: A Linguistic Expedition into Indonesia's Regional Languages}, author={Louis Owen and Vishesh Tripathi and Abhay Kumar and Biddwan 
Ahmed}, year={2024}, eprint={2403.09362}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Model Card Authors [Louis Owen](https://www.linkedin.com/in/louisowen/) <br> [Vishesh Tripathi](https://www.linkedin.com/in/vishesh-tripathi/) <br> [Abhay Kumar](https://www.linkedin.com/in/akanyaani/) <br> [Biddwan Ahmed](https://www.linkedin.com/in/biddwan-ahmed-917333126/) <br>
tokyotech-llm/Swallow-70b-instruct-v0.1
tokyotech-llm
2024-06-29T09:00:17Z
678
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "ja", "arxiv:2404.17790", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-06T14:39:34Z
--- language: - en - ja library_name: transformers pipeline_tag: text-generation license: llama2 model_type: llama --- # Swallow Our Swallow model has undergone continual pre-training from the [Llama 2 family](https://huggingface.co/meta-llama), primarily with the addition of Japanese language data. The tuned versions use supervised fine-tuning (SFT). Links to other models can be found in the index. # Model Release Updates We are excited to share the release schedule for our latest models: - **April 26, 2024**: Released version 0.1 of our enhanced instruction-tuned models: [Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), [Swallow-13b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v0.1), and [Swallow-70b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v0.1) as preview versions. - **March 2, 2024**: Released the [Swallow-7b-plus-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf), a model trained with approximately twice as many Japanese tokens as [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf). - **February 4, 2024**: Released the [Swallow-13b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf). - **January 26, 2024**: Released the [Swallow-7b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf), [Swallow-7b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf), [Swallow-70b-NVE-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf), and [Swallow-70b-NVE-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf) - **December 19, 2023**: Released the [Swallow-7b-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-hf), [Swallow-7b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf), [Swallow-13b-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-hf), [Swallow-13b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf), [Swallow-70b-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-hf), and [Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf). 
## Swallow Model Index |Model|Swallow-hf|Swallow-instruct-hf|Swallow-instruct-v0.1| |---|---|---|---| |7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf)|[Link](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v1.0)| |7B-Plus| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-plus-hf) | N/A | N/A | |13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-instruct-v1.0)| |70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf)| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-v1.0)| ## Swallow Model Index NVE (No Vocabulary Expansion) |Model|Swallow-NVE-hf|Swallow-NVE-instruct-hf| |---|---|---| |7B| [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-7b-NVE-instruct-hf)| |13B| [Link](https://huggingface.co/tokyotech-llm/Swallow-13b-NVE-hf) | N/A | |70B| [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-hf) | [Link](https://huggingface.co/tokyotech-llm/Swallow-70b-NVE-instruct-hf)| ![logo](./logo.png) This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/). ## Model Details * **Model type**: Please refer to LLaMA-2 technical report for details on the model architecture. * **Language(s)**: Japanese English * **Tokenizer**: This model employs a tokenizer that features a broadened vocabulary based on Japanese data. This allows for a more efficient representation of text using fewer tokens, leading to a notably faster inference process. * **Contact**: swallow[at]nlp.c.titech.ac.jp ## Instruct Model Performance ### MT-Bench JA #### Comparison to the past version * NOTE that the models with the `v0.1` suffix are newer versions compared to their original counterparts with the `hf`. * We report overall (i.e., average over scores of the first and second turns), first, and second turn scores. 
##### Overall |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Swallow-7b-instruct-v0.1 |0.3435|0.4450|0.4720|0.1853|0.1920|0.2204|0.3015|0.4594|0.4720| | Swallow-7b-instruct-hf |0.1833|0.2205|0.1975|0.1593|0.1045|0.1282|0.2672|0.1908|0.1980| | Swallow-13b-instruct-v0.1 |0.3669|0.4816|0.5562|0.2769|0.1020|0.1505|0.4179|0.4347|0.5150| | Swallow-13b-instruct-hf |0.2004|0.1932|0.2552|0.1507|0.1184|0.1285|0.2641|0.2434|0.2500| | Swallow-70b-instruct-v0.1 |0.4513|0.4822|0.5353|0.3497|0.3492|0.2668|0.5553|0.4955|0.5767| | Swallow-70b-instruct-hf |0.3259|0.2925|0.4283|0.3447|0.1562|0.1856|0.5634|0.3315|0.3071| ##### First Turn |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Swallow-7b-instruct-v0.1 |0.3829|0.4960|0.4800|0.2220|0.2820|0.2164|0.3220|0.5440|0.4980| | Swallow-7b-instruct-hf |0.2216|0.2830|0.2150|0.1590|0.1080|0.1470|0.3542|0.2450|0.2650| | Swallow-13b-instruct-v0.1 |0.3948|0.5400|0.5220|0.3020|0.1040|0.1760|0.5040|0.5180|0.4920| | Swallow-13b-instruct-hf |0.2304|0.2460|0.2640|0.1610|0.1360|0.1330|0.3070|0.3010|0.2950| | Swallow-70b-instruct-v0.1 |0.4849|0.5720|0.5020|0.4780|0.3680|0.2467|0.5400|0.5720|0.5960| | Swallow-70b-instruct-hf |0.3631|0.3420|0.4007|0.4220|0.1580|0.2044|0.6120|0.4280|0.3360| ##### Second Turn |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Swallow-7b-instruct-v0.1 |0.3059|0.3940|0.4640|0.1441|0.1000|0.2253|0.2811|0.3724|0.4449| | Swallow-7b-instruct-hf |0.1432|0.1567|0.1798|0.1603|0.1010|0.1085|0.1767|0.1343|0.1295| | Swallow-13b-instruct-v0.1 |0.3353|0.4213|0.5911|0.2516|0.1000|0.1244|0.3194|0.3473|0.5394| | Swallow-13b-instruct-hf |0.1692|0.1364|0.2453|0.1401|0.1000|0.1237|0.2199|0.1850|0.2050| | Swallow-70b-instruct-v0.1 |0.4179|0.3913|0.5689|0.2184|0.3280|0.2884|0.5711|0.4171|0.5562| | Swallow-70b-instruct-hf |0.2872|0.2398|0.4564|0.2647|0.1540|0.1676|0.5118|0.2311|0.2762| #### Comparison to the existing models We only provide the overall score in this section. 
##### 7B models |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Swallow-7b-instruct-v0.1 |0.3435|0.4450|0.4720|0.1853|0.1920|0.2204|0.3015|0.4594|0.4720| | ELYZA-japanese-Llama-2-7b-fast-instruct |0.2827|0.3289|0.3907|0.2424|0.1480|0.1584|0.3511|0.3053|0.3365| | calm2-7b-chat |0.3204|0.4657|0.4898|0.1837|0.1005|0.1414|0.3927|0.3601|0.4293| | calm2-7b-chat-dpo-experimental |0.3493|0.5312|0.5237|0.1857|0.1000|0.1813|0.3355|0.4320|0.5051| | RakutenAI-7B-instruct |0.2994|0.3623|0.3711|0.3333|0.1763|0.1581|0.4215|0.2824|0.2901| | RakutenAI-7B-chat |0.3667|0.4229|0.4644|0.3990|0.2161|0.2390|0.3416|0.3904|0.4601| ##### 13B models |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Swallow-13b-instruct-v0.1 |0.3669|0.4816|0.5562|0.2769|0.1020|0.1505|0.4179|0.4347|0.5150| | ELYZA-japanese-Llama-2-13b-instruct |0.3196|0.4400|0.4373|0.2098|0.2157|0.1572|0.3583|0.3243|0.4141| | ELYZA-japanese-Llama-2-13b-fast-instruct |0.3042|0.3729|0.3930|0.1236|0.2492|0.1862|0.4360|0.3233|0.3496| ##### 70B models |Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities| |---|---|---|---|---|---|---|---|---|---| | Swallow-70b-instruct-v0.1 |0.4513|0.4822|0.5353|0.3497|0.3492|0.2668|0.5553|0.4955|0.5767| | japanese-stablelm-instruct-beta-70b |0.3716|0.4179|0.3945|0.3656|0.2580|0.2186|0.4412|0.4663|0.4103| ## Evaluation Benchmarks ### MT-Bench JA We used [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) to assess the instruction-following capabilities of models. We utilized the following settings: - Implemantation: FastChat [Zheng+, 2023] (commit #e86e70d0) - Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3) - Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1) - Prompt for Judge: [Nejumi LLM-Lederboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1) - Judge: `gpt-4-1106-preview` - Scoring: Absolute scale normalized to a 0-1 range, averaged over five runs. ## Usage First install additional dependencies in [requirements.txt](./requirements.txt): ```sh pip install -r requirements.txt ``` ### Instruction format Ver0.1 This format must be adhered to strictly, as deviations may result in less optimal outputs from the model. The template used to construct a prompt for the Instruct model is specified as follows: ``` <s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST] ``` Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while [INST] and [/INST] are considered regular strings. 
For the "{SYSTEM_PROMPT}" part, We recommend using "あなたは誠実で優秀な日本人のアシスタントです。" For the "{USER_MESSAGE_1}" part, We recommend using {instruction}\n{input} In other words, We recommend the following: ``` <s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST] ``` ### Use the instruct model Ver0.1 ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "tokyotech-llm/Swallow-70b-instruct-v0.1" model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_name) device = "cuda" messages = [ {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"}, {"role": "user", "content": "東京工業大学の主なキャンパスについて教えてください"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=128, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## Training Datasets ### Instruction Tuning Ver0.1 The following datasets were used for the instruction tuning. - [OpenAssistant Conversations Dataset EN top-1 thread](https://huggingface.co/datasets/OpenAssistant/oasst2) - [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja) was used, where human utterances are included but the responses are not used. Instead, the responses were generated using the [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model. ## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Acknowledgements We thank Meta Research for releasing Llama 2 under an open license for others to build on. Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology. ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. 
## Authors Here are the team members: - From [Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members: - [Naoaki Okazaki](https://www.chokkan.org/index.ja.html) - [Sakae Mizuki](https://s-mizuki-nlp.github.io/) - [Hiroki Iida](https://meshidenn.github.io/) - [Mengsay Loem](https://loem-ms.github.io/) - [Shota Hirai](https://huggingface.co/Kotemo428) - [Kakeru Hattori](https://aya-se.vercel.app/) - [Masanari Ohi](https://twitter.com/stjohn2007) - From [YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members: - [Rio Yokota](https://twitter.com/rioyokota) - [Kazuki Fujii](https://twitter.com/okoge_kaz) - [Taishi Nakamura](https://twitter.com/Setuna7777_2) - [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto) - [Ishida Shigeki](https://www.wantedly.com/id/reborn27) ## How to cite ``` @misc{fujii2024continual, title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities}, author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki}, year={2024}, eprint={2404.17790}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
allknowingroger/Experimentmultiverse-7B-slerp
allknowingroger
2024-04-10T18:50:12Z
678
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "MTSAIR/multi_verse_model", "yam-peleg/Experiment31-7B", "base_model:MTSAIR/multi_verse_model", "base_model:yam-peleg/Experiment31-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-29T16:53:29Z
--- tags: - merge - mergekit - lazymergekit - MTSAIR/multi_verse_model - yam-peleg/Experiment31-7B base_model: - MTSAIR/multi_verse_model - yam-peleg/Experiment31-7B license: apache-2.0 --- # Experimentmultiverse-7B-slerp Experimentmultiverse-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) * [yam-peleg/Experiment31-7B](https://huggingface.co/yam-peleg/Experiment31-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: MTSAIR/multi_verse_model layer_range: [0, 32] - model: yam-peleg/Experiment31-7B layer_range: [0, 32] merge_method: slerp base_model: yam-peleg/Experiment31-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/Experimentmultiverse-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
allknowingroger/MistralMerge-7B-stock
allknowingroger
2024-04-10T18:34:07Z
678
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-01T17:58:51Z
---
tags:
- merge
- mergekit
- lazymergekit
license: apache-2.0
---

# MistralMerge-7B-stock

MistralMerge-7B-stock is a model_stock merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), with [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as the base model:
* [allknowingroger/JupiterMerge-7B-slerp](https://huggingface.co/allknowingroger/JupiterMerge-7B-slerp)
* [allknowingroger/RasGullaINEX12-7B-slerp](https://huggingface.co/allknowingroger/RasGullaINEX12-7B-slerp)

## 🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
  - model: allknowingroger/JupiterMerge-7B-slerp
  - model: allknowingroger/RasGullaINEX12-7B-slerp
merge_method: model_stock
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "allknowingroger/MistralMerge-7B-stock"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
ABX-AI/Quantum-Citrus-9B
ABX-AI
2024-04-11T12:58:29Z
678
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "not-for-all-audiences", "base_model:ABX-AI/Cerebral-Infinity-7B", "base_model:ABX-AI/Starfinite-Laymospice-v2-7B", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-03T13:30:07Z
--- license: other library_name: transformers tags: - mergekit - merge - mistral - not-for-all-audiences base_model: - ABX-AI/Cerebral-Infinity-7B - ABX-AI/Starfinite-Laymospice-v2-7B model-index: - name: Quantum-Citrus-9B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.19 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.96 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 50.57 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/J0snW3yfLadLuMYERL6X5.png) # Quantum-Citrus-9B This merge is another attempt at making and intelligent, refined and unaligned model. Based on my tests so far, it has accomplished the goals, and I am continuing to experiment with my interactions with it. It includes previous merges of Starling, Cerebrum, LemonadeRP, InfinityRP, and deep down has a base of layla v0.1, as I am not that happy with the result form using v0.2. The model is intended for fictional storytelling and roleplaying and may not be intended for all audences. [GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Quantum-Citrus-9B-GGUF-IQ-Imatrix) ## Merge Details This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ### Merge Method This model was merged using the passthrough merge method. 
### Models Merged The following models were included in the merge: * ABX-AI/Starfinite-Laymospice-v2-7B * ABX-AI/Cerebral-Infinity-7B ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: ABX-AI/Cerebral-Infinity-7B layer_range: [0, 20] - sources: - model: ABX-AI/Starfinite-Laymospice-v2-7B layer_range: [12, 32] merge_method: passthrough dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Quantum-Citrus-9B) | Metric |Value| |---------------------------------|----:| |Avg. |66.74| |AI2 Reasoning Challenge (25-Shot)|65.19| |HellaSwag (10-Shot) |84.75| |MMLU (5-Shot) |64.58| |TruthfulQA (0-shot) |55.96| |Winogrande (5-shot) |79.40| |GSM8k (5-shot) |50.57|
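## 💻 Usage (unofficial sketch)

The card above has no usage section. A minimal loading sketch, not taken from the original card, assuming the passthrough-merged checkpoint behaves like a standard Mistral-architecture causal LM; the prompt is only an illustration of the fictional-storytelling use case described above.

```python
# Hypothetical usage sketch for ABX-AI/Quantum-Citrus-9B (not part of the original card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ABX-AI/Quantum-Citrus-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Write the opening paragraph of a noir detective story set on a space station."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```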
allknowingroger/StarlingDolphin-7B-slerp
allknowingroger
2024-04-10T18:29:51Z
678
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Nexusflow/Starling-LM-7B-beta", "cognitivecomputations/dolphin-2.8-mistral-7b-v02", "conversational", "base_model:Nexusflow/Starling-LM-7B-beta", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-04T06:52:53Z
--- tags: - merge - mergekit - lazymergekit - Nexusflow/Starling-LM-7B-beta - cognitivecomputations/dolphin-2.8-mistral-7b-v02 base_model: - Nexusflow/Starling-LM-7B-beta - cognitivecomputations/dolphin-2.8-mistral-7b-v02 license: apache-2.0 --- # StarlingDolphin-7B-slerp StarlingDolphin-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) * [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) ## 🧩 Configuration ```yaml slices: - sources: - model: Nexusflow/Starling-LM-7B-beta layer_range: [0, 32] - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 layer_range: [0, 32] merge_method: slerp base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "allknowingroger/StarlingDolphin-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```