| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | — |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (categorical) | 245 values |
| tags | list | length 1–4.05k |
| pipeline_tag | string (categorical) | 48 values |
| createdAt | timestamp[us, tz=UTC] | — |
| card | string | length 1–901k |
flammenai/Mahou-1.3a-mistral-7B-GGUF
flammenai
2024-06-04T02:26:13Z
420
1
transformers
[ "transformers", "gguf", "dataset:flammenai/MahouMix-v1", "base_model:flammenai/Mahou-1.3a-mistral-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-02T03:39:19Z
--- library_name: transformers license: apache-2.0 base_model: - flammenai/Mahou-1.3a-mistral-7B datasets: - flammenai/MahouMix-v1 --- ![image/png](https://huggingface.co/flammenai/Mahou-1.0-mistral-7B/resolve/main/mahou1.png) # Mahou-1.3a-mistral-7B Mahou is designed to provide short messages in a conversational context. It is capable of casual conversation and character roleplay. ### Chat Format This model has been trained to use the ChatML format. ``` <|im_start|>system {{system}}<|im_end|> <|im_start|>{{char}} {{message}}<|im_end|> <|im_start|>{{user}} {{message}}<|im_end|> ``` ### Roleplay Format - Speech without quotes. - Actions in `*asterisks*` ``` *leans against wall cooly* so like, i just casted a super strong spell at magician academy today, not gonna lie, felt badass. ``` ### SillyTavern Settings 1. Use ChatML for the Context Template. 2. Enable Instruct Mode. 3. Use the [Mahou preset](https://huggingface.co/datasets/flammenai/Mahou-ST-ChatML-Instruct/raw/main/Mahou.json). 4. *Recommended* Additional stopping strings: `["\n", "<|", "</"]` ### Method DPO finetuned for 6 epochs using an A100 on Google Colab. [Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
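For reference, the ChatML turns shown above can be assembled programmatically before the prompt is handed to whichever GGUF runtime you use. The snippet below is a minimal sketch, not part of this repository: the helper function, the system text, and the speaker names are illustrative placeholders.

```python
# Minimal sketch: build a ChatML prompt in the format described above.
# The function name, system text, and speaker names are placeholders.
def build_chatml_prompt(system, turns, next_speaker):
    """turns is a list of (speaker, message) pairs; next_speaker is who should reply."""
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for speaker, message in turns:
        parts.append(f"<|im_start|>{speaker}\n{message}<|im_end|>")
    # Leave the next speaker's turn open so the model completes it.
    parts.append(f"<|im_start|>{next_speaker}\n")
    return "\n".join(parts)

prompt = build_chatml_prompt(
    system="You are a casual roleplay character.",
    turns=[("user", "hey, how was magician academy today?")],
    next_speaker="char",
)
print(prompt)
```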
cstr/Spaetzle-v85-7b-GGUF-q4
cstr
2024-06-24T19:34:00Z
420
0
null
[ "gguf", "merge", "mergekit", "cstr/Spaetzle-v80-7b", "cstr/Spaetzle-v79-7b", "cstr/Spaetzle-v81-7b", "cstr/Spaetzle-v71-7b", "de", "en", "base_model:cstr/Spaetzle-v80-7b", "base_model:cstr/Spaetzle-v79-7b", "base_model:cstr/Spaetzle-v81-7b", "base_model:cstr/Spaetzle-v71-7b", "license:cc-by-nc-4.0", "region:us" ]
null
2024-06-12T10:58:40Z
--- tags: - merge - mergekit - cstr/Spaetzle-v80-7b - cstr/Spaetzle-v79-7b - cstr/Spaetzle-v81-7b - cstr/Spaetzle-v71-7b base_model: - cstr/Spaetzle-v80-7b - cstr/Spaetzle-v79-7b - cstr/Spaetzle-v81-7b - cstr/Spaetzle-v71-7b license: cc-by-nc-4.0 language: - de - en --- # Spaetzle-v85-7b Spaetzle-v85-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [cstr/Spaetzle-v84-7b](https://huggingface.co/cstr/Spaetzle-v84-7b) * [cstr/Spaetzle-v81-7b](https://huggingface.co/cstr/Spaetzle-v81-7b) * [cstr/Spaetzle-v80-7b](https://huggingface.co/cstr/Spaetzle-v80-7b) * [cstr/Spaetzle-v79-7b](https://huggingface.co/cstr/Spaetzle-v79-7b) * [cstr/Spaetzle-v71-7b](https://huggingface.co/cstr/Spaetzle-v71-7b) ## Evaluation EQ-Bench (v2_de): 65.32, Parseable: 171.0 | Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average| |--------------------------------------------------------------|------:|------:|---------:|-------:|------:| |[Spaetzle-v85-7b](https://huggingface.co/cstr/Spaetzle-v85-7b)| 44.35| 75.99| 67.23| 46.55| 58.53| From [Intel/low_bit_open_llm_leaderboard](https://huggingface.co/datasets/Intel/ld_results/blob/main/cstr/Spaetzle-v85-7b-int4-inc/results_2024-06-12-21-00-34.json): | Metric | Value | |--------------|---------| | ARC-c | 62.63 | | ARC-e | 85.56 | | Boolq | 87.77 | | HellaSwag | 66.66 | | Lambada | 70.35 | | MMLU | 61.61 | | Openbookqa | 37.2 | | Piqa | 82.48 | | Truthfulqa | 50.43 | | Winogrande | 78.3 | | Average | 68.3 | From [Occiglot Euro LLM Leaderboard](https://huggingface.co/spaces/occiglot/euro-llm-leaderboard) | Model | 🇪🇺 Average ⬆️ | 🇩🇪 DE | 🇬🇧 EN | 🇬🇧ARC EN | 🇬🇧TruthfulQA EN | 🇬🇧Belebele EN | 🇬🇧HellaSwag EN | 🇬🇧MMLU EN | 🇩🇪ARC DE | 🇩🇪TruthfulQA DE | 🇩🇪Belebele DE | 🇩🇪HellaSwag DE | 🇩🇪MMLU DE | |----------------------------------------------|----------------|--------|--------|-------------|------------------|----------------|----------------|------------|-------------|------------------|----------------|----------------|------------| | mistral-community/Mixtral-8x22B-v0.1 | 68.3 | 66.81 | 72.87 | 70.56 | 52.29 | 93.89 | 70.41 | 77.17 | 63.9 | 29.31 | 92.44 | 77.9 | 70.49 | | **cstr/Spaetzle-v85-7b** | 63.26 | 61.11 | 71.94 | 70.48 | 67.16 | 90.33 | 68.54 | 63.17 | 58.43 | 36.93 | 84.22 | 70.62 | 55.36 | | cstr/Spaetzle-v60-7b | 63.32 | 60.95 | 71.65 | 69.88 | 66.24 | 90.11 | 68.43 | 63.59 | 58 | 37.31 | 84.22 | 70.09 | 55.11 | | VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct| 64.49 | 60.07 | 74.71 | 74.49 | 66.19 | 91.67 | 74.55 | 66.65 | 59.37 | 29.57 | 88.56 | 66.43 | 56.44 | | seedboxai/Llama-3-KafkaLM-8B-v0.1 | 62.27 | 59.67 | 69.75 | 69.03 | 58.14 | 90.78 | 64.35 | 66.43 | 57.66 | 30.33 | 85.89 | 66.88 | 57.58 | | cstr/llama3-8b-spaetzle-v33 | 62.75 | 59.56 | 70.68 | 69.54 | 59.31 | 91.44 | 66.04 | 67.06 | 57.06 | 28.55 | 87.56 | 66.7 | 57.92 | ### AGIEval | Task |Version| Metric |Value| |Stderr| |------------------------------|------:|--------|----:|---|-----:| |agieval_aqua_rat | 0|acc |23.23|± | 2.65| | | |acc_norm|22.44|± | 2.62| |agieval_logiqa_en | 0|acc |37.33|± | 1.90| | | |acc_norm|37.94|± | 1.90| |agieval_lsat_ar | 0|acc |25.22|± | 2.87| | | |acc_norm|23.04|± | 2.78| |agieval_lsat_lr | 0|acc |49.41|± | 2.22| | | |acc_norm|50.78|± | 2.22| |agieval_lsat_rc | 0|acc |64.68|± | 2.92| | | |acc_norm|63.20|± | 2.95| |agieval_sat_en | 0|acc |77.67|± | 2.91| | | |acc_norm|78.16|± | 2.89| |agieval_sat_en_without_passage| 0|acc |46.12|± | 3.48| | | |acc_norm|45.15|± | 3.48| 
|agieval_sat_math | 0|acc |35.45|± | 3.23| | | |acc_norm|34.09|± | 3.20| Average: 44.35% ### GPT4All | Task |Version| Metric |Value| |Stderr| |-------------|------:|--------|----:|---|-----:| |arc_challenge| 0|acc |63.82|± | 1.40| | | |acc_norm|64.76|± | 1.40| |arc_easy | 0|acc |85.90|± | 0.71| | | |acc_norm|82.32|± | 0.78| |boolq | 1|acc |87.61|± | 0.58| |hellaswag | 0|acc |67.39|± | 0.47| | | |acc_norm|85.36|± | 0.35| |openbookqa | 0|acc |38.80|± | 2.18| | | |acc_norm|48.80|± | 2.24| |piqa | 0|acc |83.03|± | 0.88| | | |acc_norm|84.17|± | 0.85| |winogrande | 0|acc |78.93|± | 1.15| Average: 75.99% ### TruthfulQA | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |50.80|± | 1.75| | | |mc2 |67.23|± | 1.49| Average: 67.23% ### Bigbench | Task |Version| Metric |Value| |Stderr| |------------------------------------------------|------:|---------------------|----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|54.74|± | 3.62| |bigbench_date_understanding | 0|multiple_choice_grade|68.29|± | 2.43| |bigbench_disambiguation_qa | 0|multiple_choice_grade|39.53|± | 3.05| |bigbench_geometric_shapes | 0|multiple_choice_grade|22.28|± | 2.20| | | |exact_str_match |12.26|± | 1.73| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|32.80|± | 2.10| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.00|± | 1.59| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|59.00|± | 2.84| |bigbench_movie_recommendation | 0|multiple_choice_grade|45.60|± | 2.23| |bigbench_navigate | 0|multiple_choice_grade|51.10|± | 1.58| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|70.10|± | 1.02| |bigbench_ruin_names | 0|multiple_choice_grade|52.68|± | 2.36| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|33.57|± | 1.50| |bigbench_snarks | 0|multiple_choice_grade|71.27|± | 3.37| |bigbench_sports_understanding | 0|multiple_choice_grade|74.54|± | 1.39| |bigbench_temporal_sequences | 0|multiple_choice_grade|40.00|± | 1.55| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|21.52|± | 1.16| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.86|± | 0.94| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|59.00|± | 2.84| Average: 46.55% Average score: 58.53% ## 🧩 Configuration ```yaml models: - model: cstr/Spaetzle-v84-7b # no parameters necessary for base model - model: cstr/Spaetzle-v80-7b parameters: density: 0.65 weight: 0.2 - model: cstr/Spaetzle-v79-7b parameters: density: 0.65 weight: 0.2 - model: cstr/Spaetzle-v81-7b parameters: density: 0.65 weight: 0.2 - model: cstr/Spaetzle-v71-7b parameters: density: 0.65 weight: 0.2 merge_method: dare_ties base_model: cstr/Spaetzle-v84-7b parameters: int8_mask: true dtype: bfloat16 random_seed: 0 tokenizer_source: base ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "cstr/Spaetzle-v85-7b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Ali-C137/Q2AW1M-0100
Ali-C137
2024-06-21T20:42:36Z
420
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-21T13:25:17Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ali-C137/Q2AW1M-0001
Ali-C137
2024-06-21T20:39:37Z
420
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-21T14:23:16Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CHE-72/Yi-1.5-6B-Chat-Q6_K-GGUF
CHE-72
2024-06-22T07:16:03Z
420
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:01-ai/Yi-1.5-6B-Chat", "license:apache-2.0", "region:us" ]
null
2024-06-22T07:15:42Z
--- base_model: 01-ai/Yi-1.5-6B-Chat license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # CHE-72/Yi-1.5-6B-Chat-Q6_K-GGUF This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q6_K-GGUF --hf-file yi-1.5-6b-chat-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q6_K-GGUF --hf-file yi-1.5-6b-chat-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Yi-1.5-6B-Chat-Q6_K-GGUF --hf-file yi-1.5-6b-chat-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Yi-1.5-6B-Chat-Q6_K-GGUF --hf-file yi-1.5-6b-chat-q6_k.gguf -c 2048 ```
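If you prefer Python over the CLI, the same Q6_K file can be loaded with the llama-cpp-python bindings. This is a minimal sketch under two assumptions: the GGUF file has already been downloaded locally, and the generation settings shown are arbitrary defaults rather than recommendations from this repository.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Assumes yi-1.5-6b-chat-q6_k.gguf has been downloaded next to this script.
llm = Llama(model_path="yi-1.5-6b-chat-q6_k.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```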
Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-IQ4_XS-GGUF
Tanvir1337
2024-06-25T21:55:23Z
420
0
null
[ "gguf", "bangla", "large language model", "llama-cpp", "gguf-my-repo", "bn", "en", "dataset:wikimedia/wikipedia", "base_model:BanglaLLM/BanglaLLama-3-8b-BnWiki-Base", "license:llama3", "region:us" ]
null
2024-06-25T21:54:59Z
--- base_model: BanglaLLM/BanglaLLama-3-8b-BnWiki-Base datasets: - wikimedia/wikipedia language: - bn - en license: llama3 tags: - bangla - large language model - llama-cpp - gguf-my-repo --- # Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-IQ4_XS-GGUF This model was converted to GGUF format from [`BanglaLLM/BanglaLLama-3-8b-BnWiki-Base`](https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Base) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-IQ4_XS-GGUF --hf-file banglallama-3-8b-bnwiki-base-iq4_xs-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-IQ4_XS-GGUF --hf-file banglallama-3-8b-bnwiki-base-iq4_xs-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-IQ4_XS-GGUF --hf-file banglallama-3-8b-bnwiki-base-iq4_xs-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Base-IQ4_XS-GGUF --hf-file banglallama-3-8b-bnwiki-base-iq4_xs-imat.gguf -c 2048 ```
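Beyond running the model, the metadata baked into the file can be inspected from Python with the `gguf` package that is published alongside llama.cpp. This is a rough sketch: the local file path is an assumption, and the reader API may differ slightly between package versions.

```python
# Rough sketch: inspect GGUF metadata with the `gguf` package (pip install gguf).
from gguf import GGUFReader

# Assumes the file from this repo has been downloaded locally.
reader = GGUFReader("banglallama-3-8b-bnwiki-base-iq4_xs-imat.gguf")

# List the metadata keys stored in the file (architecture, context length, etc.).
for key in reader.fields:
    print(key)

print("tensor count:", len(reader.tensors))
```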
robbiegwaldd/rephraser-4bit
robbiegwaldd
2024-06-28T17:30:55Z
420
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text2text-generation
2024-06-28T17:30:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DreadPoor/mergekit-model_stock-qokatpj-Q4_K_M-GGUF
DreadPoor
2024-07-01T05:01:45Z
420
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:DreadPoor/mergekit-model_stock-qokatpj", "endpoints_compatible", "region:us" ]
null
2024-07-01T05:01:19Z
--- base_model: DreadPoor/mergekit-model_stock-qokatpj library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # DreadPoor/mergekit-model_stock-qokatpj-Q4_K_M-GGUF This model was converted to GGUF format from [`DreadPoor/mergekit-model_stock-qokatpj`](https://huggingface.co/DreadPoor/mergekit-model_stock-qokatpj) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DreadPoor/mergekit-model_stock-qokatpj) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo DreadPoor/mergekit-model_stock-qokatpj-Q4_K_M-GGUF --hf-file mergekit-model_stock-qokatpj-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo DreadPoor/mergekit-model_stock-qokatpj-Q4_K_M-GGUF --hf-file mergekit-model_stock-qokatpj-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo DreadPoor/mergekit-model_stock-qokatpj-Q4_K_M-GGUF --hf-file mergekit-model_stock-qokatpj-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo DreadPoor/mergekit-model_stock-qokatpj-Q4_K_M-GGUF --hf-file mergekit-model_stock-qokatpj-q4_k_m.gguf -c 2048 ```
bloomberg/KBIR
bloomberg
2023-02-21T18:04:20Z
419
9
transformers
[ "transformers", "pytorch", "roberta", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- license: apache-2.0 --- # Keyphrase Boundary Infilling with Replacement (KBIR) The KBIR model as described in "Learning Rich Representations of Keyphrases from Text" from Findings of NAACL 2022 (https://aclanthology.org/2022.findings-naacl.67.pdf) builds on top of the RoBERTa architecture by adding an Infilling head and a Replacement Classification head that is used during pre-training. However, these heads are not used during the downstream evaluation of the model and we only leverage the pre-trained embeddings. Discarding the heads thereby allows us to be compatible with all AutoModel classes that RoBERTa supports. We provide examples on how to perform downstream evaluation on some of the tasks reported in the paper. ## Downstream Evaluation ### Keyphrase Extraction ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR") model = AutoModelForTokenClassification.from_pretrained("bloomberg/KBIR") from datasets import load_dataset dataset = load_dataset("midas/semeval2017_ke_tagged") ``` Reported Results: | Model | Inspec | SE10 | SE17 | |-----------------------|--------|-------|-------| | RoBERTa+BiLSTM-CRF | 59.5 | 27.8 | 50.8 | | RoBERTa+TG-CRF | 60.4 | 29.7 | 52.1 | | SciBERT+Hypernet-CRF | 62.1 | 36.7 | 54.4 | | RoBERTa+Hypernet-CRF | 62.3 | 34.8 | 53.3 | | RoBERTa-extended-CRF* | 62.09 | 40.61 | 52.32 | | KBI-CRF* | 62.61 | 40.81 | 59.7 | | KBIR-CRF* | 62.72 | 40.15 | 62.56 | ### Named Entity Recognition ``` from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR") model = AutoModelForTokenClassification.from_pretrained("bloomberg/KBIR") from datasets import load_dataset dataset = load_dataset("conll2003") ``` Reported Results: | Model | F1 | |---------------------------------|-------| | LSTM-CRF (Lample et al., 2016) | 91.0 | | ELMo (Peters et al., 2018) | 92.2 | | BERT (Devlin et al., 2018) | 92.8 | | (Akbik et al., 2019) | 93.1 | | (Baevski et al., 2019) | 93.5 | | LUKE (Yamada et al., 2020) | 94.3 | | LUKE w/o entity attention | 94.1 | | RoBERTa (Yamada et al., 2020) | 92.4 | | RoBERTa-extended* | 92.54 | | KBI* | 92.73 | | KBIR* | 92.97 | ### Question Answering ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR") model = AutoModelForQuestionAnswering.from_pretrained("bloomberg/KBIR") from datasets import load_dataset dataset = load_dataset("squad") ``` Reported Results: | Model | EM | F1 | |------------------------|-------|-------| | BERT | 84.2 | 91.1 | | XLNet | 89.0 | 94.5 | | ALBERT | 89.3 | 94.8 | | LUKE | 89.8 | 95.0 | | LUKE w/o entity attention | 89.2 | 94.7 | | RoBERTa | 88.9 | 94.6 | | RoBERTa-extended* | 88.88 | 94.55 | | KBI* | 88.97 | 94.7 | | KBIR* | 89.04 | 94.75 | ## Any other classification task As mentioned above since KBIR is built on top of the RoBERTa architecture, it is compatible with any AutoModel setting that RoBERTa is also compatible with. We encourage you to try fine-tuning KBIR on different datasets and report the downstream results. 
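Since only the pre-trained embeddings are carried over to downstream tasks, KBIR can also be loaded with a plain `AutoModel` to obtain contextual token representations before attaching a task-specific head such as a CRF. The snippet below is a minimal sketch for that use, not code from the paper:

```python
# Minimal sketch: extract contextual token embeddings from KBIR.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bloomberg/KBIR")
model = AutoModel.from_pretrained("bloomberg/KBIR")

inputs = tokenizer(
    "Keyphrase extraction identifies salient phrases in a document.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# (batch, sequence_length, hidden_size) representations to feed a task head.
print(outputs.last_hidden_state.shape)
```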
## Citation Please cite this work using the following BibTeX entry: ``` @inproceedings{kulkarni-etal-2022-learning, title = "Learning Rich Representation of Keyphrases from Text", author = "Kulkarni, Mayank and Mahata, Debanjan and Arora, Ravneet and Bhowmik, Rajarshi", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.67", doi = "10.18653/v1/2022.findings-naacl.67", pages = "891--906", abstract = "In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 8.16 points in F1) over SOTA, when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.", } ``` ## Contact For any questions contact [email protected]
GanjinZero/biobart-large
GanjinZero
2023-04-04T07:46:25Z
419
6
transformers
[ "transformers", "pytorch", "safetensors", "bart", "text2text-generation", "biobart", "biomedical", "en", "arxiv:2204.03905", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-12T07:01:05Z
--- language: - en license: apache-2.0 tags: - bart - biobart - biomedical inference: true widget: - text: "Influenza is a <mask> disease." - type: "text-generation" --- Paper: [BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model](https://arxiv.org/pdf/2204.03905.pdf) ``` @misc{BioBART, title={BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model}, author={Hongyi Yuan and Zheng Yuan and Ruyi Gan and Jiaxing Zhang and Yutao Xie and Sheng Yu}, year={2022}, eprint={2204.03905}, archivePrefix={arXiv} } ```
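For a quick smoke test of the checkpoint, the widget example from the metadata above can be reproduced with the standard transformers pipeline. This is a minimal sketch; the generation settings are illustrative and not prescribed by the authors:

```python
# Minimal sketch: run BioBART-large through the text2text-generation pipeline.
from transformers import pipeline

generator = pipeline("text2text-generation", model="GanjinZero/biobart-large")

# Mirrors the widget example; BART-style models fill <mask> spans during generation.
print(generator("Influenza is a <mask> disease.", max_new_tokens=20))
```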
osiria/blaze-it
osiria
2024-01-08T22:10:40Z
419
3
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "it", "dataset:wikipedia", "arxiv:1810.04805", "arxiv:1910.01108", "arxiv:2010.05609", "arxiv:1906.08101", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-03-05T15:42:44Z
--- license: apache-2.0 datasets: - wikipedia language: - it widget: - text: "milano è una [MASK] dell'italia" example_title: "Example 1" - text: "il sole è una [MASK] della via lattea" example_title: "Example 2" - text: "l'italia è una [MASK] dell'unione europea" example_title: "Example 3" --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: BLAZE 🔥</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Introduction</h3> This model is a <b>lightweight</b> and uncased version of <b>BERT</b> <b>[1]</b> for the <b>Italian</b> language. Its <b>55M parameters</b> and <b>220MB</b> size make it <b>50% lighter</b> than a typical mono-lingual BERT model. It is ideal when memory consumption and execution speed are critical while maintaining high-quality results. <h3>AILC CLiC-IT 2023 Proceedings</h3> "Blaze-IT: a lightweight BERT model for the Italian language" has been accepted at AILC CLiC-IT 2023 and published in the conference proceedings. <br> You can find the proceedings here: https://clic2023.ilc.cnr.it/proceedings/ <br> And the published paper here: https://ceur-ws.org/Vol-3596/paper43.pdf <h3>Model description</h3> The model builds on the multilingual <b>DistilBERT</b> <b>[2]</b> model (from the HuggingFace team: [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased)) as a starting point, focusing it on the Italian language while at the same time turning it into an uncased model by modifying the embedding layer (as in <b>[3]</b>, but computing document-level frequencies over the <b>Wikipedia</b> dataset and setting a frequency threshold of 0.1%), which brings a considerable reduction in the number of parameters. To compensate for the deletion of cased tokens, which now forces the model to exploit lowercase representations of words previously capitalized, the model has been further pre-trained on the Italian split of the [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset, using the <b>whole word masking [4]</b> technique to make it more robust to the new uncased representations. The resulting model has 55M parameters, a vocabulary of 13.832 tokens, and a size of 220MB, which makes it <b>50% lighter</b> than a typical mono-lingual BERT model and 20% lighter than a standard mono-lingual DistilBERT model. <h3>Training procedure</h3> The model has been trained for <b>masked language modeling</b> on the Italian <b>Wikipedia</b> (~3GB) dataset for 10K steps, using the AdamW optimizer, with a batch size of 512 (obtained through 128 gradient accumulation steps), a sequence length of 512, and a linearly decaying learning rate starting from 5e-5. The training has been performed using <b>dynamic masking</b> between epochs and exploiting the <b>whole word masking</b> technique. 
<h3>Performances</h3> The following metrics have been computed on the Part of Speech Tagging and Named Entity Recognition tasks, using the <b>UD Italian ISDT</b> and <b>WikiNER</b> datasets, respectively. The PoST model has been trained for 5 epochs, and the NER model for 3 epochs, both with a constant learning rate, fixed at 1e-5. For Part of Speech Tagging, the metrics have been computed on the default test set provided with the dataset, while for Named Entity Recognition the metrics have been computed with a 5-fold cross-validation | Task | Recall | Precision | F1 | | ------ | ------ | ------ | ------ | | Part of Speech Tagging | 97.48 | 97.29 | 97.37 | | Named Entity Recognition | 89.29 | 89.84 | 89.53 | The metrics have been computed at the token level and macro-averaged over the classes. <h3>Demo</h3> You can try the model online (fine-tuned on named entity recognition) using this web app: https://huggingface.co/spaces/osiria/blaze-it-demo <h3>Quick usage</h3> ```python from transformers import AutoTokenizer, DistilBertForMaskedLM from transformers import pipeline tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it") model = DistilBertForMaskedLM.from_pretrained("osiria/blaze-it") pipeline_mlm = pipeline(task="fill-mask", model=model, tokenizer=tokenizer) ``` <h3>Limitations</h3> This lightweight model is mainly trained on Wikipedia, so it's particularly suitable as an agile analyzer for large volumes of natively digital text from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations when it comes to chaotic text, containing errors and slang expressions (like social media posts) or when it comes to domain-specific text (like medical, financial or legal content). <h3>References</h3> [1] https://arxiv.org/abs/1810.04805 [2] https://arxiv.org/abs/1910.01108 [3] https://arxiv.org/abs/2010.05609 [4] https://arxiv.org/abs/1906.08101 <h3>License</h3> The model is released under <b>Apache-2.0</b> license
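As a follow-up to the Quick usage section above, the fill-mask pipeline can be exercised with one of the widget sentences from the model card metadata; a minimal sketch:

```python
# Minimal sketch: query the fill-mask pipeline with a widget example sentence.
from transformers import AutoTokenizer, DistilBertForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it")
model = DistilBertForMaskedLM.from_pretrained("osiria/blaze-it")
pipeline_mlm = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)

# Print the top predictions and their scores for the masked token.
for pred in pipeline_mlm("milano è una [MASK] dell'italia"):
    print(pred["token_str"], round(pred["score"], 4))
```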
xinlai/LISA-13B-llama2-v1-explanatory
xinlai
2023-08-29T13:24:58Z
419
3
transformers
[ "transformers", "pytorch", "llava", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-08-29T11:42:07Z
Entry not found
TheBloke/Marcoroni-7b-GGUF
TheBloke
2023-09-27T12:48:49Z
419
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "base_model:AIDC-ai-business/Marcoroni-7b", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-09-11T20:35:59Z
--- language: - en license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca model_name: Marcoroni 7b base_model: AIDC-ai-business/Marcoroni-7b inference: false model_creator: AIDC-ai-business model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Marcoroni 7b - GGUF - Model creator: [AIDC-ai-business](https://huggingface.co/AIDC-ai-business) - Original model: [Marcoroni 7b](https://huggingface.co/AIDC-ai-business/Marcoroni-7b) <!-- description start --> ## Description This repo contains GGUF format model files for [AIDC-ai-business's Marcoroni 7b](https://huggingface.co/AIDC-ai-business/Marcoroni-7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Marcoroni-7b-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Marcoroni-7b-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF) * [AIDC-ai-business's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AIDC-ai-business/Marcoroni-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [AIDC-ai-business's Marcoroni 7b](https://huggingface.co/AIDC-ai-business/Marcoroni-7b). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw. * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. Refer to the Provided Files table below to see what files use which methods, and how. 
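As a sanity check on the bits-per-weight figures quoted above, the Q4_K layout can be tallied by hand: each super-block covers 8 blocks of 32 weights stored as 4-bit quants, plus a 6-bit scale and a 6-bit min per block. Assuming an additional fp16 scale and fp16 min per super-block (an assumption, though it reproduces the quoted 4.5 bpw), the arithmetic works out as in this small sketch:

```python
# Back-of-the-envelope check of the Q4_K "4.5 bpw" figure quoted above.
# The fp16 super-block scale/min is an assumption, not taken from this card.
weights_per_superblock = 8 * 32             # 8 blocks of 32 weights
quant_bits = weights_per_superblock * 4     # 4-bit quantised weights
scale_bits = 8 * 6 + 8 * 6                  # 6-bit scale and 6-bit min per block
super_bits = 2 * 16                         # fp16 scale and min for the super-block
total_bits = quant_bits + scale_bits + super_bits
print(total_bits / weights_per_superblock)  # -> 4.5 bits per weight
```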
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [marcoroni-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [marcoroni-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [marcoroni-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [marcoroni-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [marcoroni-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [marcoroni-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [marcoroni-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [marcoroni-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [marcoroni-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [marcoroni-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [marcoroni-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [marcoroni-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Marcoroni-7b-GGUF/blob/main/marcoroni-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Marcoroni-7b-GGUF and below it, a specific filename to download, such as: marcoroni-7b.q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Marcoroni-7b-GGUF marcoroni-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Marcoroni-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Marcoroni-7b-GGUF marcoroni-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m marcoroni-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
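Before the ctransformers route below, here is a minimal llama-cpp-python sketch for the same file, using the Alpaca prompt template from this card. The layer-offload count, context size and sampling values are illustrative, and the file is assumed to have been downloaded locally as described above.

```python
# Minimal sketch: run the Q4_K_M file with llama-cpp-python and the Alpaca template.
from llama_cpp import Llama

llm = Llama(
    model_path="marcoroni-7b.Q4_K_M.gguf",  # assumes the file was downloaded locally
    n_gpu_layers=32,                        # set to 0 if no GPU acceleration is available
    n_ctx=4096,
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a GGUF file is in one sentence.\n\n### Response:\n"
)
out = llm(prompt, max_tokens=128, temperature=0.7, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```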
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Marcoroni-7b-GGUF", model_file="marcoroni-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: AIDC-ai-business's Marcoroni 7b

# Marcoroni-7B

Fine-tuned from Llama2-7B, we use Orca-style data and other open source data for fine-tuning.

# Model Details

* **Trained by**: AIDC AI-Business.
* **Model type:** **Marcoroni-7B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Marcoroni-7B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))

# Prompting

## Prompt Template for Alpaca style

```
### Instruction:

<prompt> (without the <>)

### Response:
```

# Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard))

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 60.1  |
| ARC (25-shot)         | 58.11 |
| HellaSwag (10-shot)   | 80.08 |
| MMLU (5-shot)         | 51.36 |
| TruthfulQA (0-shot)   | 50.85 |

<!-- original-model-card end -->
Undi95/Toppy-M-7B-GGUF
Undi95
2023-12-18T14:11:16Z
419
39
null
[ "gguf", "not-for-all-audiences", "nsfw", "merge", "license:cc-by-nc-4.0", "region:us" ]
null
2023-11-03T16:36:40Z
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
- merge
---

<!-- description start -->
## Description

This repo contains quantized files of Toppy-M-7B, a merge I have done with the new task_arithmetic merge method from mergekit.

This project was a request from [BlueNipples](https://huggingface.co/BlueNipples): [link](https://huggingface.co/Undi95/Utopia-13B/discussions/1)

<!-- description end -->
<!-- description start -->
## Models and LoRAs used

- [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5)
- [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9)
- [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- [lemonilia/AshhLimaRP-Mistral-7B](https://huggingface.co/lemonilia/AshhLimaRP-Mistral-7B)
- [Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b)
- [Undi95/Mistral-pippa-sharegpt-7b-qlora](https://huggingface.co/Undi95/Mistral-pippa-sharegpt-7b-qlora)

<!-- description end -->
## The sauce

```
openchat/openchat_3.5
lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38

NousResearch/Nous-Capybara-7B-V1.9
Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b x 0.27

HuggingFaceH4/zephyr-7b-beta
Undi95/Mistral-pippa-sharegpt-7b-qlora x 0.38

merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: Undi95/zephyr-7b-beta-pippa-sharegpt
    parameters:
      weight: 0.42
  - model: Undi95/Nous-Capybara-7B-V1.9-120-Days
    parameters:
      weight: 0.29
  - model: Undi95/openchat_3.5-LimaRP-13B
    parameters:
      weight: 0.48
dtype: bfloat16
```

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

If you want to support me, you can [here](https://ko-fi.com/undiai).
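For reference, a merge like the one described in "The sauce" above can be reproduced with mergekit's command-line tool. This is only a sketch: it assumes the YAML part of the config has been saved as `toppy.yml` (a hypothetical filename) and that the intermediate LoRA-merged models it references are available locally or on the Hub.

```shell
pip install mergekit

# Run the task_arithmetic merge defined in toppy.yml (hypothetical filename)
# and write the merged model to ./Toppy-M-7B
mergekit-yaml toppy.yml ./Toppy-M-7B --cuda
```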
TheBloke/deepseek-llm-7B-base-GGUF
TheBloke
2023-11-29T23:14:38Z
419
2
transformers
[ "transformers", "gguf", "deepseek", "base_model:deepseek-ai/deepseek-llm-7b-base", "license:other", "region:us" ]
null
2023-11-29T17:58:06Z
--- base_model: deepseek-ai/deepseek-llm-7b-base inference: false license: other license_link: LICENSE license_name: deepseek model_creator: DeepSeek model_name: Deepseek LLM 7B Base model_type: deepseek prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Deepseek LLM 7B Base - GGUF - Model creator: [DeepSeek](https://huggingface.co/deepseek-ai) - Original model: [Deepseek LLM 7B Base](https://huggingface.co/deepseek-ai/deepseek-llm-7b-base) <!-- description start --> ## Description This repo contains GGUF format model files for [DeepSeek's Deepseek LLM 7B Base](https://huggingface.co/deepseek-ai/deepseek-llm-7b-base). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/deepseek-llm-7B-base-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF) * [DeepSeek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/deepseek-ai/deepseek-llm-7b-base) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [deepseek-llm-7b-base.Q2_K.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q2_K.gguf) | Q2_K | 2 | 2.99 GB| 5.49 GB | smallest, significant quality loss - not recommended for most purposes | | [deepseek-llm-7b-base.Q3_K_S.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q3_K_S.gguf) | Q3_K_S | 3 | 3.14 GB| 5.64 GB | very small, high quality loss | | [deepseek-llm-7b-base.Q3_K_M.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q3_K_M.gguf) | Q3_K_M | 3 | 3.47 GB| 5.97 GB | very small, high quality loss | | [deepseek-llm-7b-base.Q3_K_L.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q3_K_L.gguf) | Q3_K_L | 3 | 3.75 GB| 6.25 GB | small, substantial quality loss | | [deepseek-llm-7b-base.Q4_0.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q4_0.gguf) | Q4_0 | 4 | 4.00 GB| 6.50 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [deepseek-llm-7b-base.Q4_K_S.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q4_K_S.gguf) | Q4_K_S | 4 | 4.03 GB| 6.53 GB | small, greater quality loss | | [deepseek-llm-7b-base.Q4_K_M.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q4_K_M.gguf) | Q4_K_M | 4 | 4.22 GB| 6.72 GB | medium, balanced quality - recommended | | [deepseek-llm-7b-base.Q5_0.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q5_0.gguf) | Q5_0 | 5 | 4.81 GB| 7.31 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [deepseek-llm-7b-base.Q5_K_S.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q5_K_S.gguf) | Q5_K_S | 5 | 4.81 GB| 7.31 GB | large, low quality loss - recommended | | [deepseek-llm-7b-base.Q5_K_M.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q5_K_M.gguf) | Q5_K_M | 5 | 4.93 GB| 7.43 GB | large, very low quality loss - recommended | | [deepseek-llm-7b-base.Q6_K.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q6_K.gguf) | Q6_K | 6 | 5.67 GB| 8.17 GB | very large, extremely low quality loss | | [deepseek-llm-7b-base.Q8_0.gguf](https://huggingface.co/TheBloke/deepseek-llm-7B-base-GGUF/blob/main/deepseek-llm-7b-base.Q8_0.gguf) | Q8_0 | 8 | 7.35 GB| 9.85 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/deepseek-llm-7B-base-GGUF and below it, a specific filename to download, such as: deepseek-llm-7b-base.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/deepseek-llm-7B-base-GGUF deepseek-llm-7b-base.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/deepseek-llm-7B-base-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/deepseek-llm-7B-base-GGUF deepseek-llm-7b-base.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m deepseek-llm-7b-base.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./deepseek-llm-7b-base.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "{prompt}", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./deepseek-llm-7b-base.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. 
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: DeepSeek's Deepseek LLM 7B Base <p align="center"> <img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true"> </p> <p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p> <hr> ### 1. Introduction of Deepseek LLM Introducing DeepSeek LLM, an advanced language model comprising 7 billion parameters. It has been trained from scratch on a vast dataset of 2 trillion tokens in both English and Chinese. In order to foster research, we have made DeepSeek LLM 7B/67B Base and DeepSeek LLM 7B/67B Chat open source for the research community. ### 2. Model Summary `deepseek-llm-7b-base` is a 7B parameter model with Multi-Head Attention trained on 2 trillion tokens from scratch. - **Home Page:** [DeepSeek](https://deepseek.com/) - **Repository:** [deepseek-ai/deepseek-LLM](https://github.com/deepseek-ai/deepseek-LLM) - **Chat With DeepSeek LLM:** [DeepSeek-LLM](https://chat.deepseek.com/) ### 3. How to Use Here give some examples of how to use our model. 
#### Text Completion ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig model_name = "deepseek-ai/deepseek-llm-7b-base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto") model.generation_config = GenerationConfig.from_pretrained(model_name) model.generation_config.pad_token_id = model.generation_config.eos_token_id text = "An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs.to(model.device), max_new_tokens=100) result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(result) ``` ### 4. License This code repository is licensed under the MIT License. The use of DeepSeek LLM models is subject to the Model License. DeepSeek LLM supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-LLM/blob/main/LICENSE-MODEL) for more details. ### 5. Contact If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). <!-- original-model-card end -->
digiplay/yiSMix2.8D_v1
digiplay
2023-12-13T20:38:34Z
419
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-12-13T20:21:13Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/228301/yismix28d
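The card above only links to the Civitai page. As a hedged usage sketch, assuming the standard `StableDiffusionPipeline` loading path indicated by this repo's diffusers tags (the prompt and settings below are only illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: loads this checkpoint with the standard StableDiffusionPipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/yiSMix2.8D_v1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a portrait photo, soft lighting", num_inference_steps=30).images[0]
image.save("sample.png")
```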
TheBloke/Solar-10.7B-SLERP-GGUF
TheBloke
2023-12-14T19:21:11Z
419
14
transformers
[ "transformers", "gguf", "solar", "en", "base_model:janhq/Solar-10.7B-SLERP", "license:apache-2.0", "region:us" ]
null
2023-12-14T17:57:10Z
--- base_model: janhq/Solar-10.7B-SLERP inference: false language: - en license: apache-2.0 model_creator: Jan model_name: Solar 10.7B SLERP model_type: solar prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Solar 10.7B SLERP - GGUF - Model creator: [Jan](https://huggingface.co/janhq) - Original model: [Solar 10.7B SLERP](https://huggingface.co/janhq/Solar-10.7B-SLERP) <!-- description start --> ## Description This repo contains GGUF format model files for [Jan's Solar 10.7B SLERP](https://huggingface.co/janhq/Solar-10.7B-SLERP). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF) * [Jan's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/janhq/Solar-10.7B-SLERP) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [solar-10.7b-slerp.Q2_K.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q2_K.gguf) | Q2_K | 2 | 4.55 GB| 7.05 GB | smallest, significant quality loss - not recommended for most purposes | | [solar-10.7b-slerp.Q3_K_S.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q3_K_S.gguf) | Q3_K_S | 3 | 4.67 GB| 7.17 GB | very small, high quality loss | | [solar-10.7b-slerp.Q3_K_M.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q3_K_M.gguf) | Q3_K_M | 3 | 5.19 GB| 7.69 GB | very small, high quality loss | | [solar-10.7b-slerp.Q3_K_L.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q3_K_L.gguf) | Q3_K_L | 3 | 5.65 GB| 8.15 GB | small, substantial quality loss | | [solar-10.7b-slerp.Q4_0.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q4_0.gguf) | Q4_0 | 4 | 6.07 GB| 8.57 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [solar-10.7b-slerp.Q4_K_S.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q4_K_S.gguf) | Q4_K_S | 4 | 6.10 GB| 8.60 GB | small, greater quality loss | | [solar-10.7b-slerp.Q4_K_M.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q4_K_M.gguf) | Q4_K_M | 4 | 6.46 GB| 8.96 GB | medium, balanced quality - recommended | | [solar-10.7b-slerp.Q5_0.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q5_0.gguf) | Q5_0 | 5 | 7.40 GB| 9.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [solar-10.7b-slerp.Q5_K_S.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q5_K_S.gguf) | Q5_K_S | 5 | 7.40 GB| 9.90 GB | large, low quality loss - recommended | | [solar-10.7b-slerp.Q5_K_M.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q5_K_M.gguf) | Q5_K_M | 5 | 7.60 GB| 10.10 GB | large, very low quality loss - recommended | | [solar-10.7b-slerp.Q6_K.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q6_K.gguf) | Q6_K | 6 | 8.81 GB| 11.31 GB | very large, extremely low quality loss | | [solar-10.7b-slerp.Q8_0.gguf](https://huggingface.co/TheBloke/Solar-10.7B-SLERP-GGUF/blob/main/solar-10.7b-slerp.Q8_0.gguf) | Q8_0 | 8 | 11.41 GB| 13.91 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Solar-10.7B-SLERP-GGUF and below it, a specific filename to download, such as: solar-10.7b-slerp.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Solar-10.7B-SLERP-GGUF solar-10.7b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Solar-10.7B-SLERP-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Solar-10.7B-SLERP-GGUF solar-10.7b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m solar-10.7b-slerp.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./solar-10.7b-slerp.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./solar-10.7b-slerp.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jan's Solar 10.7B SLERP <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <p align="center"> <a href="https://jan.ai/">Jan</a > - <a href="https://discord.gg/AsJ8krTT3N">Discord</a> </p> <!-- header end --> # Model Description This model uses the `Slerp` merge method from the best models on 14th Dec on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): 1. [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) 2. [janhq/Pandora-v1-10.7B](https://huggingface.co/janhq/Pandora-v1-10.7B) - base model: [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) The yaml config file for this model is here: ```yaml slices: - sources: - model: upstage/SOLAR-10.7B-Instruct-v1.0 layer_range: [0, 48] - model: janhq/Pandora-v1-10.7B layer_range: [0, 48] merge_method: slerp base_model: upstage/SOLAR-10.7B-Instruct-v1.0 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` # Prompt template - **ChatML** ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` # Run this model You can run this model using [Jan](https://jan.ai/) on Mac, Windows, or Linux. **Jan is an open source, ChatGPT alternative that is:** 💻 **100% offline on your machine**: Your conversations remain confidential, and visible only to you. 🗂️ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time. 
🌐 **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints

🌍 **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

- Please use the [Solar-10.7B-SLERP-GGUF](https://huggingface.co/janhq/Solar-10.7B-SLERP-GGUF) when using it in Jan.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/r7VmEBLGXpPLTu2MImM7S.png)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.

Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.

# Jan Model Merger
This is a test project for merging models.

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | ?     |
| ARC (25-shot)         | ?     |
| HellaSwag (10-shot)   | ?     |
| MMLU (5-shot)         | ?     |
| TruthfulQA (0-shot)   | ?     |
| Winogrande (5-shot)   | ?     |
| GSM8K (5-shot)        | ?     |

# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)

<!-- original-model-card end -->
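To illustrate the OpenAI-compatible local server mentioned above, here is a minimal sketch using the `openai` Python client. The port comes from the card itself; the `/v1` base path and the model identifier are assumptions, so check Jan's documentation for the exact values.

```python
from openai import OpenAI

# Sketch: point the OpenAI client at Jan's local server (port 1337 per the card).
# The /v1 base path and the model name are assumptions - adjust to what Jan exposes.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="solar-10.7b-slerp",  # hypothetical identifier; use the model id shown in Jan
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short greeting."},
    ],
)
print(response.choices[0].message.content)
```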
Kooten/DaringMaid-20B-V1.1-GGUF
Kooten
2024-01-21T11:00:01Z
419
3
null
[ "gguf", "Merge", "text-generation", "en", "license:cc-by-nc-4.0", "region:us" ]
text-generation
2024-01-21T01:34:14Z
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
tags:
- Merge
---
# DaringMaid-20B-V1.1

## What's New?
This is an updated version of [DaringMaid-20B](https://huggingface.co/Kooten/DaringMaid-20B). It is pretty much the same, but with Noromaid-13b v0.3 instead of v0.1.1 and with a slightly higher weight for Noromaid.

I used v0.3 since it was the last version to use Alpaca, so as not to break anything.

# Quants
EXL2: [6bpw](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-6bpw-exl2), [3.5bpw](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-3.5bpw-exl2), [3bpw](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-3bpw-exl2)

[GGUF](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-GGUF):
[Q3_K_M](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-GGUF/blob/main/DaringMaid-20B-V1.1-Q3_K_M.gguf) - [Q4_K_M](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-GGUF/blob/main/DaringMaid-20B-V1.1-Q4_K_M.gguf) - [Q5_K_M](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-GGUF/blob/main/DaringMaid-20B-V1.1-Q5_K_M.gguf) - [Q6_K](https://huggingface.co/Kooten/DaringMaid-20B-V1.1-GGUF/blob/main/DaringMaid-20B-V1.1-Q6_K.gguf)

[FP16](https://huggingface.co/Kooten/DaringMaid-20B-V1.1)

## Recipe:

- [sequelbox/DynamicFactor](https://huggingface.co/sequelbox/DynamicFactor)
- [NeverSleep/Noromaid](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3)
- [Undi95/Utopia](https://huggingface.co/Undi95/Utopia-13B)
- [CalderaAI/Thorns](https://huggingface.co/CalderaAI/13B-Thorns-l2)
- [Gryphe/MythoMax](https://huggingface.co/Gryphe/MythoMax-L2-13b)

# Prompt template:
I have been using Undi/Ikaris SillyTavern presets for Noromaid: [Context template](https://files.catbox.moe/l5hguf.json), [Instruct template](https://files.catbox.moe/nzygwn.json).

## Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Input:
{input}

### Response:

```

### Contact
Kooten on Discord.
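Since the card lists quant files but no download or run instructions, here is a minimal sketch for fetching the Q4_K_M file with `huggingface-cli` and running it with llama.cpp. The filename is taken from the GGUF links above; the llama.cpp flags and context size are only illustrative.

```shell
pip3 install huggingface-hub

# Download the Q4_K_M quant (filename taken from the links above)
huggingface-cli download Kooten/DaringMaid-20B-V1.1-GGUF DaringMaid-20B-V1.1-Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False

# Illustrative llama.cpp invocation using the Alpaca template from this card
./main -m DaringMaid-20B-V1.1-Q4_K_M.gguf -c 4096 --temp 0.7 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```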
LumiOpen/Viking-7B
LumiOpen
2024-05-16T07:00:15Z
419
35
transformers
[ "transformers", "safetensors", "llama", "text-generation", "fi", "en", "da", "sv", "no", "nn", "is", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:mc4", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-16T10:38:01Z
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
---

# Viking 7B

Viking 7B is a 7B parameter decoder-only transformer pretrained on Finnish, English, Swedish, Danish, Norwegian, Icelandic and code. It has been trained on 2 trillion tokens. Viking 7B is a fully open source model and is made available under the Apache 2.0 License.

Viking was created in a collaboration between the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, [SiloGen](https://www.silo.ai/silogen) from [Silo AI](https://www.silo.ai/), and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland.

This project is part of an ongoing effort to create open source large language models for non-English and especially low resource languages like Finnish. The model is fluent in Finnish, English and the Scandinavian languages, and capable of basic translation between them. It is also able to understand and generate code.

## Model Family

Viking is the second set of models released by LumiOpen and is available in 3 parameter counts:

[Viking 7B](https://huggingface.co/LumiOpen/Viking-7B)

[Viking 13B](https://huggingface.co/LumiOpen/Viking-13B)

[Viking 33B](https://huggingface.co/LumiOpen/Viking-33B)

## Model Overview
_**NOTE:** Viking is a base model which needs further fine tuning for most use cases._

Viking is a generative pretrained transformer using a LLaMA-like GPT architecture, and makes use of rotary positional embeddings and flash attention.

| Hyperparameter  | Value  |
| :-------------- | :----: |
| n_parameters    | 7.55B  |
| n_layers        | 32     |
| n_heads         | 32     |
| d_model         | 4096   |
| vocab_size      | 131072 |
| sequence_length | 4096   |

## Training

Viking 7B was trained on the LUMI supercomputer, using 256 AMD MI250X GPUs. Each MI250X GPU has two Graphics Complex Dies (GCDs) for a world size of 512 during training, using activation checkpointing, a micro batch size of 1, gradient accumulation of 16, and a 3D parallelism strategy of TP=1, PP=4, DP=128.

Training began in September 2023 using a [custom fork](https://github.com/LumiOpen/Megatron-DeepSpeed) of the Megatron-Deepspeed framework.

## Training Hyperparameters

| Hyperparameter | Value    | Comment |
| :------------: | :------: | :------ |
| Precision      | bfloat16 | |
| Optimizer      | AdamW    | |
| Learning rate  | 3e-4     | 10B tokens warm-up, cosine decay to 3e-5 |
| Weight decay   | 1e-1     | |
| Batch size     | 1024     | 1024 samples x 4096 tokens = 4194304 tokens |

## Tokenizer

Viking uses a custom 128K Bloom tokenizer trained on the same English, Finnish, Swedish, Danish, Norwegian, Icelandic and code dataset used to train the model.

## Dataset

Viking is being trained on a 2 trillion token mixed dataset of English, Finnish, Swedish, Danish, Norwegian, Icelandic and code. More details on the exact dataset will be published soon.

## Evaluation Results

Full evaluation results will be published with the final model.

## Training Checkpoints

Training checkpoints are available as branches in the repository. Checkpoints will be released roughly every 100B tokens. The main branch will always point to the latest checkpoint.
The following checkpoints are available: * [100B](https://huggingface.co/LumiOpen/Viking-7B/tree/100B) * [200B](https://huggingface.co/LumiOpen/Viking-7B/tree/200B) * [300B](https://huggingface.co/LumiOpen/Viking-7B/tree/300B) * [400B](https://huggingface.co/LumiOpen/Viking-7B/tree/400B) * [500B](https://huggingface.co/LumiOpen/Viking-7B/tree/500B) * [600B](https://huggingface.co/LumiOpen/Viking-7B/tree/600B) * [700B](https://huggingface.co/LumiOpen/Viking-7B/tree/700B) * [800B](https://huggingface.co/LumiOpen/Viking-7B/tree/800B) * [900B](https://huggingface.co/LumiOpen/Viking-7B/tree/900B) * [1000B](https://huggingface.co/LumiOpen/Viking-7B/tree/1000B) * [1100B](https://huggingface.co/LumiOpen/Viking-7B/tree/1100B) * [1200B](https://huggingface.co/LumiOpen/Viking-7B/tree/1200B) * [1300B](https://huggingface.co/LumiOpen/Viking-7B/tree/1300B) * [1400B](https://huggingface.co/LumiOpen/Viking-7B/tree/1400B) * [1500B](https://huggingface.co/LumiOpen/Viking-7B/tree/1500B) * [1600B](https://huggingface.co/LumiOpen/Viking-7B/tree/1600B) * [1700B](https://huggingface.co/LumiOpen/Viking-7B/tree/1700B) * [1800B](https://huggingface.co/LumiOpen/Viking-7B/tree/1800B) * [1900B](https://huggingface.co/LumiOpen/Viking-7B/tree/1900B) * [2000B](https://huggingface.co/LumiOpen/Viking-7B/tree/2000B) The transformers library allows you to load a checkpoint from a branch as follows: ```python branch = "2000B" model = transformers.AutoModelForCausalLM.from_pretrained( "LumiOpen/Viking-7B", torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16, revision=branch, ) ``` ## Ethical Considerations and Limitations _Viking 7B is a release of a partially trained model, and special care should be taken when using any output._ Viking is an advanced language model, primarily optimized for English, Finnish, Swedish, Norwegian, Danish, Icelandic and code, with no meaningful proficiency in any other languages. As with most AI-driven systems, Viking is a product of the vast data it has been trained on, which may reflect the imperfections, biases, and idiosyncrasies of the wider web. Viking may, at times, produce outputs that can be considered inaccurate, prejudiced, or controversial. Users and developers engaging with Viking should exercise discretion and consider additional evaluation and customization to ensure the model's responses align with their specific needs and ethical standards. ## License Viking is released under the Apache 2.0 license.
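As a quick usage sketch to complement the checkpoint-loading snippet above: Viking is a base model, so this is plain text continuation rather than chat, and the prompt and generation settings below are only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LumiOpen/Viking-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16,
    device_map="auto",
)

# Plain continuation - Viking is a base model, not an instruction-tuned chat model.
inputs = tokenizer("The capital of Finland is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```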
ManuelHettich/acne3_unfiltered_selection_LoRA
ManuelHettich
2024-02-23T21:41:35Z
419
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-23T21:41:33Z
--- license: openrail++ library_name: diffusers tags: - text-to-image - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of TOK acne3 widget: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - ManuelHettich/acne3_LoRA <Gallery /> ## Model description These are ManuelHettich/acne3_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of TOK acne3 to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](ManuelHettich/acne3_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
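Pending the TODO in the "How to use" section above, here is a minimal sketch of how SDXL LoRA weights like these are typically loaded with diffusers; the repository id is taken from this listing (the card text refers to it as `ManuelHettich/acne3_LoRA`), and the VAE swap and step count are illustrative assumptions rather than documented settings:

```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# The card notes training used the madebyollin/sdxl-vae-fp16-fix VAE.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("ManuelHettich/acne3_unfiltered_selection_LoRA")

# Trigger phrase from the card's "Trigger words" section.
image = pipe(prompt="a photo of TOK acne3", num_inference_steps=30).images[0]
image.save("acne3_sample.png")
```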
second-state/Llava-v1.6-Vicuna-7B-GGUF
second-state
2024-03-20T08:22:23Z
419
2
transformers
[ "transformers", "gguf", "llava", "text-generation", "base_model:liuhaotian/llava-v1.6-vicuna-7b", "license:llama2", "autotrain_compatible", "region:us" ]
text-generation
2024-02-25T15:46:16Z
--- base_model: liuhaotian/llava-v1.6-vicuna-7b inference: false library_name: transformers license: llama2 model_creator: liuhaotian model_name: Llava v1.6 Vicuna 7B pipeline_tag: text-generation quantized_by: Second State Inc. --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llava-v1.6-Vicuna-7B-GGUF ## Original Model [liuhaotian/llava-v1.6-vicuna-7b](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b) ## Run with LlamaEdge - LlamaEdge version: comming soon - Prompt template - Prompt type: `vicuna-llava` - Prompt string ```text <system_prompt>\nUSER:<image_embeddings>\n<textual_prompt>\nASSISTANT: ``` - Context size: `4096` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:llava-v1.6-vicuna-7b-Q5_K_M.gguf llama-api-server.wasm -p vicuna-llava -c 4096 --llava-mmproj llava-v1.6-vicuna-7b-mmproj-model-f16.gguf -m llava-v1.6-vicuna-7b ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [llava-v1.6-vicuna-7b-Q2_K.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q2_K.gguf) | Q2_K | 2 | 2.53 GB| smallest, significant quality loss - not recommended for most purposes | | [llava-v1.6-vicuna-7b-Q3_K_L.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q3_K_L.gguf) | Q3_K_L | 3 | 3.6 GB| small, substantial quality loss | | [llava-v1.6-vicuna-7b-Q3_K_M.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q3_K_M.gguf) | Q3_K_M | 3 | 3.3 GB| very small, high quality loss | | [llava-v1.6-vicuna-7b-Q3_K_S.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| very small, high quality loss | | [llava-v1.6-vicuna-7b-Q4_0.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [llava-v1.6-vicuna-7b-Q4_K_M.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended | | [llava-v1.6-vicuna-7b-Q4_K_S.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| small, greater quality loss | | [llava-v1.6-vicuna-7b-Q5_0.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [llava-v1.6-vicuna-7b-Q5_K_M.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended | | [llava-v1.6-vicuna-7b-Q5_K_S.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended | | [llava-v1.6-vicuna-7b-Q6_K.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, 
extremely low quality loss | | [llava-v1.6-vicuna-7b-Q8_0.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended | | [llava-v1.6-vicuna-7b-mmproj-model-f16.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-mmproj-model-f16.gguf) | f16 | 8 | 624 MB| | *Quantized with llama.cpp b2230*
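For concreteness, the `vicuna-llava` prompt template above expands to something like the following (the system prompt and question are placeholders; `<image_embeddings>` is filled in by the runtime from the mmproj model):

```text
You are a helpful assistant.
USER:<image_embeddings>
What is shown in this image?
ASSISTANT:
```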
mradermacher/Aguie_v0.1-GGUF
mradermacher
2024-05-06T05:16:03Z
419
0
transformers
[ "transformers", "gguf", "ko", "en", "base_model:Heoni/Aguie_v0.1", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-05T05:01:17Z
--- base_model: Heoni/Aguie_v0.1 language: - ko - en library_name: transformers license: cc-by-nc-nd-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Heoni/Aguie_v0.1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q2_K.gguf) | Q2_K | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ3_XS.gguf) | IQ3_XS | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ3_S.gguf) | IQ3_S | 5.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q3_K_S.gguf) | Q3_K_S | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.IQ4_XS.gguf) | IQ4_XS | 7.3 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q4_K_S.gguf) | Q4_K_S | 7.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q4_K_M.gguf) | Q4_K_M | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Aguie_v0.1-GGUF/resolve/main/Aguie_v0.1.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
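Beyond the pointer to TheBloke's READMEs above, a minimal local-inference sketch with `llama-cpp-python` (an assumption — the card does not prescribe a runtime; the file name matches the Q4_K_M entry in the table, and the context size and prompt are placeholders):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Aguie_v0.1.Q4_K_M.gguf",  # downloaded from the table above
    n_ctx=4096,                           # adjust to the model's actual context limit
)

out = llm("Q: What can you tell me about Seoul?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```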
mradermacher/LemonWizardv3-GGUF
mradermacher
2024-05-06T04:41:47Z
419
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Statuo/LemonWizardv3", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T20:42:10Z
--- base_model: Statuo/LemonWizardv3 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Statuo/LemonWizardv3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LemonWizardv3-GGUF/resolve/main/LemonWizardv3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
scb10x/typhoon-v1.5-72b
scb10x
2024-06-03T15:33:39Z
419
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "th", "en", "arxiv:2312.13951", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-06T06:16:04Z
--- license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen1.5-72B/raw/main/LICENSE language: - th - en pipeline_tag: text-generation --- **Typhoon-v1.5-72B: Thai Large Language Model (Pretrained)** **Typhoon-v1.5-72B** is a *pretrained* Thai 🇹🇭 large language model with 72 billion parameters, based on Qwen1.5-72B. For the release post, please see our [blog](https://blog.opentyphoon.ai/typhoon-1-5-release-a9364cb8e8d7). ## **Model Description** - **Model type**: A 72B pretrained decoder-only model based on the Qwen1.5 architecture. - **Requirement**: transformers 4.38.0 or newer. - **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧 - **License**: [Qwen License](https://huggingface.co/Qwen/Qwen1.5-72B/raw/main/LICENSE) ## **Intended Uses & Limitations** This model is a pretrained base model. Thus, it may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses. ## **Follow us** **https://twitter.com/opentyphoon** ## **Support / Ask any question** **https://discord.gg/CqyBscMFpg** ## **SCB10X AI Team** - Kunat Pipatanakul, Potsawee Manakul, Sittipong Sripaisarnmongkol, Natapong Nitarach, Pathomporn Chokchainant, Kasima Tharnpipitchai - If you find Typhoon-72B useful for your work, please cite it using: ``` @article{pipatanakul2023typhoon, title={Typhoon: Thai Large Language Models}, author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai}, year={2023}, journal={arXiv preprint arXiv:2312.13951}, url={https://arxiv.org/abs/2312.13951} } ``` ## **Contact Us** - General & Collaboration: **[[email protected]](mailto:[email protected])**, **[[email protected]](mailto:[email protected])** - Technical: **[[email protected]](mailto:[email protected])**
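In line with the one/few-shot recommendation above, a minimal prompting sketch (the English–Thai one-shot prompt is only an illustration; serving 72B weights in bf16 needs on the order of 150 GB of accelerator memory, so multi-GPU sharding or quantization is assumed and not shown here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-v1.5-72b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shards across available GPUs
)

# One-shot prompt: the base model continues the pattern instead of following instructions.
prompt = (
    "English: Thank you\nThai: ขอบคุณ\n\n"
    "English: Good morning\nThai:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```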
irfanfadhullah/winagent-8b-Instruct-bnb-q4_k_m-gguf
irfanfadhullah
2024-06-11T05:53:31Z
419
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "text-generation", "en", "dataset:osunlp/Mind2Web", "dataset:rootsautomation/ScreenSpot", "dataset:ScreenAgent", "dataset:WinAgent", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-05-19T00:45:54Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-bnb-4bit datasets: - osunlp/Mind2Web - rootsautomation/ScreenSpot - ScreenAgent - WinAgent metrics: - bleu - f1 - accuracy pipeline_tag: text-generation --- # Uploaded model - **Developed by:** irfanfadhullah - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
QuantFactory/LLaMA3-iterative-DPO-final-GGUF
QuantFactory
2024-05-26T03:03:20Z
419
1
transformers
[ "transformers", "gguf", "llama", "conversational", "text-generation", "base_model:RLHFlow/LLaMA3-iterative-DPO-final", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-25T02:59:43Z
--- license: llama3 library_name: transformers pipeline_tag: text-generation tags: - llama - conversational base_model: RLHFlow/LLaMA3-iterative-DPO-final --- # LLaMA3-iterative-DPO-final-GGUF This is quantized version of [RLHFlow/LLaMA3-iterative-DPO-final](https://huggingface.co/RLHFlow/LLaMA3-iterative-DPO-final) created using llama.cpp # Model Description We release an unofficial checkpoint of a state-of-the-art instruct model of its class, **LLaMA3-iterative-DPO-final**. On all three widely-used instruct model benchmarks: **Alpaca-Eval-V2**, **MT-Bench**, **Chat-Arena-Hard**, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it), and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling. Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy! ## Model Releases See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) of the training set, reward/preference model, SFT model. - [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT) - [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1) ## Dataset - [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K) - [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1) ## Training methods We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and thus much cheaper and simpler to train and tune compared to PPO-based approaches. Unlike widely-used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization. For a detailed exposition, please refer to our accompanying technical report. 
## Chat Benchmarks | **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** | |-------------------------|----------|-------------------|-----------------------|--------------|---------------------| | **Small Open-Sourced Models** | | | | | | | Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 | | Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - | | Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 | | Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - | | Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 | | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 | | **Ours** | | | | | | | Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 | | Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 | | Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** | | **Large Open-Sourced Models** | | | | | | | Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 | | Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 | | Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 | | Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 | | LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 | | Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 | | **Proprietary Models** | | | | | | | GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 | | GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 | | GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 | | Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 | | GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 | ## Academic Benchmarks | **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** | |----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------| | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 | | Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 | | Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 | | Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 | ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final") tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final") messages = [ {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"}, ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = model_inputs.to(device) model.to(device) output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True) model_outputs = tokenizer.batch_decode(output_tokens) print(model_outputs[0]) ``` ## Limitations RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is for research purpose. While safety and ethical considerations are integral to our alignment process, there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions. We are committed to continuous improvement in our models to minimize such risks and encourage responsible usage.
bongbongs/NewMes-v15
bongbongs
2024-06-02T19:52:08Z
419
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-29T16:28:50Z
--- license: llama3 --- Base model: Llama-3-8B ## Model Description - **Developed by:** bongbongs - **Model type:** LLM - **Language(s) (NLP):** English - **Finetuned from model:** llama-3-8b Fine-tuned on medical training datasets.
bunkalab/Phi-3-mini-128k-instruct-GPT4Choice-4.6k-DPO
bunkalab
2024-05-31T21:34:12Z
419
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-31T21:21:52Z
--- library_name: transformers license: apache-2.0 language: - en pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/maritaca-ai_-_sabia-7b-gguf
RichardErkhov
2024-06-01T13:51:18Z
419
0
null
[ "gguf", "arxiv:2304.07880", "region:us" ]
null
2024-06-01T09:56:41Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) sabia-7b - GGUF - Model creator: https://huggingface.co/maritaca-ai/ - Original model: https://huggingface.co/maritaca-ai/sabia-7b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [sabia-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q2_K.gguf) | Q2_K | 2.36GB | | [sabia-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [sabia-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.IQ3_S.gguf) | IQ3_S | 2.75GB | | [sabia-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [sabia-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.IQ3_M.gguf) | IQ3_M | 2.9GB | | [sabia-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q3_K.gguf) | Q3_K | 3.07GB | | [sabia-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [sabia-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [sabia-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [sabia-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q4_0.gguf) | Q4_0 | 3.56GB | | [sabia-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [sabia-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [sabia-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q4_K.gguf) | Q4_K | 3.8GB | | [sabia-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [sabia-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q4_1.gguf) | Q4_1 | 3.95GB | | [sabia-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q5_0.gguf) | Q5_0 | 4.33GB | | [sabia-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [sabia-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q5_K.gguf) | Q5_K | 4.45GB | | [sabia-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [sabia-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q5_1.gguf) | Q5_1 | 4.72GB | | [sabia-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q6_K.gguf) | Q6_K | 5.15GB | | [sabia-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/maritaca-ai_-_sabia-7b-gguf/blob/main/sabia-7b.Q8_0.gguf) | Q8_0 | 4.88GB | Original model description: --- language: - pt model-index: - name: sabia-7b results: - task: type: text-generation name: 
Text Generation dataset: name: ENEM Challenge (No Images) type: eduagarcia/enem_challenge split: train args: num_few_shot: 3 metrics: - type: acc value: 55.07 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BLUEX (No Images) type: eduagarcia-temp/BLUEX_without_images split: train args: num_few_shot: 3 metrics: - type: acc value: 47.71 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: OAB Exams type: eduagarcia/oab_exams split: train args: num_few_shot: 3 metrics: - type: acc value: 41.41 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 RTE type: assin2 split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 46.68 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 STS type: eduagarcia/portuguese_benchmark split: test args: num_few_shot: 15 metrics: - type: pearson value: 1.89 name: pearson source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: FaQuAD NLI type: ruanchaves/faquad-nli split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 58.34 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HateBR Binary type: ruanchaves/hatebr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 61.93 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: PT Hate Speech Binary type: hate_speech_portuguese split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 64.13 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: tweetSentBR type: eduagarcia-temp/tweetsentbr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 46.64 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=maritaca-ai/sabia-7b name: Open Portuguese LLM Leaderboard --- Sabiá-7B is Portuguese language model developed by [Maritaca AI](https://www.maritaca.ai/). **Input:** The model accepts only text input. **Output:** The Model generates text only. **Model Architecture:** Sabiá-7B is an auto-regressive language model that uses the same architecture of LLaMA-1-7B. **Tokenizer:** It uses the same tokenizer as LLaMA-1-7B. **Maximum sequence length:** 2048 tokens. 
**Pretraining data:** The model was pretrained on 7 billion tokens from the Portuguese subset of ClueWeb22, starting with the weights of LLaMA-1-7B and further trained for an additional 10 billion tokens, approximately 1.4 epochs of the training dataset. **Data Freshness:** The pretraining data has a cutoff of mid-2022. **License:** The licensing is the same as LLaMA-1's, restricting the model's use to research purposes only. **Paper:** For more details, please refer to our paper: [Sabiá: Portuguese Large Language Models](https://arxiv.org/pdf/2304.07880.pdf) ## Few-shot Example Given that Sabiá-7B was trained solely on a language modeling objective without fine-tuning for instruction following, it is recommended for few-shot tasks rather than zero-shot tasks, like in the example below. ```python import torch from transformers import LlamaTokenizer, LlamaForCausalLM tokenizer = LlamaTokenizer.from_pretrained("maritaca-ai/sabia-7b") model = LlamaForCausalLM.from_pretrained( "maritaca-ai/sabia-7b", device_map="auto", # Automatically loads the model in the GPU, if there is one. Requires pip install accelerate low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 # If your GPU does not support bfloat16, change to torch.float16 ) prompt = """Classifique a resenha de filme como "positiva" ou "negativa". Resenha: Gostei muito do filme, é o melhor do ano! Classe: positiva Resenha: O filme deixa muito a desejar. Classe: negativa Resenha: Apesar de longo, valeu o ingresso. Classe:""" input_ids = tokenizer(prompt, return_tensors="pt") output = model.generate( input_ids["input_ids"].to("cuda"), max_length=1024, eos_token_id=tokenizer.encode("\n")) # Stop generation when a "\n" token is detected # The output contains the input tokens, so we have to skip them. output = output[0][len(input_ids["input_ids"][0]):] print(tokenizer.decode(output, skip_special_tokens=True)) ``` If your GPU does not have enough RAM, try using int8 precision. However, expect some degradation in the model output quality when compared to fp16 or bf16. ```python model = LlamaForCausalLM.from_pretrained( "maritaca-ai/sabia-7b", device_map="auto", low_cpu_mem_usage=True, load_in_8bit=True, # Requires pip install bitsandbytes ) ``` ## Results in Portuguese Below we show the results on the Poeta benchmark, which consists of 14 Portuguese datasets. For more information on the Normalized Preferred Metric (NPM), please refer to our paper. |Model | NPM | |--|--| |LLaMA-1-7B| 33.0| |LLaMA-2-7B| 43.7| |Sabiá-7B| 48.5| ## Results in English Below we show the average results on 6 English datasets: PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, and OpenBookQA. |Model | NPM | |--|--| |LLaMA-1-7B| 50.1| |Sabiá-7B| 49.0| ## Citation Please use the following bibtex to cite our paper: ``` @InProceedings{10.1007/978-3-031-45392-2_15, author="Pires, Ramon and Abonizio, Hugo and Almeida, Thales Sales and Nogueira, Rodrigo", editor="Naldi, Murilo C. and Bianchi, Reinaldo A.
C.", title="Sabi{\'a}: Portuguese Large Language Models", booktitle="Intelligent Systems", year="2023", publisher="Springer Nature Switzerland", address="Cham", pages="226--240", isbn="978-3-031-45392-2" } ``` # [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/maritaca-ai/sabia-7b) | Metric | Value | |--------------------------|---------| |Average |**47.09**| |ENEM Challenge (No Images)| 55.07| |BLUEX (No Images) | 47.71| |OAB Exams | 41.41| |Assin2 RTE | 46.68| |Assin2 STS | 1.89| |FaQuAD NLI | 58.34| |HateBR Binary | 61.93| |PT Hate Speech Binary | 64.13| |tweetSentBR | 46.64|
marulyanova/another-another-test
marulyanova
2024-06-05T12:31:16Z
419
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-06-05T12:27:09Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### another_another_test Dreambooth model trained by marulyanova with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
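The card only points at the A1111 Colab; for completeness, a minimal diffusers sketch for trying the checkpoint directly (the DreamBooth instance prompt is not stated above, so the prompt below is a placeholder to replace with the actual trigger phrase):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "marulyanova/another-another-test",
    torch_dtype=torch.float16,
).to("cuda")

# Replace with the instance prompt used during DreamBooth training.
image = pipe("a photo of the trained concept", num_inference_steps=30).images[0]
image.save("sample.png")
```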
mradermacher/JailbrokeAI-GGUF
mradermacher
2024-06-06T06:23:48Z
419
1
transformers
[ "transformers", "gguf", "en", "base_model:TroyDoesAI/JailbrokeAI", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-06T05:28:30Z
--- base_model: TroyDoesAI/JailbrokeAI language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TroyDoesAI/JailbrokeAI <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/JailbrokeAI-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q2_K.gguf) | Q2_K | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.IQ3_XS.gguf) | IQ3_XS | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q3_K_S.gguf) | Q3_K_S | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.IQ3_M.gguf) | IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q3_K_L.gguf) | Q3_K_L | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.IQ4_XS.gguf) | IQ4_XS | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q4_K_S.gguf) | Q4_K_S | 8.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q5_K_S.gguf) | Q5_K_S | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q5_K_M.gguf) | Q5_K_M | 10.9 | | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q6_K.gguf) | Q6_K | 12.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/JailbrokeAI-GGUF/resolve/main/JailbrokeAI.Q8_0.gguf) | Q8_0 | 16.3 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Ali-C137/Q2AW1M-1010
Ali-C137
2024-06-21T20:40:35Z
419
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-21T14:11:32Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
luishcarvalho/llama_smart_contract_3
luishcarvalho
2024-06-23T17:39:59Z
419
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-23T16:51:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf
RichardErkhov
2024-06-29T13:57:22Z
419
0
null
[ "gguf", "region:us" ]
null
2024-06-29T13:53:51Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-smol_llama-100m-dpo-full - GGUF - Model creator: https://huggingface.co/amazingvince/ - Original model: https://huggingface.co/amazingvince/zephyr-smol_llama-100m-dpo-full/ | Name | Quant method | Size | | ---- | ---- | ---- | | [zephyr-smol_llama-100m-dpo-full.Q2_K.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q2_K.gguf) | Q2_K | 0.04GB | | [zephyr-smol_llama-100m-dpo-full.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.IQ3_XS.gguf) | IQ3_XS | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.IQ3_S.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.IQ3_S.gguf) | IQ3_S | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q3_K_S.gguf) | Q3_K_S | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.IQ3_M.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.IQ3_M.gguf) | IQ3_M | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.Q3_K.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q3_K.gguf) | Q3_K | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q3_K_M.gguf) | Q3_K_M | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q3_K_L.gguf) | Q3_K_L | 0.05GB | | [zephyr-smol_llama-100m-dpo-full.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.IQ4_XS.gguf) | IQ4_XS | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.Q4_0.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q4_0.gguf) | Q4_0 | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.IQ4_NL.gguf) | IQ4_NL | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q4_K_S.gguf) | Q4_K_S | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.Q4_K.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q4_K.gguf) | Q4_K | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q4_K_M.gguf) | Q4_K_M | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.Q4_1.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q4_1.gguf) | 
Q4_1 | 0.06GB | | [zephyr-smol_llama-100m-dpo-full.Q5_0.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q5_0.gguf) | Q5_0 | 0.07GB | | [zephyr-smol_llama-100m-dpo-full.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q5_K_S.gguf) | Q5_K_S | 0.07GB | | [zephyr-smol_llama-100m-dpo-full.Q5_K.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q5_K.gguf) | Q5_K | 0.07GB | | [zephyr-smol_llama-100m-dpo-full.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q5_K_M.gguf) | Q5_K_M | 0.07GB | | [zephyr-smol_llama-100m-dpo-full.Q5_1.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q5_1.gguf) | Q5_1 | 0.07GB | | [zephyr-smol_llama-100m-dpo-full.Q6_K.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q6_K.gguf) | Q6_K | 0.08GB | | [zephyr-smol_llama-100m-dpo-full.Q8_0.gguf](https://huggingface.co/RichardErkhov/amazingvince_-_zephyr-smol_llama-100m-dpo-full-gguf/blob/main/zephyr-smol_llama-100m-dpo-full.Q8_0.gguf) | Q8_0 | 0.1GB | Original model description: --- license: apache-2.0 base_model: amazingvince/zephyr-smol_llama-100m-sft-full tags: - generated_from_trainer model-index: - name: zephyr-smol_llama-100m-dpo-full results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-smol_llama-100m-dpo-full This model is a fine-tuned version of [amazingvince/zephyr-smol_llama-100m-sft-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-sft-full) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.5465 - Rewards/chosen: -0.0518 - Rewards/rejected: -0.7661 - Rewards/accuracies: 0.7170 - Rewards/margins: 0.7143 - Logps/rejected: -450.2018 - Logps/chosen: -588.7877 - Logits/rejected: -4.9602 - Logits/chosen: -5.2468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6549 | 0.26 | 1000 | 0.6037 | -0.1205 | -0.4850 | 0.6550 | 0.3644 | -447.3903 | -589.4750 | -4.7410 | -5.0341 | | 0.5349 | 0.52 | 2000 | 0.5779 | -0.0126 | -0.5080 | 0.6770 | 0.4955 | -447.6208 | -588.3951 | -4.8645 | -5.1463 | | 0.6029 | 0.77 | 3000 | 0.5657 | 0.0902 | -0.4636 | 0.6900 | 0.5538 | -447.1767 | -587.3674 | -5.0016 | -5.2911 | | 0.5273 | 1.03 | 4000 | 0.5596 | 0.0496 | -0.5449 | 0.7040 | 0.5944 | -447.9891 | -587.7738 | -4.9972 | -5.2892 | | 0.5 | 1.29 | 5000 | 0.5557 | 0.0585 | -0.6110 | 0.7050 | 0.6695 | -448.6505 | -587.6843 | -5.0108 | -5.3047 | | 0.5056 | 1.55 | 6000 | 0.5499 | 0.0054 | -0.6719 | 0.7130 | 0.6773 | -449.2598 | -588.2154 | -4.9988 | -5.2907 | | 0.4608 | 1.81 | 7000 | 0.5500 | -0.0376 | -0.7494 | 0.7030 | 0.7118 | -450.0341 | -588.6455 | -5.0549 | -5.3406 | | 0.426 | 2.07 | 8000 | 0.5472 | -0.0106 | -0.7021 | 0.7100 | 0.6916 | -449.5617 | -588.3751 | -4.9750 | -5.2626 | | 0.3875 | 2.32 | 9000 | 0.5464 | -0.0011 | -0.7171 | 0.7140 | 0.7159 | -449.7113 | -588.2810 | -4.9935 | -5.2796 | | 0.397 | 2.58 | 10000 | 0.5462 | -0.0391 | -0.7566 | 0.7190 | 0.7175 | -450.1064 | -588.6602 | -4.9737 | -5.2618 | | 0.4486 | 2.84 | 11000 | 0.5459 | -0.0493 | -0.7667 | 0.7110 | 0.7174 | -450.2074 | -588.7629 | -4.9569 | -5.2441 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
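For a quick local test of one of the files above, a minimal llama-cpp-python sketch can be used (this assumes `pip install llama-cpp-python`, the Q4_K_M file from the table downloaded locally, and a Zephyr-style chat template; check the base SFT model card for the exact prompt format):

```python
# Minimal sketch (assumption: llama-cpp-python is installed and the GGUF file is in the working directory).
from llama_cpp import Llama

llm = Llama(model_path="zephyr-smol_llama-100m-dpo-full.Q4_K_M.gguf", n_ctx=1024)

# Zephyr-style prompt is an assumption; adjust to the template the model was trained with.
prompt = "<|system|>\nYou are a helpful assistant.</s>\n<|user|>\nWhat is DPO?</s>\n<|assistant|>\n"
out = llm(prompt, max_tokens=64, temperature=0.7)
print(out["choices"][0]["text"])
```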
Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF
Tech-Meld
2024-06-30T17:49:12Z
419
1
null
[ "gguf", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:nvidia/Llama3-ChatQA-1.5-8B", "license:llama3", "region:us" ]
text-generation
2024-06-30T17:48:50Z
--- base_model: nvidia/Llama3-ChatQA-1.5-8B language: - en license: llama3 pipeline_tag: text-generation tags: - nvidia - chatqa-1.5 - chatqa - llama-3 - pytorch - llama-cpp - gguf-my-repo --- # Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF This model was converted to GGUF format from [`nvidia/Llama3-ChatQA-1.5-8B`](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF --hf-file llama3-chatqa-1.5-8b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF --hf-file llama3-chatqa-1.5-8b-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF --hf-file llama3-chatqa-1.5-8b-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF --hf-file llama3-chatqa-1.5-8b-q4_k_s.gguf -c 2048 ```
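For a Python route in addition to the CLI and server commands above, a recent llama-cpp-python build can pull the file straight from this repo (the `Llama.from_pretrained` helper and the prompt layout below are assumptions; verify the ChatQA prompt format against the original model card):

```python
# Sketch: assumes `pip install llama-cpp-python huggingface_hub`; downloads the GGUF file on first use.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Tech-Meld/Llama3-ChatQA-1.5-8B-Q4_K_S-GGUF",
    filename="llama3-chatqa-1.5-8b-q4_k_s.gguf",
    n_ctx=2048,
)

# Rough approximation of the ChatQA prompt layout; see the original card for the canonical template.
prompt = "System: You are a helpful assistant.\n\nUser: What is retrieval-augmented generation?\n\nAssistant:"
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```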
csarron/mobilebert-uncased-squad-v1
csarron
2023-04-05T17:53:38Z
418
0
transformers
[ "transformers", "pytorch", "safetensors", "mobilebert", "question-answering", "en", "dataset:squad", "arxiv:2004.02984", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en thumbnail: license: mit tags: - question-answering - mobilebert datasets: - squad metrics: - squad widget: - text: "Which name is also used to describe the Amazon rainforest in English?" context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." - text: "How many square kilometers of rainforest is covered in the basin?" context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species." --- ## MobileBERT fine-tuned on SQuAD v1 [MobileBERT](https://arxiv.org/abs/2004.02984) is a thin version of BERT_LARGE, while equipped with bottleneck structures and a carefully designed balance between self-attentions and feed-forward networks. This model was fine-tuned from the HuggingFace checkpoint `google/mobilebert-uncased` on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer). 
## Details | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD1.1 | train | 90.6K | | SQuAD1.1 | eval | 11.1k | ### Fine-tuning - Python: `3.7.5` - Machine specs: `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz` `Memory: 32 GiB` `GPUs: 2 GeForce GTX 1070, each with 8GiB memory` `GPU driver: 418.87.01, CUDA: 10.1` - script: ```shell # after install https://github.com/huggingface/transformers cd examples/question-answering mkdir -p data wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json export SQUAD_DIR=`pwd`/data python run_squad.py \ --model_type mobilebert \ --model_name_or_path google/mobilebert-uncased \ --do_train \ --do_eval \ --do_lower_case \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 16 \ --per_gpu_eval_batch_size 16 \ --learning_rate 4e-5 \ --num_train_epochs 5.0 \ --max_seq_length 320 \ --doc_stride 128 \ --warmup_steps 1400 \ --output_dir $SQUAD_DIR/mobilebert-uncased-warmup-squad_v1 2>&1 | tee train-mobilebert-warmup-squad_v1.log ``` It took about 3 hours to finish. ### Results **Model size**: `95M` | Metric | # Value | # Original ([Table 5](https://arxiv.org/pdf/2004.02984.pdf))| | ------ | --------- | --------- | | **EM** | **82.6** | **82.9** | | **F1** | **90.0** | **90.0** | Note that the above results didn't involve any hyperparameter search. ## Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/mobilebert-uncased-squad-v1", tokenizer="csarron/mobilebert-uncased-squad-v1" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions) # output: # {'score': 0.7754058241844177, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
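To spot-check the reported EM/F1 numbers, the `evaluate` library's SQuAD metric can be combined with the pipeline above. This is a sketch only: it shows the expected prediction/reference format, not the exact evaluation script used for the table.

```python
# Assumes `pip install evaluate transformers`; illustrates the SQuAD EM/F1 metric format on a single example.
import evaluate
from transformers import pipeline

qa = pipeline("question-answering", model="csarron/mobilebert-uncased-squad-v1")
context = "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California."
pred = qa(question="What day was the game played on?", context=context)

squad_metric = evaluate.load("squad")
result = squad_metric.compute(
    predictions=[{"id": "ex1", "prediction_text": pred["answer"]}],
    references=[{"id": "ex1", "answers": {"text": ["February 7, 2016"], "answer_start": [23]}}],
)
print(result)  # e.g. {'exact_match': 100.0, 'f1': 100.0} on this single example
```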
KoboldAI/fairseq-dense-13B-Shinen
KoboldAI
2022-04-07T09:10:04Z
418
30
transformers
[ "transformers", "pytorch", "xglm", "text-generation", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-04-07T08:05:12Z
--- language: en license: mit --- # Fairseq-dense 13B - Shinen ## Model Description Fairseq-dense 13B-Shinen is a finetune created using Fairseq's MoE dense model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.** ## Training data The training data contains user-generated stories from sexstories.com. All stories are tagged in the following way: ``` [Theme: <theme1>, <theme2>, <theme3>] <Story goes here> ``` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Shinen') >>> generator("She was staring at me", do_sample=True, min_length=50) [{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}] ``` ### Limitations and Biases Based on known problems with NLP technology, potentially relevant factors include bias (gender, profession, race and religion). ### BibTeX entry and citation info ``` Artetxe et al. (2021): Efficient Large Scale Language Modeling with Mixtures of Experts ```
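Because the training stories carry a `[Theme: ...]` header, prompts can reuse that format. The sketch below uses placeholder themes; the values are illustrative and not taken from the dataset.

```python
# Combines the documented [Theme: ...] tag format with the text-generation pipeline shown above.
from transformers import pipeline

generator = pipeline('text-generation', model='KoboldAI/fairseq-dense-13B-Shinen')

# Placeholder themes; the card documents the format as [Theme: <theme1>, <theme2>, <theme3>].
prompt = "[Theme: beach, summer, romance]\nThe sun was setting over the water when"
print(generator(prompt, do_sample=True, max_new_tokens=50)[0]["generated_text"])
```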
timm/semnasnet_100.rmsp_in1k
timm
2023-04-27T21:14:35Z
418
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1807.11626", "license:apache-2.0", "region:us" ]
image-classification
2022-12-13T00:01:03Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for semnasnet_100.rmsp_in1k A MNasNet image classification model with Squeeze-and-Excitation channel attention. Trained on ImageNet-1k in `timm` using recipe template described below. Recipe details: * A simple RmsProp based recipe without RandAugment. Using RandomErasing, mixup, dropout, standard random-resize-crop augmentation. * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging * Step (exponential decay w/ staircase) LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 3.9 - GMACs: 0.3 - Activations (M): 6.2 - Image size: 224 x 224 - **Papers:** - MnasNet: Platform-Aware Neural Architecture Search for Mobi: https://arxiv.org/abs/1807.11626 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('semnasnet_100.rmsp_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'semnasnet_100.rmsp_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 112, 112]) # torch.Size([1, 24, 56, 56]) # torch.Size([1, 40, 28, 28]) # torch.Size([1, 112, 14, 14]) # torch.Size([1, 320, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'semnasnet_100.rmsp_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1280, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model 
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{tan2019mnasnet, title={Mnasnet: Platform-aware neural architecture search for mobile}, author={Tan, Mingxing and Chen, Bo and Pang, Ruoming and Vasudevan, Vijay and Sandler, Mark and Howard, Andrew and Le, Quoc V}, booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition}, pages={2820--2828}, year={2019} } ```
mirroring/pastel-mix
mirroring
2023-06-25T18:39:08Z
418
3
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-25T18:39:08Z
--- language: - en license: creativeml-openrail-m thumbnail: >- https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01194-%20.png tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true duplicated_from: JamesFlare/pastel-mix --- Update Logs: [1/27/22] I uploaded the model in CivitAI! -> https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you! [2/2/22] Uploaded a lora version. <center><h1><b>Pastel Mix</b></h1></center> <p align="center">Welcome to Pastel Mix - a stylized latent diffusion model. This model is intended to produce high-quality, highly detailed anime style with just a few prompts.</p> <p align="center">This model is made with the thought of imitating pastel-like art and the potential of mixing LORAs into a model altogether to create a fantastic mix. Recipe for this mix could be found below. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. </p> <p align="center">e.g. <b>masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit</b></p> <p align="center"><img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0020.png"> <img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0018.png"></p> ------- ## How to download with Git ``` git lfs install git clone https://huggingface.co/andite/pastel-mix ``` ## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). 
```python from diffusers import StableDiffusionPipeline import torch model_id = "andite/pastel-mix" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "hatsune_miku" image = pipe(prompt).images[0] image.save("./hatsune_miku.png") ``` # Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run pastel-mix: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/pastel-mix) ## Examples ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/grid-0035.png) ``` masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/grid-reimu.png) ``` masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/grid-0043.png) ``` masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01166-%20.png) ``` masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, 1girl, black hair, yellow 
eyes, red roses, chains Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2203084815, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent ``` ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/01177-%20.png) ``` masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, face focus, 1girl, white hair, golden eyes, long hair, halo, angel wings, serene expression, looking at viewer Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 240742293, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent ``` ## So what the hell is the 'better-vae' version? I merged the pastel-waifu-diffusion.vae.pt inside the model so you don't have to set up the vae anymore. ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/xyz_grid-0004-%20.png) life so much ez now since you don't have to download the vae and set it up right? ## What is pastelmix-lora.safetensors? It's a lora version which is made from extracting the loras from pastel-mix using a script that is similar to add-difference method. https://github.com/bmaltais/kohya_ss/blob/master/train_network_README.md ## Guide For the settings or parameters, I recommend using these settings. ![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/settings.png) ``` Sampler: DPM++ 2M Karras Steps: 20 CFG Scale: 7 Hires. Fix: On Upscaler: Latent (MUST!) Hires Steps: 20 Denoising Strength: 0. ``` I prefer using 0.6 since it's the sweet spot of this model. If you can find a better setting for this model, then good for you lol. Latent upscaler is the best setting for me since it retains or enhances the pastel style. Other upscalers like Lanczos or Anime6B tends to smoothen them out, removing the pastel-like brushwork. Please use the **VAE** that I uploaded in this repository. It is from the [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion-v1-4/tree/main/vae) team. Credits to [haru](https://huggingface.co/hakurei) for letting me rename and upload it. ## Tip (Optional) Putting mksks style in the beginning of the prompt can further influence the pastel-like style and make the output better. It is optional though, so it's up to you. You don't really need it. 
![](https://huggingface.co/andite/pastel-mix/resolve/main/example-images/xy_grid-0016-%20.png) ``` mksks style, masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts)) Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 580841049, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent ``` ## Recipe Merging the models. | Model: A | Model: B | Weight | Base alpha | Merge Name | | --- | --- | --- | --- | --- | | [dpepmkmp](https://huggingface.co/closertodeath/dpepmkmp) | [Tea](https://huggingface.co/andite/desserts) | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,1,1,1,1,1,1,0.1,0.3,0.5,0.7,0.9,1 | 0 | dpeptea | | dpeptea | [basil-mix](https://huggingface.co/nuigurumi/basil_mix) | 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 | 0 | dpeptea-basil | Merging the loras into the model. | Model | Lora | Weight | Merge Name | | --- | --- | --- | --- | | [dpeptea-basil](https://huggingface.co/closertodeath/dpepteahands3) | [Magic LORA](https://cdn.discordapp.com/attachments/1065289257243115540/1066346221876301845/MagicLORA.pt) | 0.3 | dpeptea-1 | | dpeptea-1 | [Jordan_3](https://huggingface.co/SatyamSSJ10/ConceptArt) | 1 | dpeptea-2 | | dpeptea-2 | [sttabi_v1.4-04](https://huggingface.co/dolphinz/stlora) | 0.5 | dpeptea-3 | | dpeptea-3 | [xlimo768](https://huggingface.co/closertodeath/ctdlora) | 0.6 | dpeptea-4 | | dpeptea-4 | [dpep 2 768](https://huggingface.co/closertodeath/ctdlora)| 0.35 | Pastel-Mix | ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content. 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) ------- ## Big Thanks to The 東方Project AI community for their wonderful LORAs. - [Closertodeath](https://huggingface.co/closertodeath) for dpepmkmp model, and the loras: xlimo768, dpep 2 768 - [dolphinz/sometimes#9353](https://huggingface.co/dolphinz) for tabi artstyle Lora. - [SatyamSSJ10](https://huggingface.co/SatyamSSJ10/ConceptArt) for Jordan_3 Lora. - randomaccessmemories#4004 for Magic Lora
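Since the guide above recommends pairing the model with the Waifu Diffusion VAE, a diffusers sketch that wires an external VAE into the pipeline looks roughly like this (the local path is a placeholder; the `.pt` VAE from this repo may first need conversion to diffusers format):

```python
# Sketch only: "./pastel-waifu-diffusion-vae" is a placeholder for a diffusers-format copy of the recommended VAE.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

vae = AutoencoderKL.from_pretrained("./pastel-waifu-diffusion-vae", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "andite/pastel-mix", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("masterpiece, best quality, upper body, 1girl, looking at viewer", num_inference_steps=20).images[0]
image.save("pastel-sample.png")
```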
yongsun-yoon/minilmv2-bertscore-distilled
yongsun-yoon
2023-07-06T23:00:56Z
418
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2023-06-30T22:04:37Z
This is a distilled BERTScore model. Please read [this post](https://medium.com/@yongsun.yoon/bertscore-knowledge-distillation-42721b3508e2) for details. ```python from bert_score import BERTScorer texts1 = ['This is a text.'] texts2 = ['This is another text.'] scorer = BERTScorer(model_type='yongsun-yoon/minilmv2-bertscore-distilled', num_layers=6) P, R, F = scorer.score(texts1, texts2) ```
lorahub/flan_t5_large-dream_baseline
lorahub
2023-07-24T09:44:20Z
418
0
peft
[ "peft", "region:us" ]
null
2023-07-24T09:44:10Z
--- library_name: peft ---
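The card only declares the `peft` library, so the snippet below is a heavily hedged sketch: the base model is inferred from the repository name (flan-t5-large) and is not documented in the card itself.

```python
# Assumption: the adapter targets google/flan-t5-large (inferred from the repo name, not stated in the card).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "lorahub/flan_t5_large-dream_baseline")

inputs = tokenizer("Answer the question: why did the narrator smile?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```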
TheBloke/Tulpar-7B-v0-GGUF
TheBloke
2023-09-27T12:48:39Z
418
3
transformers
[ "transformers", "gguf", "llama", "en", "base_model:HyperbeeAI/Tulpar-7b-v0", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-10T13:29:27Z
--- language: - en license: llama2 library_name: transformers model_name: Tulpar 7B v0 base_model: HyperbeeAI/Tulpar-7b-v0 inference: false model_creator: HyperbeeAI model_type: llama prompt_template: '### User: {prompt} ### Assistant: ' quantized_by: TheBloke thumbnail: https://huggingface.co/HyperbeeAI/Tulpar-7b-v0/resolve/main/tulpar.png --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Tulpar 7B v0 - GGUF - Model creator: [HyperbeeAI](https://huggingface.co/HyperbeeAI) - Original model: [Tulpar 7B v0](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0) <!-- description start --> ## Description This repo contains GGUF format model files for [HyperbeeAI's Tulpar 7B v0](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Tulpar-7B-v0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Tulpar-7B-v0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF) * [HyperbeeAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/HyperbeeAI/Tulpar-7b-v0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant-Hashes ``` ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [tulpar-7b-v0.Q2_K.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [tulpar-7b-v0.Q3_K_S.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [tulpar-7b-v0.Q3_K_M.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [tulpar-7b-v0.Q3_K_L.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [tulpar-7b-v0.Q4_0.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [tulpar-7b-v0.Q4_K_S.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [tulpar-7b-v0.Q4_K_M.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [tulpar-7b-v0.Q5_0.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [tulpar-7b-v0.Q5_K_S.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [tulpar-7b-v0.Q5_K_M.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [tulpar-7b-v0.Q6_K.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [tulpar-7b-v0.Q8_0.gguf](https://huggingface.co/TheBloke/Tulpar-7B-v0-GGUF/blob/main/tulpar-7b-v0.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Tulpar-7B-v0-GGUF and below it, a specific filename to download, such as: tulpar-7b-v0.q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Tulpar-7B-v0-GGUF tulpar-7b-v0.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Tulpar-7B-v0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Tulpar-7B-v0-GGUF tulpar-7b-v0.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m tulpar-7b-v0.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Tulpar-7B-v0-GGUF", model_file="tulpar-7b-v0.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: HyperbeeAI's Tulpar 7B v0 <p align="center"> <img src="https://huggingface.co/HyperbeeAI/Tulpar-7b-v0/resolve/main/tulpar.png" width="360" height="360" > </p> # Model Description Tulpar-7b is a LLama2-7b-based model trained by HyperbeeAI. 
Training is done on a filtered and preprocessed instruction finetuning dataset that includes GPT-4 generated and generally curated datasets like Airoboros and Platypus. # Example Usage Loading the model: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("HyperbeeAI/Tulpar-7b-v0") model = AutoModelForCausalLM.from_pretrained("HyperbeeAI/Tulpar-7b-v0", device_map="auto") ``` You can run inference with both of the following prompts: ```python input_text="What is deep learning?" prompt = f"### User: {input_text}\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512) print(tokenizer.decode(output[0])) ``` ```python input_text="What is deep learning?" prompt = f"Question: {input_text}\n\nAnswer:" inputs = tokenizer(prompt, return_tensors="pt") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=512) print(tokenizer.decode(output[0])) ``` # Evaluation Our offline HF Leaderboard evaluation results: |||| |:------:|:--------:|:-------:| |**Task**|**Metric**|**Value**| |*arc_challenge*|acc_norm|0.5614| |*hellaswag*|acc_norm|0.7901| |*mmlu*|acc_norm|0.5242| |*truthfulqa_mc*|mc2|0.5160| |**Average**|-|**0.5979**|| Other GPT4All evaluation results: |||| |:------:|:--------:|:-------:| |**Task**|**Metric**|**Value**| |boolq|acc |0.8306| |piqa|acc |0.7905| | |acc_norm|0.7884| |winogrande|acc |0.7159| |openbookqa|acc |0.356| | |acc_norm|0.448| |**Average** (including HF leaderboard datasets) | | **0.6468** | BigBenchHard results: |||| |:------:|:--------:|:-------:| |**Task**|**Metric**|**Value**| |bigbench_causal_judgement |multiple_choice_grade|0.6105| |bigbench_date_understanding |multiple_choice_grade|0.6423| |bigbench_disambiguation_qa |multiple_choice_grade|0.3643| |bigbench_dyck_languages |multiple_choice_grade|0.2000| |bigbench_formal_fallacies_syllogisms_negation |multiple_choice_grade|0.5002| |bigbench_geometric_shapes |multiple_choice_grade|0.0000| | |exact_str_match |0.0000| |bigbench_hyperbaton |multiple_choice_grade|0.6754| |bigbench_logical_deduction_five_objects |multiple_choice_grade|0.2700| |bigbench_logical_deduction_seven_objects |multiple_choice_grade|0.1929| |bigbench_logical_deduction_three_objects |multiple_choice_grade|0.4133| |bigbench_movie_recommendation |multiple_choice_grade|0.3000| |bigbench_navigate |multiple_choice_grade|0.5000| |bigbench_reasoning_about_colored_objects |multiple_choice_grade|0.5750| |bigbench_ruin_names |multiple_choice_grade|0.3281| |bigbench_salient_translation_error_detection |multiple_choice_grade|0.2976| |bigbench_snarks |multiple_choice_grade|0.6022| |bigbench_sports_understanding |multiple_choice_grade|0.5122| |bigbench_temporal_sequences |multiple_choice_grade|0.1450| |bigbench_tracking_shuffled_objects_five_objects |multiple_choice_grade|0.1976| |bigbench_tracking_shuffled_objects_seven_objects|multiple_choice_grade|0.1440| |bigbench_tracking_shuffled_objects_three_objects|multiple_choice_grade|0.4133| |**Average**| |**0.3754** # Ethical Considerations and Limitations Tulpar is a technology with potential risks and limitations. This model is finetuned only in English and all language-related scenarios are not covered. As HyperbeeAI, we neither guarantee ethical, accurate, unbiased, objective responses nor endorse its outputs. Before deploying this model, you are advised to make safety tests for your use case. <!-- original-model-card end -->
TheBloke/guanaco-33B-GGUF
TheBloke
2023-09-27T12:52:46Z
418
3
transformers
[ "transformers", "gguf", "llama", "base_model:timdettmers/guanaco-33b-merged", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-19T23:37:28Z
--- license: other model_name: Guanaco 33B base_model: timdettmers/guanaco-33b-merged inference: false model_creator: Tim Dettmers model_type: llama prompt_template: '### Human: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Guanaco 33B - GGUF - Model creator: [Tim Dettmers](https://huggingface.co/timdettmers) - Original model: [Guanaco 33B](https://huggingface.co/timdettmers/guanaco-33b-merged) <!-- description start --> ## Description This repo contains GGUF format model files for [Tim Dettmers' Guanaco 33B](https://huggingface.co/timdettmers/guanaco-33b-merged). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/guanaco-33B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/guanaco-33B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-33B-GGUF) * [Tim Dettmers's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-33b-merged) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Guanaco ``` ### Human: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [guanaco-33b.Q2_K.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes | | [guanaco-33b.Q3_K_S.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss | | [guanaco-33b.Q3_K_M.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss | | [guanaco-33b.Q3_K_L.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss | | [guanaco-33b.Q4_0.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [guanaco-33b.Q4_K_S.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss | | [guanaco-33b.Q4_K_M.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended | | [guanaco-33b.Q5_0.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [guanaco-33b.Q5_K_S.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended | | [guanaco-33b.Q5_K_M.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended | | [guanaco-33b.Q6_K.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss | | [guanaco-33b.Q8_0.gguf](https://huggingface.co/TheBloke/guanaco-33B-GGUF/blob/main/guanaco-33b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/guanaco-33B-GGUF and below it, a specific filename to download, such as: guanaco-33b.Q4_K_M.gguf. Then click Download. 
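If you prefer to script the download rather than use one of the clients above, the same `huggingface-hub` package that powers the CLI commands below can be called directly from Python. The following is only a minimal sketch; the filename shown is an example - pick whichever quant you want from the Provided Files table.

```python
# Sketch: download a single GGUF file from Python with huggingface_hub.
# Assumes `pip3 install huggingface-hub` has been run; the filename is an example.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/guanaco-33B-GGUF",
    filename="guanaco-33b.Q4_K_M.gguf",  # any file from the Provided Files table
    local_dir=".",                       # save into the current directory
)
print(f"Downloaded to: {local_path}")
```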
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/guanaco-33B-GGUF guanaco-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/guanaco-33B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/guanaco-33B-GGUF guanaco-33b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m guanaco-33b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/guanaco-33B-GGUF", model_file="guanaco-33b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Tim Dettmers' Guanaco 33B No original model card was available. <!-- original-model-card end -->
TheBloke/Nethena-20B-GGUF
TheBloke
2023-10-30T20:11:05Z
418
6
transformers
[ "transformers", "gguf", "llama", "base_model:NeverSleep/Nethena-20B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-10-30T18:27:01Z
--- base_model: NeverSleep/Nethena-20B inference: false license: cc-by-nc-4.0 model_creator: NeverSleep model_name: Nethena 20B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nethena 20B - GGUF - Model creator: [NeverSleep](https://huggingface.co/NeverSleep) - Original model: [Nethena 20B](https://huggingface.co/NeverSleep/Nethena-20B) <!-- description start --> ## Description This repo contains GGUF format model files for [NeverSleep's Nethena 20B](https://huggingface.co/NeverSleep/Nethena-20B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nethena-20B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nethena-20B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nethena-20B-GGUF)
* [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Nethena-20B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->

<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's Nethena 20B](https://huggingface.co/NeverSleep/Nethena-20B).
<!-- licensing end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [nethena-20b.Q2_K.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes | | [nethena-20b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss | | [nethena-20b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss | | [nethena-20b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss | | [nethena-20b.Q4_0.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [nethena-20b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss | | [nethena-20b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended | | [nethena-20b.Q5_0.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [nethena-20b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended | | [nethena-20b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended | | [nethena-20b.Q6_K.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss | | [nethena-20b.Q8_0.gguf](https://huggingface.co/TheBloke/Nethena-20B-GGUF/blob/main/nethena-20b.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Nethena-20B-GGUF and below it, a specific filename to download, such as: nethena-20b.Q4_K_M.gguf. Then click Download. 
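If you would rather drive the download from Python than from a UI, `huggingface_hub.snapshot_download` can fetch only the files matching a pattern. This is just a sketch; the `allow_patterns` value mirrors the Q4_K pattern used in the CLI example further down and is only an illustration.

```python
# Sketch: fetch only the Q4_K quants of this repo from Python.
# Assumes huggingface-hub is installed; the pattern is an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Nethena-20B-GGUF",
    allow_patterns=["*Q4_K*.gguf"],  # download only files matching this pattern
    local_dir=".",
)
```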
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Nethena-20B-GGUF nethena-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Nethena-20B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nethena-20B-GGUF nethena-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m nethena-20b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
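A minimal llama-cpp-python sketch is shown first; it assumes you have already downloaded `nethena-20b.Q4_K_M.gguf` and that your llama-cpp-python build supports GPU offload (adjust `n_gpu_layers` for your hardware). The ctransformers route follows below.

```python
# Sketch: basic llama-cpp-python usage for this GGUF file.
# The file name, context size and layer count are examples - adjust for your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="./nethena-20b.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,        # matches the -c 4096 used in the llama.cpp example above
    n_gpu_layers=35,   # set to 0 if no GPU acceleration is available
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```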
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Nethena-20B-GGUF", model_file="nethena-20b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: NeverSleep's Nethena 20B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/NNs9LlIWuupi4KPLmTFtP.png)

# This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)!

Nethena-20B model. Use Alpaca format. Suitable for RP, ERP and general stuff.

What would happen if we combined all of our best models? Well... here it is, the holy grail: **Echidna v0.3** + **Athena v3** + **Nete**

This model also has a 13b version, you can check it out right [here](https://huggingface.co/NeverSleep/Nethena-13B).

[Recommended settings - No settings yet (Please suggest some over in the Community tab!)]

<!-- description start -->
## Description

<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->

This repo contains fp16 files of Nethena-20B.

[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Nethena-20B)

<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->

<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->

<!--[exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-20B-v0.3-8bpw-8h-exl2)-->

<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->

<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->

[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Nethena-20B-GGUF)

<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->

## Ratings:

Note: We have permission from all users to upload their ratings; I don't screenshot random reviews without asking if I can put them here!

No ratings yet!

If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC names are "ikaridev" and "undi".

<!-- description end -->
<!-- description start -->
## Models+loras used and recipe

- NeverSleep/Echidna-13b-v0.3
- IkariDev/Athena-v3
- Undi95/Nete-13B

<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

## Others

Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).

IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek

<!-- original-model-card end -->
TheBloke/SynthIA-7B-v2.0-16k-GGUF
TheBloke
2023-11-15T09:02:20Z
418
5
transformers
[ "transformers", "gguf", "mistral", "text-generation", "en", "base_model:NurtureAI/SynthIA-7B-v2.0-16k", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
2023-11-15T08:58:14Z
--- base_model: NurtureAI/SynthIA-7B-v2.0-16k inference: false language: - en library_name: transformers license: apache-2.0 model_creator: NurtureAI model_name: SynthIA 7B V2.0 16K model_type: mistral pipeline_tag: text-generation prompt_template: 'SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SynthIA 7B V2.0 16K - GGUF - Model creator: [NurtureAI](https://huggingface.co/NurtureAI) - Original model: [SynthIA 7B V2.0 16K](https://huggingface.co/NurtureAI/SynthIA-7B-v2.0-16k) <!-- description start --> ## Description This repo contains GGUF format model files for [NurtureAI's SynthIA 7B V2.0 16K](https://huggingface.co/NurtureAI/SynthIA-7B-v2.0-16k). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF)
* [NurtureAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NurtureAI/SynthIA-7B-v2.0-16k)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Synthia-CoT

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: {prompt}
ASSISTANT:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [synthia-7b-v2.0-16k.Q2_K.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [synthia-7b-v2.0-16k.Q3_K_S.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [synthia-7b-v2.0-16k.Q3_K_M.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [synthia-7b-v2.0-16k.Q3_K_L.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [synthia-7b-v2.0-16k.Q4_0.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [synthia-7b-v2.0-16k.Q4_K_S.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [synthia-7b-v2.0-16k.Q4_K_M.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [synthia-7b-v2.0-16k.Q5_0.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [synthia-7b-v2.0-16k.Q5_K_S.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [synthia-7b-v2.0-16k.Q5_K_M.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [synthia-7b-v2.0-16k.Q6_K.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [synthia-7b-v2.0-16k.Q8_0.gguf](https://huggingface.co/TheBloke/SynthIA-7B-v2.0-16k-GGUF/blob/main/synthia-7b-v2.0-16k.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/SynthIA-7B-v2.0-16k-GGUF and below it, a specific filename to download, such as: synthia-7b-v2.0-16k.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/SynthIA-7B-v2.0-16k-GGUF synthia-7b-v2.0-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/SynthIA-7B-v2.0-16k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SynthIA-7B-v2.0-16k-GGUF synthia-7b-v2.0-16k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m synthia-7b-v2.0-16k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.\nUSER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
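Before the ctransformers route below, here is a hedged llama-cpp-python sketch that wraps a question in the Synthia-CoT prompt template from this card. The file name, context size and generation settings are examples only, not recommendations from the model creator.

```python
# Sketch: llama-cpp-python with the Synthia-CoT prompt format used by this model.
# Assumes synthia-7b-v2.0-16k.Q4_K_M.gguf has been downloaded; settings are illustrative.
from llama_cpp import Llama

SYSTEM = (
    "Elaborate on the topic using a Tree of Thoughts and backtrack when necessary "
    "to construct a clear, cohesive Chain of Thought reasoning. "
    "Always answer without hesitation."
)

def synthia_prompt(user_message: str) -> str:
    # Matches the "Prompt template: Synthia-CoT" section above.
    return f"SYSTEM: {SYSTEM}\nUSER: {user_message}\nASSISTANT: "

llm = Llama(
    model_path="./synthia-7b-v2.0-16k.Q4_K_M.gguf",
    n_ctx=16384,       # this variant is extended to 16k context
    n_gpu_layers=35,   # set to 0 if no GPU offload is available
)

out = llm(synthia_prompt("How is insulin synthesized?"), max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```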
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/SynthIA-7B-v2.0-16k-GGUF", model_file="synthia-7b-v2.0-16k.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NurtureAI's SynthIA 7B V2.0 16K ## SynthIA 7B v2.0 extended to 16k context This is the same original model so the original model license still applies. This model has been modified to have a larger maximum context size of 16k. ### Original Model Card: ### Prompt format: ``` SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: How is insulin synthesized? ASSISTANT: ``` ### Code example: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch, json # model path model_path = "NurtureAI/SynthIA-7B-v2.0-16k" output_file_path = "./SynthIA-7B-v2.0-conversations.jsonl" device_map = {"": "cuda"} model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map=device_map, load_in_8bit=False, trust_remote_code=True, ) # tokenizer tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.75, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" json_data = {"prompt": user_input, "answer": answer} ## Save your conversation with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` <!-- original-model-card end -->
jzli/epiCPhotoGasm-last-unicorn
jzli
2024-05-16T14:16:39Z
418
2
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-16T05:25:15Z
You can run this model for free at: https://sinkin.ai/m/1kb1xAX

We also offer an API at low rates.
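For local use, the metadata above tags this repo as a diffusers `StableDiffusionPipeline`, so loading it with the standard diffusers API should look roughly like the sketch below. The prompt, dtype and device are assumptions for illustration, not settings recommended by the author.

```python
# Sketch: local text-to-image with diffusers, based on the StableDiffusionPipeline tag above.
# The prompt and settings are placeholders; see the sinkin.ai page for the author's examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jzli/epiCPhotoGasm-last-unicorn",
    torch_dtype=torch.float16,   # use torch.float32 on CPU
)
pipe = pipe.to("cuda")           # or "cpu" if no GPU is available

image = pipe("photo of a unicorn in a misty forest, golden hour").images[0]
image.save("epicphotogasm_sample.png")
```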
TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF
TheBloke
2023-12-12T14:35:49Z
418
14
transformers
[ "transformers", "gguf", "mistral", "en", "base_model:v1olet/v1olet_marcoroni-go-bruins-merge-7B", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-12-12T14:31:19Z
--- base_model: v1olet/v1olet_marcoroni-go-bruins-merge-7B inference: false language: - en license: apache-2.0 model_creator: Trong-Hieu Nguyen-Mau model_name: V1Olet Marcoroni Go Bruins Merge 7B model_type: mistral prompt_template: '### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # V1Olet Marcoroni Go Bruins Merge 7B - GGUF - Model creator: [Trong-Hieu Nguyen-Mau](https://huggingface.co/v1olet) - Original model: [V1Olet Marcoroni Go Bruins Merge 7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B) <!-- description start --> ## Description This repo contains GGUF format model files for [Trong-Hieu Nguyen-Mau's V1Olet Marcoroni Go Bruins Merge 7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF)
* [Trong-Hieu Nguyen-Mau's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca-InstructOnly2

```
### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
<summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [v1olet_marcoroni-go-bruins-merge-7b.Q2_K.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [v1olet_marcoroni-go-bruins-merge-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [v1olet_marcoroni-go-bruins-merge-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [v1olet_marcoroni-go-bruins-merge-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [v1olet_marcoroni-go-bruins-merge-7b.Q4_0.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [v1olet_marcoroni-go-bruins-merge-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [v1olet_marcoroni-go-bruins-merge-7b.Q5_0.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [v1olet_marcoroni-go-bruins-merge-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [v1olet_marcoroni-go-bruins-merge-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [v1olet_marcoroni-go-bruins-merge-7b.Q6_K.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [v1olet_marcoroni-go-bruins-merge-7b.Q8_0.gguf](https://huggingface.co/TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF/blob/main/v1olet_marcoroni-go-bruins-merge-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF and below it, a specific filename to download, such as: v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/v1olet_marcoroni-go-bruins-merge-7B-GGUF v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the variable CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "### Instruction:\n{prompt}\n\n### Response:", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./v1olet_marcoroni-go-bruins-merge-7b.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Trong-Hieu Nguyen-Mau's V1Olet Marcoroni Go Bruins Merge 7B ### 12th December 2023 We are ranked *6th* on the overall leaderboard and **1st** in the 7B leaderboard! 🔥🔥🔥 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63c06fba8d1175e3399c16e6/LbOzm-1EEBaSv4O1pssyh.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63c06fba8d1175e3399c16e6/Cv5x7rRuM46AlliFdoC_B.png) Merge AIDC-ai-business/Marcoroni-7B-v3 and rwitz/go-bruins-v2 using slerp merge from https://github.com/cg123/mergekit. 
*config.yaml*

```
slices:
  - sources:
      - model: AIDC-ai-business/Marcoroni-7B-v3
        layer_range: [0, 32]
      - model: rwitz/go-bruins-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```

You can use the Alpaca prompt template; a usage sketch follows below.

```
template_format = """{system}

### Instruction:
{prompt}

### Response:
"""
```

Developed by: Trong-Hieu Nguyen-Mau

<!-- original-model-card end -->
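As a quick illustration of the Alpaca-style template above, here is a minimal, untested sketch of formatting a prompt and generating with the unquantised model through `transformers`. The system string, sampling settings and token budget are placeholders chosen for the example, not values recommended by the author.

```python
# Sketch: fill the Alpaca-style template above and generate with transformers.
# System text and sampling settings are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v1olet/v1olet_marcoroni-go-bruins-merge-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

template_format = """{system}

### Instruction:
{prompt}

### Response:
"""

text = template_format.format(
    system="You are a helpful assistant.",
    prompt="Explain in one sentence what a SLERP merge of two models does.",
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```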
liujiarik/lim_base_zh
liujiarik
2023-12-22T08:46:48Z
418
1
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "mteb", "license:apache-2.0", "model-index", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2023-12-22T05:52:32Z
--- license: apache-2.0 tags: - mteb model-index: - name: lim_base_zh_v0 results: - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.66600000000001 - type: f1 value: 43.88121213919628 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 33.55469933811146 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 36.17977796122646 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 83.84687250720238 - type: mrr value: 86.34579365079364 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 84.7457752094449 - type: mrr value: 87.41591269841268 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 70.99218280216476 - type: cos_sim_ap value: 79.5838273070596 - type: cos_sim_f1 value: 73.01215092730762 - type: cos_sim_precision value: 67.09108716944172 - type: cos_sim_recall value: 80.07949497311199 - type: dot_accuracy value: 70.99218280216476 - type: dot_ap value: 79.58744690895374 - type: dot_f1 value: 73.01215092730762 - type: dot_precision value: 67.09108716944172 - type: dot_recall value: 80.07949497311199 - type: euclidean_accuracy value: 70.99218280216476 - type: euclidean_ap value: 79.5838273070596 - type: euclidean_f1 value: 73.01215092730762 - type: euclidean_precision value: 67.09108716944172 - type: euclidean_recall value: 80.07949497311199 - type: manhattan_accuracy value: 70.88394467829224 - type: manhattan_ap value: 79.42301231718942 - type: manhattan_f1 value: 72.72536687631029 - type: manhattan_precision value: 65.91297738932168 - type: manhattan_recall value: 81.10825344867898 - type: max_accuracy value: 70.99218280216476 - type: max_ap value: 79.58744690895374 - type: max_f1 value: 73.01215092730762 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 47.34128510965756 - type: f1 value: 35.49963469301016 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 85.66604127579738 - type: ap value: 53.038152290755555 - type: f1 value: 80.14685686902159 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 20.56449688140155 - type: mrr value: 19.60753968253968 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 72.38399462004035 - type: f1 value: 70.33023134666634 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 
metrics: - type: accuracy value: 74.87222595830531 - type: f1 value: 74.25722751562503 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 76.27000000000001 - type: f1 value: 75.9660773461064 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 67.35246345425013 - type: cos_sim_ap value: 69.69618171375657 - type: cos_sim_f1 value: 71.70665459483928 - type: cos_sim_precision value: 62.75752773375595 - type: cos_sim_recall value: 83.6325237592397 - type: dot_accuracy value: 67.35246345425013 - type: dot_ap value: 69.69618171375657 - type: dot_f1 value: 71.70665459483928 - type: dot_precision value: 62.75752773375595 - type: dot_recall value: 83.6325237592397 - type: euclidean_accuracy value: 67.35246345425013 - type: euclidean_ap value: 69.69618171375657 - type: euclidean_f1 value: 71.70665459483928 - type: euclidean_precision value: 62.75752773375595 - type: euclidean_recall value: 83.6325237592397 - type: manhattan_accuracy value: 66.81104493773688 - type: manhattan_ap value: 69.33781930832232 - type: manhattan_f1 value: 71.6342082980525 - type: manhattan_precision value: 59.78798586572438 - type: manhattan_recall value: 89.33474128827878 - type: max_accuracy value: 67.35246345425013 - type: max_ap value: 69.69618171375657 - type: max_f1 value: 71.70665459483928 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy value: 93.05 - type: ap value: 91.26069801777923 - type: f1 value: 93.04149818231389 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 65.74883739850293 - type: mrr value: 75.47326869136282 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: None metrics: - type: accuracy value: 53.269999999999996 - type: f1 value: 51.410630382886445 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 63.344532225921434 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 60.33437882010517 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 87.96000000000002 - type: ap value: 72.43737061465443 - type: f1 value: 86.48668399738767 --- ## Model Details Lim is a general text embedding model(chinese),We are continuously optimizing it. ## History 『2023-12-22』Published lim_base_zh_v0 model ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python model_name="liujiarik/lim_base_zh" from sentence_transformers import SentenceTransformer sentences = ['我换手机号了', '如果我换手机怎么办?'] model = SentenceTransformer(model_name) embeddings = model.encode(sentences) print(embeddings) ```
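A natural next step with an embedding model is scoring sentence similarity. The snippet below is a small, untested sketch that extends the usage example above with the standard `sentence-transformers` cosine-similarity utility; nothing here beyond the model name and example sentences comes from the original card.

```python
# Sketch: embed two sentences with this model and score their cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("liujiarik/lim_base_zh")
sentences = ['我换手机号了', '如果我换手机怎么办?']  # example sentences from the card

embeddings = model.encode(sentences, convert_to_tensor=True)
score = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"cosine similarity: {score:.4f}")
```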
Artefact2/Yi-34B-GGUF
Artefact2
2024-06-26T08:45:41Z
418
1
null
[ "gguf", "en", "license:apache-2.0", "region:us" ]
null
2024-02-09T02:06:26Z
--- language: - en license: apache-2.0 --- <img src="data:image/svg+xml;base64,PHN2ZyB3aWR0aD0iNjAwIiBoZWlnaHQ9IjYwMCIgdmlld0JveD0iMCAwIDYwMCA2MDAiIGZpbGw9Im5vbmUiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyI+CjxjaXJjbGUgY3g9IjMwMCIgY3k9IjMwMCIgcj0iMzAwIiBmaWxsPSIjMDAzNDI1Ii8+CjxyZWN0IHg9IjQwOS43MzMiIHk9IjM0MC4wMzEiIHdpZHRoPSI0Mi4zODYyIiBoZWlnaHQ9IjE1MS42NDgiIHJ4PSIyMS4xOTMxIiBmaWxsPSJ3aGl0ZSIvPgo8cGF0aCBmaWxsLXJ1bGU9ImV2ZW5vZGQiIGNsaXAtcnVsZT0iZXZlbm9kZCIgZD0iTTQyMi4wMDUgMTMzLjM1NEM0MTMuMDg5IDEyNS43NzEgMzk5LjcxNCAxMjYuODUxIDM5Mi4xMzEgMTM1Ljc2N0wyNzMuNjk5IDI3NS4wMjFDMjcwLjY0MyAyNzguNjE0IDI2OC45OTQgMjgyLjkzMiAyNjguNjk4IDI4Ny4zMDJDMjY4LjUzMiAyODguMzcxIDI2OC40NDYgMjg5LjQ2NiAyNjguNDQ2IDI5MC41ODFWNDY4LjYwM0MyNjguNDQ2IDQ4MC4zMDggMjc3LjkzNCA0ODkuNzk2IDI4OS42MzkgNDg5Ljc5NkMzMDEuMzQ0IDQ4OS43OTYgMzEwLjgzMiA0ODAuMzA4IDMxMC44MzIgNDY4LjYwM1YyOTYuNzg0TDQyNC40MTkgMTYzLjIyOEM0MzIuMDAyIDE1NC4zMTIgNDMwLjkyMSAxNDAuOTM3IDQyMi4wMDUgMTMzLjM1NFoiIGZpbGw9IndoaXRlIi8+CjxyZWN0IHg9IjExMy45NzIiIHk9IjEzNC4yNSIgd2lkdGg9IjQyLjM4NjIiIGhlaWdodD0iMTc0Ljc0NSIgcng9IjIxLjE5MzEiIHRyYW5zZm9ybT0icm90YXRlKC0zOS4zNDQxIDExMy45NzIgMTM0LjI1KSIgZmlsbD0id2hpdGUiLz4KPGNpcmNsZSBjeD0iNDYwLjEyNiIgY3k9IjI3OS4yNzgiIHI9IjI1LjkwMjciIGZpbGw9IiMwMEZGMjUiLz4KPC9zdmc+" /> These are GGUF quantized versions of [01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B). The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`. The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later. Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
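If a shell pipeline is inconvenient, the split parts can also be rejoined from Python with a plain binary concatenation. This is an untested sketch; the filenames are placeholders following the `foo-Q6_K.gguf.*` pattern described above, so adjust the glob to your actual download.

```python
# Sketch: binary-concatenate split GGUF parts back into a single file.
# "foo-Q6_K.gguf" is the placeholder name from the card - substitute your own files.
import glob
import shutil

parts = sorted(glob.glob("foo-Q6_K.gguf.*"))  # relies on zero-padded part suffixes sorting correctly
assert parts, "no split parts found"

with open("foo-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)

print(f"wrote foo-Q6_K.gguf from {len(parts)} parts")
```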
aarashfeizi/sdxl_ps
aarashfeizi
2024-04-29T05:06:18Z
418
2
diffusers
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-02-12T19:08:15Z
---
tags:
- autotrain
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of jfgodbout
license: openrail++
---

# AutoTrain SDXL LoRA DreamBooth - aarashfeizi/sdxl_ps

<Gallery />

## Model description

These are aarashfeizi/sdxl_ps LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: True.

Special VAE used for training: None.

## Trigger words

You should use `A photo of jfgodbout` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/aarashfeizi/sdxl_ps/tree/main) them in the Files & versions tab.
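The card ships no usage code, so here is a minimal, untested sketch of applying these LoRA weights on top of the SDXL base model with `diffusers` and prompting with the trigger phrase. The dtype, step count and prompt wording beyond the trigger phrase are assumptions.

```python
# Sketch: load the SDXL base model, apply this LoRA, and generate with the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Apply the DreamBooth LoRA adaptation weights from this repository.
pipe.load_lora_weights("aarashfeizi/sdxl_ps")

# The trigger phrase "A photo of jfgodbout" activates the learned subject.
image = pipe("A photo of jfgodbout at the beach, golden hour", num_inference_steps=30).images[0]
image.save("jfgodbout.png")
```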
janetsw/bas
janetsw
2024-06-22T03:21:00Z
418
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-02-16T04:27:52Z
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - janetsw/bas

These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1-base. You can find some example images below.
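As a usage sketch that is not part of the original card: the learned embedding can be loaded into the base pipeline with `diffusers`. The `<bas>` placeholder token below is purely hypothetical - check the embedding file in this repo for the token that was actually registered during training.

```python
# Sketch: load the textual inversion embedding on top of the SD 2.1 base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    torch_dtype=torch.float16,
).to("cuda")

# Loads the learned embedding from this repo; the placeholder token comes from the embedding file.
pipe.load_textual_inversion("janetsw/bas")

# "<bas>" is a hypothetical placeholder token - replace it with the token registered above.
image = pipe("a painting of <bas>, highly detailed", num_inference_steps=30).images[0]
image.save("bas_example.png")
```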
adowu/astral-128k
adowu
2024-04-10T04:49:29Z
418
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "astral", "128k", "conversational", "en", "base_model:adowu/a3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-10T04:22:52Z
--- base_model: - adowu/a3 - adowu/a1 - adowu/a2 library_name: transformers tags: - astral - 128k license: apache-2.0 language: - en pipeline_tag: text-generation --- # ASTRAL-128K ### Model Description The adowu/astral-128k model is a state-of-the-art language model based on the MistralForCausalLM architecture, designed for causal text generation. This means the model generates text sequentially, where each subsequent word is chosen based on the previous words in the sequence. Below is an overview of the key features and configuration of this model. ## Architectural Highlights The model is built on a foundation designed to understand and generate language by sequentially predicting the next piece of text based on the preceding context. Its architecture allows for deep understanding of language nuances, enabling the generation of text that is not only relevant but also rich in variety. ## Capabilities With its advanced architecture, the model can handle a wide range of language generation tasks. It is capable of generating long passages of text that maintain coherence and relevance throughout, making it suitable for tasks such as content creation, conversation generation, and more complex applications like summarization and translation in context-rich scenarios. ## Performance and Efficiency The model is optimized for both performance and efficiency, ensuring that it can process and generate text quickly while maintaining a high level of accuracy and coherence in the output. This balance makes it an excellent tool for developers and researchers looking for a reliable model for generating high-quality text. ## Application Scope Due to its versatile nature, the model finds applications across various domains, including but not limited to, automated storytelling, chatbots, and assistance systems, content generation for websites, and aiding in the creation of written content for educational and informational purposes. In summary, the adowu/astral-128k model represents a significant advancement in the field of natural language processing, offering a powerful tool for generating human-like text across a wide array of applications. - **Developed by:** aww - **Model type:** Mistral
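The card describes the model's capabilities but includes no inference code, so here is a minimal, untested sketch of loading it with `transformers` for causal generation. The dtype, device placement and sampling settings are assumptions, not values specified by the author.

```python
# Sketch: basic causal text generation with this model via transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adowu/astral-128k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption; use float16 or float32 as your hardware allows
    device_map="auto",
)

prompt = "Write a short paragraph explaining what a 128k context window is useful for."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```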
mradermacher/Bagel-Hermes-2x34B-i1-GGUF
mradermacher
2024-06-29T06:15:35Z
418
0
transformers
[ "transformers", "gguf", "yi", "moe", "en", "base_model:Weyaxi/Bagel-Hermes-2x34B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T00:04:30Z
--- base_model: Weyaxi/Bagel-Hermes-2x34B language: - en library_name: transformers license: apache-2.0 license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE license_name: yi-license quantized_by: mradermacher tags: - yi - moe --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Weyaxi/Bagel-Hermes-2x34B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ1_S.gguf) | i1-IQ1_S | 12.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ1_M.gguf) | i1-IQ1_M | 14.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 16.3 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 18.1 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ2_S.gguf) | i1-IQ2_S | 18.8 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ2_M.gguf) | i1-IQ2_M | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q2_K.gguf) | i1-Q2_K | 22.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 23.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 25.1 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 26.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ3_S.gguf) | i1-IQ3_S | 26.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ3_M.gguf) | i1-IQ3_M | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 29.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 31.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 32.6 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q4_0.gguf) | i1-Q4_0 | 34.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q4_K_S.gguf) | 
i1-Q4_K_S | 34.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 36.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 42.0 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 43.2 | | | [GGUF](https://huggingface.co/mradermacher/Bagel-Hermes-2x34B-i1-GGUF/resolve/main/Bagel-Hermes-2x34B.i1-Q6_K.gguf) | i1-Q6_K | 50.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
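For readers who only want a single file from the table above, one quant can be fetched programmatically. This is an untested sketch using `huggingface_hub`, with the i1-Q4_K_M file (flagged "fast, recommended" above) as the example.

```python
# Sketch: download a single imatrix quant from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Bagel-Hermes-2x34B-i1-GGUF",
    filename="Bagel-Hermes-2x34B.i1-Q4_K_M.gguf",  # the "fast, recommended" quant from the table
)
print(f"downloaded to {path}")
```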
RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf
RichardErkhov
2024-05-11T22:27:01Z
418
0
null
[ "gguf", "region:us" ]
null
2024-05-11T20:00:10Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Hermes-2-Pro-Llama-3-8B - GGUF - Model creator: https://huggingface.co/NousResearch/ - Original model: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Hermes-2-Pro-Llama-3-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Hermes-2-Pro-Llama-3-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Hermes-2-Pro-Llama-3-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Hermes-2-Pro-Llama-3-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Hermes-2-Pro-Llama-3-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.IQ3_M.gguf) | IQ3_M | 3.53GB | | [Hermes-2-Pro-Llama-3-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Hermes-2-Pro-Llama-3-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Hermes-2-Pro-Llama-3-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Hermes-2-Pro-Llama-3-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Hermes-2-Pro-Llama-3-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Hermes-2-Pro-Llama-3-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Hermes-2-Pro-Llama-3-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Hermes-2-Pro-Llama-3-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Hermes-2-Pro-Llama-3-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Hermes-2-Pro-Llama-3-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Hermes-2-Pro-Llama-3-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Hermes-2-Pro-Llama-3-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q5_K_S.gguf) | Q5_K_S | 
5.21GB | | [Hermes-2-Pro-Llama-3-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Hermes-2-Pro-Llama-3-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Hermes-2-Pro-Llama-3-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Hermes-2-Pro-Llama-3-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Hermes-2-Pro-Llama-3-8B-gguf/blob/main/Hermes-2-Pro-Llama-3-8B.Q6_K.gguf) | Q6_K | 6.14GB | Original model description: --- base_model: NousResearch/Meta-Llama-3-8B tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl model-index: - name: Hermes-2-Pro-Llama-3-8B results: [] language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro messages: - role: system content: >- You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: >- Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. --- # Hermes 2 Pro - Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ggO2sBDJ8Bhc6w-zwTx5j.png) ## Model Description Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house. This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation. Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below. This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now. This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling ## Example Outputs ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ll2j2wkQffCsiSwUjfRUq.png) ### Write the plot for a story where anime became real life: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/h_7aXGXdm2p2ONYuDF4Ii.png) ### Coding Assistance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/bBd0hyAb8w5rKUiN2w1I6.png) # Prompt Format Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. 
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns. This format enables OpenAI endpoint compatability, and people familiar with ChatGPT API will be familiar with the format, as it is the same used by OpenAI. Prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. 
It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/vOYv9wJUMn1Xrf4BvmO_x.png) ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5520|± |0.0145| | | |acc_norm|0.5887|± |0.0144| |arc_easy | 0|acc |0.8350|± |0.0076| | | |acc_norm|0.8123|± |0.0080| |boolq | 1|acc |0.8584|± |0.0061| |hellaswag | 0|acc |0.6265|± |0.0048| | | |acc_norm|0.8053|± |0.0040| |openbookqa | 0|acc |0.3800|± |0.0217| | | |acc_norm|0.4580|± |0.0223| |piqa | 0|acc |0.8003|± |0.0093| | | |acc_norm|0.8118|± |0.0091| |winogrande | 0|acc |0.7490|± |0.0122| ``` Average: 72.62 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.2520|± |0.0273| | | |acc_norm|0.2559|± |0.0274| |agieval_logiqa_en | 0|acc |0.3548|± |0.0188| | | |acc_norm|0.3625|± |0.0189| |agieval_lsat_ar | 0|acc |0.1826|± |0.0255| | | |acc_norm|0.1913|± |0.0260| |agieval_lsat_lr | 0|acc |0.5510|± |0.0220| | | |acc_norm|0.5255|± |0.0221| |agieval_lsat_rc | 0|acc |0.6431|± |0.0293| | | |acc_norm|0.6097|± |0.0298| |agieval_sat_en | 0|acc |0.7330|± |0.0309| | | |acc_norm|0.7039|± |0.0319| |agieval_sat_en_without_passage| 0|acc |0.4029|± |0.0343| | | |acc_norm|0.3689|± |0.0337| |agieval_sat_math | 0|acc |0.3909|± |0.0330| | | |acc_norm|0.3773|± |0.0328| ``` Average: 42.44 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|± |0.0360| |bigbench_date_understanding | 0|multiple_choice_grade|0.6667|± |0.0246| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|± |0.0290| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|± |0.0201| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|± |0.0152| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|± |0.0288| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|± 
|0.0222| |bigbench_navigate | 0|multiple_choice_grade|0.4980|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|± |0.0102| |bigbench_ruin_names | 0|multiple_choice_grade|0.4688|± |0.0236| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|± |0.0126| |bigbench_snarks | 0|multiple_choice_grade|0.7403|± |0.0327| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|± |0.0159| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|± |0.0158| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|± |0.0119| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|± |0.0086| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|± |0.0288| ``` Average: 43.55 ## TruthfulQA: ``` | Task |Version|Metric|Value| |Stderr| |-------------|------:|------|----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.410|± |0.0172| | | |mc2 |0.578|± |0.0157| ``` # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Pro-Llama-3-8B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. 
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png)

## Quantized Versions:

GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF

# How to cite:

```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
      url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
      title={Hermes-2-Pro-Llama-3-8B},
      author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
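As a small companion to the function-calling format documented above, the following untested sketch shows one way to pull `<tool_call>` JSON payloads out of a generated response. It is an illustration only; the official parsing and prompting utilities live in the Hermes-Function-Calling repository linked earlier.

```python
# Sketch: extract JSON payloads from <tool_call> ... </tool_call> blocks in model output.
# Illustrative only - see the Hermes-Function-Calling repo for the official tooling.
import json
import re

TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list:
    """Return a list of {'name': ..., 'arguments': ...} dicts found in the text."""
    calls = []
    for match in TOOL_CALL_RE.finditer(text):
        try:
            calls.append(json.loads(match.group(1)))
        except json.JSONDecodeError:
            pass  # skip malformed blocks
    return calls

example = '<tool_call>\n{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n</tool_call>'
print(extract_tool_calls(example))
```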
thesven/Phi-3-medium-128k-instruct-GGUF
thesven
2024-05-22T17:25:30Z
418
0
null
[ "gguf", "nlp", "code", "text-generation", "en", "license:mit", "region:us" ]
text-generation
2024-05-22T05:50:07Z
--- license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Description This repo contains GGUF quantized versions of the Microsoft Phi 3 medium 128k instruct model. They are supplied in different quantizations so that you can see what works best on the hardware you would like to run it on. The repo contains quantizations in the following types: - Q4_0 - Q4_1 - Q4_K - Q4_K_S - Q4_K_M - Q5_0 - Q5_1 - Q5_K - Q5_K_M - Q5_K_S - Q6_K - Q8_0 - Q2_K - Q3_K - Q3_K_S - Q3_K_XS - IQ2_K - IQ3_S - IQ3_XXS - IQ4_NL - IQ4_XS - IQ5_K - IQ2_S - IQ2_XS - IQ1_S <div style="text-align: center;"> <a href="https://github.com/thesven/GGUF-n-Go"> <img src="https://github.com/thesven/GGUF-n-Go/blob/main/assets/quantized_with.png?raw=true" alt="image/png" style="max-width: 350px;"> </a> </div> ## Model Summary The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets. This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family with the Mini version in two variants [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) which is the context length (in tokens) that it can support. After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters. Resources and Technical Documentation: + [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april) + [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) + [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) + Phi-3 ONNX: [128K](https://aka.ms/Phi3-mini-128k-instruct-onnx) ## Intended Uses **Primary use cases** The model is intended for commercial and research use in English. The model provides uses for applications which require: 1) Memory/compute constrained environments 2) Latency bound scenarios 3) Strong reasoning (especially code, math and logic) Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features. **Use case considerations** Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. ### Tokenizer Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. 
The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size. ### Chat Format Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion<|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>`. In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ## Responsible AI Considerations Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include: + Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. + Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases. + Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case. + Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated. + Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses. 
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include: + Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques. + High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context. + Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG). + Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case. + Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. ## Training ### Model * Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines. * Inputs: Text. It is best suited for prompts using chat format. * Context length: 128K tokens * GPUs: 512 H100-80G * Training time: 7 days * Training data: 3.3T tokens * Outputs: Generated text in response to the input * Dates: Our models were trained between February and April 2024 * Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models. ### Datasets Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of 1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code; 2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.); 3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness. ## License The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE). ## Trademarks This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). 
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
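## Quick local test with llama.cpp

A minimal sketch of running one of the quantizations listed above with `llama-cli`, using the chat format shown earlier. The `.gguf` file name below is a placeholder rather than the actual name in this repo; check the repository's file list and substitute whichever quantization you downloaded.

```bash
# Placeholder file name: use the actual .gguf file for whichever quantization type you picked above.
llama-cli -m ./phi-3-128k-instruct-Q4_K_M.gguf -c 4096 \
  -p "<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
"
```

Generation normally stops at the `<|end|>` token; if output runs on, configure it as an explicit stop string in your client.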
gaianet/Phi-3-mini-128k-instruct-GGUF
gaianet
2024-05-22T06:56:03Z
418
2
transformers
[ "transformers", "gguf", "phi3", "text-generation", "nlp", "code", "custom_code", "en", "base_model:microsoft/Phi-3-mini-128k-instruct", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-22T06:26:48Z
--- base_model: microsoft/Phi-3-mini-128k-instruct license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE language: - en pipeline_tag: text-generation model_creator: Microsoft model_name: Phi 3 mini 128k instruct model_type: phi-msft quantized_by: Second State Inc. tags: - nlp - code --- ![](https://github.com/GaiaNet-AI/.github/assets/45785633/d6976adc-f97d-4f86-a648-0f2f5c8e7eee) # Phi-3-mini-128k-instruct-GGUF ## Original Model [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ## Run with Gaianet **Prompt template** prompt template: `phi-3-chat` **Context size** chat_ctx_size: `3072` **Run with GaiaNet** - Quick start: https://docs.gaianet.ai/node-guide/quick-start - Customize your node: https://docs.gaianet.ai/node-guide/customize ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Phi-3-mini-128k-instruct-Q2_K.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q2_K.gguf) | Q2_K | 2 | 1.42 GB| smallest, significant quality loss - not recommended for most purposes | | [Phi-3-mini-128k-instruct-Q3_K_L.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 2.09 GB| small, substantial quality loss | | [Phi-3-mini-128k-instruct-Q3_K_M.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 1.96 GB| very small, high quality loss | | [Phi-3-mini-128k-instruct-Q3_K_S.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 1.68 GB| very small, high quality loss | | [Phi-3-mini-128k-instruct-Q4_0.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q4_0.gguf) | Q4_0 | 4 | 2.18 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Phi-3-mini-128k-instruct-Q4_K_M.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 2.39 GB| medium, balanced quality - recommended | | [Phi-3-mini-128k-instruct-Q4_K_S.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 2.19 GB| small, greater quality loss | | [Phi-3-mini-128k-instruct-Q5_0.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q5_0.gguf) | Q5_0 | 5 | 2.64 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Phi-3-mini-128k-instruct-Q5_K_M.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 2.82 GB| large, very low quality loss - recommended | | [Phi-3-mini-128k-instruct-Q5_K_S.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 2.64 GB| large, low quality loss - recommended | | [Phi-3-mini-128k-instruct-Q6_K.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q6_K.gguf) | Q6_K | 6 | 3.14 GB| very large, extremely low quality loss | | [Phi-3-mini-128k-instruct-Q8_0.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-Q8_0.gguf) | Q8_0 | 8 | 4.06 GB| very large, extremely low quality loss - not 
recommended | | [Phi-3-mini-128k-instruct-f16.gguf](https://huggingface.co/gaianet/Phi-3-mini-128k-instruct-GGUF/blob/main/Phi-3-mini-128k-instruct-f16.gguf) | f16 | 16 | 7.64 GB| | *Quantized with llama.cpp b2961.*
mradermacher/HelpingAI-9B-i1-GGUF
mradermacher
2024-06-16T22:59:27Z
418
0
transformers
[ "transformers", "gguf", "HelpingAI", "Emotionally Intelligent", "EQ", "en", "dataset:OEvortex/SentimentSynth", "dataset:OEvortex/EmotionalIntelligence-10K", "base_model:OEvortex/HelpingAI-9B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-22T06:33:55Z
--- base_model: OEvortex/HelpingAI-9B datasets: - OEvortex/SentimentSynth - OEvortex/EmotionalIntelligence-10K language: - en library_name: transformers license: other license_link: LICENSE.md license_name: helpingai quantized_by: mradermacher tags: - HelpingAI - Emotionally Intelligent - EQ --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/OEvortex/HelpingAI-9B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/HelpingAI-9B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.4 | fast, 
recommended | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/HelpingAI-9B-i1-GGUF/resolve/main/HelpingAI-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.3 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
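## Quick start (sketch)

As a concrete starting point for the Usage note above, here is a minimal, hedged example of fetching the quant marked "fast, recommended" in the table and running it with llama.cpp. It assumes you have `huggingface_hub` (for `huggingface-cli`) and a llama.cpp build installed; adjust context size and prompt as needed.

```bash
# Download the i1-Q4_K_M file listed in the table above, then run it locally.
huggingface-cli download mradermacher/HelpingAI-9B-i1-GGUF HelpingAI-9B.i1-Q4_K_M.gguf --local-dir .
llama-cli -m HelpingAI-9B.i1-Q4_K_M.gguf -c 4096 -p "Hello! How are you feeling today?"
```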
RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf
RichardErkhov
2024-05-24T02:26:53Z
418
0
null
[ "gguf", "region:us" ]
null
2024-05-24T00:04:47Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) K2S3-Mistral-7b-v1.1 - GGUF - Model creator: https://huggingface.co/Changgil/ - Original model: https://huggingface.co/Changgil/K2S3-Mistral-7b-v1.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [K2S3-Mistral-7b-v1.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q2_K.gguf) | Q2_K | 2.6GB | | [K2S3-Mistral-7b-v1.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.IQ3_XS.gguf) | IQ3_XS | 2.89GB | | [K2S3-Mistral-7b-v1.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.IQ3_S.gguf) | IQ3_S | 3.04GB | | [K2S3-Mistral-7b-v1.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3.02GB | | [K2S3-Mistral-7b-v1.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.IQ3_M.gguf) | IQ3_M | 3.14GB | | [K2S3-Mistral-7b-v1.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q3_K.gguf) | Q3_K | 3.35GB | | [K2S3-Mistral-7b-v1.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3.35GB | | [K2S3-Mistral-7b-v1.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3.64GB | | [K2S3-Mistral-7b-v1.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.IQ4_XS.gguf) | IQ4_XS | 3.76GB | | [K2S3-Mistral-7b-v1.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q4_0.gguf) | Q4_0 | 3.91GB | | [K2S3-Mistral-7b-v1.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.IQ4_NL.gguf) | IQ4_NL | 3.95GB | | [K2S3-Mistral-7b-v1.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q4_K_S.gguf) | Q4_K_S | 3.94GB | | [K2S3-Mistral-7b-v1.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q4_K.gguf) | Q4_K | 4.15GB | | [K2S3-Mistral-7b-v1.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4.15GB | | [K2S3-Mistral-7b-v1.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q4_1.gguf) | Q4_1 | 4.33GB | | [K2S3-Mistral-7b-v1.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q5_0.gguf) | Q5_0 | 4.75GB | | [K2S3-Mistral-7b-v1.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q5_K_S.gguf) | Q5_K_S | 4.75GB | | [K2S3-Mistral-7b-v1.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q5_K.gguf) | Q5_K | 4.87GB | | 
[K2S3-Mistral-7b-v1.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q5_K_M.gguf) | Q5_K_M | 4.87GB | | [K2S3-Mistral-7b-v1.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q5_1.gguf) | Q5_1 | 5.16GB | | [K2S3-Mistral-7b-v1.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q6_K.gguf) | Q6_K | 5.63GB | | [K2S3-Mistral-7b-v1.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Changgil_-_K2S3-Mistral-7b-v1.1-gguf/blob/main/K2S3-Mistral-7b-v1.1.Q8_0.gguf) | Q8_0 | 7.3GB | Original model description: --- license: cc-by-nc-4.0 language: - en - ko --- --- ## Developed by : * K2S3 ## Model Number: * K2S3-Mistral-7b-v1.1 ## Base Model : * mistralai/Mistral-7B-v0.1 ### Training Data * The training data for this model includes alpaca-gpt4-data, and samples from The OpenOrca Dataset. * 이 모델의 훈련 데이터에는 alpaca-gpt4-data, 그리고 OpenOrca Dataset에서 제공한 샘플들이 포함됩니다. ### Training Method * This model was fine-tuned on the "mistralai/Mistral-7B-v0.1" base model using a full parameter tuning method with SFT (Supervised Fine-Tuning). * 이 모델은 "mistralai/Mistral-7B-v0.1" 기반 모델을 SFT를 사용하여 전체 파라미터 조정 방법으로 미세조정되었습니다. ### Hardware * Hardware: Utilized two A100 (80G*2EA) GPUs for training. * Training Factors: This model was fine-tuned with SFT, using the HuggingFace SFTtrainer and applied fsdp. * 이 모델은 SFT를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다.
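### Illustrative SFT sketch

The training description above (full-parameter SFT of `mistralai/Mistral-7B-v0.1` with the Hugging Face `SFTTrainer` and FSDP) can be pictured roughly as below. This is not the authors' script: the data file, hyperparameters, and FSDP launch configuration are placeholders, and `SFTTrainer` keyword names vary between TRL versions.

```python
# Illustrative only: the actual K2S3 training code and data formatting are not published here.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder file: the card only says alpaca-gpt4 data plus OpenOrca samples were used,
# assumed here to be pre-formatted into a single "text" column.
dataset = load_dataset("json", data_files="alpaca_gpt4_plus_openorca_samples.json", split="train")

args = TrainingArguments(
    output_dir="k2s3-mistral-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    bf16=True,
    fsdp="full_shard auto_wrap",  # the card mentions FSDP; usually set up via an accelerate launch config
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes the data is already rendered to plain text
    max_seq_length=2048,
    packing=False,
)
trainer.train()
```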
Shengkun/LLama2-7B-Structural-Prune-2.25x
Shengkun
2024-06-05T07:19:53Z
418
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-03T16:13:51Z
--- license: apache-2.0 --- ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed]
afrideva/Qwen2-0.5B-GGUF
afrideva
2024-06-06T19:03:44Z
418
0
null
[ "gguf", "pretrained", "ggml", "quantized", "text-generation", "en", "base_model:Qwen/Qwen2-0.5B", "license:apache-2.0", "region:us" ]
text-generation
2024-06-06T18:59:29Z
--- base_model: Qwen/Qwen2-0.5B inference: true language: - en license: apache-2.0 model_creator: Qwen model_name: Qwen2-0.5B pipeline_tag: text-generation quantized_by: afrideva tags: - pretrained - gguf - ggml - quantized --- # Qwen2-0.5B-GGUF Quantized GGUF model files for [Qwen2-0.5B](https://huggingface.co/Qwen/Qwen2-0.5B) from [Qwen](https://huggingface.co/Qwen) ## Original Model Card: # Qwen2-0.5B ## Introduction Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model. Compared with the state-of-the-art opensource language models, including the previous released Qwen1.5, Qwen2 has generally surpassed most opensource models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting for language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. ## Requirements The code of Qwen2 has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ## Performance The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc. 
The datasets for evaluation include: **English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot) **Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript) **Math Tasks**: GSM8K (4-shot), MATH (4-shot) **Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot) **Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot) #### Qwen2-0.5B & Qwen2-1.5B performances | Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B | | :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: | |#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B | |MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** | |MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 | |Theorem QA | - | - | - |- | 8.9 | **15.0** | |HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 | |MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 | |GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** | |MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** | |BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 | |HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 | |Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 | |ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 | |TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** | |C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** | |CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** | ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
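## Loading the original model with `transformers` (sketch)

The GGUF files in this repo are meant for llama.cpp-compatible runtimes; if you want the original safetensors checkpoint instead, a minimal sketch is below. Per the Requirements note above, this needs `transformers>=4.37.0`, and as a base (non-instruct) model it is better suited to further post-training than to chat-style generation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B")

# Quick sanity check only; for real use, apply SFT/RLHF/continued pretraining as the card suggests.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```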
Jaidchen/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_NL-GGUF
Jaidchen
2024-06-20T04:52:34Z
418
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "de", "base_model:DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-06-20T04:52:11Z
--- base_model: DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1 language: - de library_name: transformers license: llama3 tags: - llama-cpp - gguf-my-repo --- # Jaidchen/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_NL-GGUF This model was converted to GGUF format from [`DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1`](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/DiscoResearch/Llama3-DiscoLeo-Instruct-8B-32k-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Jaidchen/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_NL-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Jaidchen/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_NL-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-iq4_nl-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Jaidchen/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_NL-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Jaidchen/Llama3-DiscoLeo-Instruct-8B-32k-v0.1-IQ4_NL-GGUF --hf-file llama3-discoleo-instruct-8b-32k-v0.1-iq4_nl-imat.gguf -c 2048 ```
Kitajiang/push_exam2-Q4_K_M-GGUF
Kitajiang
2024-06-20T05:30:31Z
418
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Kitajiang/push_exam2", "region:us" ]
null
2024-06-20T05:30:11Z
--- base_model: Kitajiang/push_exam2 tags: - llama-cpp - gguf-my-repo --- # Kitajiang/push_exam2-Q4_K_M-GGUF This model was converted to GGUF format from [`Kitajiang/push_exam2`](https://huggingface.co/Kitajiang/push_exam2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Kitajiang/push_exam2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Kitajiang/push_exam2-Q4_K_M-GGUF --hf-file push_exam2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Kitajiang/push_exam2-Q4_K_M-GGUF --hf-file push_exam2-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Kitajiang/push_exam2-Q4_K_M-GGUF --hf-file push_exam2-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Kitajiang/push_exam2-Q4_K_M-GGUF --hf-file push_exam2-q4_k_m.gguf -c 2048 ```
Ali-C137/Q2AW1M-0111
Ali-C137
2024-06-21T20:38:35Z
418
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-21T14:58:16Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF
jfer1015
2024-06-24T02:10:04Z
418
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
null
2024-06-24T02:09:41Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.3 license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -c 2048 ```
RichardErkhov/beberik_-_TinyExperts-v0-4x1B-gguf
RichardErkhov
2024-06-29T18:06:50Z
418
0
null
[ "gguf", "region:us" ]
null
2024-06-29T16:25:21Z
Entry not found
tliobnih/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF
tliobnih
2024-07-01T03:53:09Z
418
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "zh", "en", "base_model:MediaTek-Research/Breeze-7B-Instruct-v1_0", "license:apache-2.0", "region:us" ]
text-generation
2024-07-01T03:52:51Z
--- base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0 language: - zh - en license: apache-2.0 pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # tliobnih/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF This model was converted to GGUF format from [`MediaTek-Research/Breeze-7B-Instruct-v1_0`](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo tliobnih/Breeze-7B-Instruct-v1_0-Q4_K_M-GGUF --hf-file breeze-7b-instruct-v1_0-q4_k_m.gguf -c 2048 ```
DancingIguana/music-generation
DancingIguana
2022-06-13T16:48:57Z
417
23
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-06-11T20:47:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: music-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # music-generation This model a trained from scratch version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset where the text represents musical notes. The [dataset](https://www.kaggle.com/datasets/soumikrakshit/classical-music-midi) consists of one stream of notes from MIDI files (the stream with most notes), where all of the melodies were transposed either to C major or A minor. Also, the BPM of the song is ignored, the duration of each note is based on its quarter length. Each element in the melody is represented by a series of letters and numbers with the following structure. * For a note: ns[pitch of the note as a string]s[duration] * Examples: nsC4s0p25, nsF7s1p0, * For a rest: rs[duration]: * Examples: rs0p5, rs1q6 * For a chord: cs[number of notes in chord]s[pitches of chords separated by "s"]s[duration] * Examples: cs2sE7sF7s1q3, cs2sG3sGw3s0p25 The following special symbols are replaced in the strings by the following: * . = p * / = q * # = * - = t ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
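## Token encoding sketch

To make the encoding rules above concrete, here is a small, dependency-free sketch of how a note, rest, or chord maps to a token string. The replacement for the sharp sign `#` is missing from the list above; judging from the example token `cs2sG3sGw3s0p25` (apparently G3 plus G#3), it is assumed here to be `w` — treat that mapping as an assumption.

```python
# A sketch of the documented token scheme; the '#' -> 'w' replacement is an assumption (see above).
from fractions import Fraction

REPLACEMENTS = {".": "p", "/": "q", "#": "w", "-": "t"}

def _clean(text: str) -> str:
    for old, new in REPLACEMENTS.items():
        text = text.replace(old, new)
    return text

def _duration(quarter_length) -> str:
    # Durations are quarter lengths, e.g. 0.25 -> "0p25", Fraction(1, 6) -> "1q6".
    if isinstance(quarter_length, Fraction) and quarter_length.denominator != 1:
        return _clean(f"{quarter_length.numerator}/{quarter_length.denominator}")
    return _clean(str(float(quarter_length)))

def note_token(pitch: str, quarter_length) -> str:
    return f"ns{_clean(pitch)}s{_duration(quarter_length)}"

def rest_token(quarter_length) -> str:
    return f"rs{_duration(quarter_length)}"

def chord_token(pitches, quarter_length) -> str:
    return f"cs{len(pitches)}s" + "s".join(_clean(p) for p in pitches) + f"s{_duration(quarter_length)}"

print(note_token("C4", 0.25))            # nsC4s0p25
print(rest_token(Fraction(1, 6)))        # rs1q6
print(chord_token(["G3", "G#3"], 0.25))  # cs2sG3sGw3s0p25
```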
pablocosta/bertabaporu-base-uncased
pablocosta
2023-11-14T22:33:01Z
417
10
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "pt", "dataset:Twitter", "doi:10.57967/hf/0019", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-02T21:59:20Z
--- language: pt license: mit tags: - bert - pytorch datasets: - Twitter --- **Paper:** For more details, please refer to our paper: [BERTabaporu: Assessing a Genre-Specific Language Model for Portuguese NLP](https://aclanthology.org/2023.ranlp-1.24/) ## Introduction BERTabaporu is a Brazilian Portuguese BERT model in the Twitter domain. The model has been built from a collection of 238 million tweets written by over 100 thousand unique Twitter users, and conveying over 2.9 billion tokens in total. ## Available models | Model | Arch. | #Layers | #Params | | ---------------------------------------- | ---------- | ------- | ------- | | `pablocosta/bertabaporu-base-uncased` | BERT-Base | 12 | 110M | | `pablocosta/bertabaporu-large-uncased` | BERT-Large | 24 | 335M | ## Usage ```python from transformers import AutoTokenizer # Or BertTokenizer from transformers import AutoModelForPreTraining # Or BertForPreTraining for loading pretraining heads from transformers import AutoModel # or BertModel, for BERT without pretraining heads model = AutoModelForPreTraining.from_pretrained('pablocosta/bertabaporu-base-uncased') tokenizer = AutoTokenizer.from_pretrained('pablocosta/bertabaporu-base-uncased') ``` ## Cite us @inproceedings{costa-etal-2023-bertabaporu, title = "{BERT}abaporu: Assessing a Genre-Specific Language Model for {P}ortuguese {NLP}", author = "Costa, Pablo Botton and Pavan, Matheus Camasmie and Santos, Wesley Ramos and Silva, Samuel Caetano and Paraboni, Ivandr{\'e}", editor = "Mitkov, Ruslan and Angelova, Galia", booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing", month = sep, year = "2023", address = "Varna, Bulgaria", publisher = "INCOMA Ltd., Shoumen, Bulgaria", url = "https://aclanthology.org/2023.ranlp-1.24", pages = "217--223", abstract = "Transformer-based language models such as Bidirectional Encoder Representations from Transformers (BERT) are now mainstream in the NLP field, but extensions to languages other than English, to new domains and/or to more specific text genres are still in demand. In this paper we introduced BERTabaporu, a BERT language model that has been pre-trained on Twitter data in the Brazilian Portuguese language. The model is shown to outperform the best-known general-purpose model for this language in three Twitter-related NLP tasks, making a potentially useful resource for Portuguese NLP in general.", }
architext/gptj-162M
architext
2023-05-21T07:23:43Z
417
24
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "architecture", "design", "en", "dataset:THEODOROS/Architext_v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-08-22T00:04:16Z
--- license: apache-2.0 datasets: - THEODOROS/Architext_v1 language: - en pipeline_tag: text-generation tags: - architecture - design --- # Architext GPT-J 162M # Model Description Architext GPT-J-162M is a transformer model trained using Ben Wang's Mesh Transformer JAX on the Pile and finetuned specifically on a synthetically generated dataset of architectural layouts of apartments. It is capable of generating a large diversity of designs, in a convenient geometric representation that can be used downstream in different design workflows, using just a natural language prompt. The model consists of 12 layers with a model dimension of 768, and a feedforward dimension of 2048. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. # Training data GPT-J 162B was pre-trained on the Pile, a large-scale curated dataset created by EleutherAI. It was then finetuned on synthetically generated data that was procedurally generated using the Rhinocers/Grasshopper software suite. The model was finetuned for 1.25 billion tokens over 11,500 steps on TPU v3-8. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. # Intended Use and Limitations Architext models learn an inner representation of the architectural design that can be used to generate a larger diversity of geometric designs and can be useful for many downstream design workflows and tasks. While it could be adapted to many different design outputs, the model is best at generating residential floor plans given a natural language prompt. # How to use ```python This model can be easily loaded using the AutoModelForCausalLM functionality: from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("architext/gptj-162M") model = AutoModelForCausalLM.from_pretrained("architext/gptj-162M") ``` # Limitations and Biases The core functionality of Architext is taking a string of text and generating a design output, by still continuously predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work especially in the design context. Architext will often generate a design that is not semantically correct, depending on the prompt description it was given, although it almost always generates designs that are valid (non intersecting spaces, no orphan rooms). It is also limited within a small diversity of natural language prompts, specifically prompts that describe: * typology: "a house with two bedrooms and three bathrooms" or "a house with six rooms" * adjacency: "the bedroom is adjacent to the living room" or "the kitchen is not adjacent to the bathroom" * location: "the bedroom is in the north side of the house" or "a bedroom is in the south east side of the house" Of course, the designs that are generated are conceptual designs and one should never depend on Architext to directly generate accurate construction documentation. 
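# Example generation (sketch)

Complementing the loading snippet above, the sketch below shows plain `generate`-based sampling. The prompt wording follows the natural-language conditioning described in this card, but the exact prompt and layout-output conventions are not documented here, so treat the prompt string as an assumption.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("architext/gptj-162M")
model = AutoModelForCausalLM.from_pretrained("architext/gptj-162M")

prompt = "a house with two bedrooms and three bathrooms"  # conditioning text; exact format is an assumption
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_p=0.95, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```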
# Citation and Related Information ## BibTeX entry To cite this model: ``` @article{galanos2023architext, title={Architext: Language-Driven Generative Architecture Design}, author={Galanos, Theodoros and Liapis, Antonios and Yannakakis, Georgios N}, journal={arXiv preprint arXiv:2303.07519}, year={2023} } ``` To cite the codebase that trained this model: ``` @misc{mesh-transformer-jax, author = {Wang, Ben}, title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` # Acknowledgements This project would not have been possible without compute generously provided by Google through the TPU Research Cloud, which provided access to Cloud TPU VMs used to finetune this model.
DunnBC22/trocr-base-handwritten-OCR-handwriting_recognition_v2
DunnBC22
2023-08-25T03:15:17Z
417
7
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "generated_from_trainer", "image-to-text", "en", "endpoints_compatible", "region:us" ]
image-to-text
2023-04-17T00:13:38Z
--- tags: - generated_from_trainer model-index: - name: trocr-base-handwritten-OCR-handwriting_recognition_v2 results: [] language: - en metrics: - cer pipeline_tag: image-to-text --- # trocr-base-handwritten-OCR-handwriting_recognition_v2 This model is a fine-tuned version of [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten). It achieves the following results on the evaluation set: - Loss: 0.2470 - CER: 0.0360 ## Model description For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Optical%20Character%20Recognition%20(OCR)/Handwriting%20Recognition/Handwriting%20Recognition_v2/Mini%20Handwriting%20OCR%20Project.ipynb ## Intended uses & limitations This model is intended to demonstrate my ability to solve a complex problem using technology. You are welcome to test and experiment with this model, but it is at your own risk/peril. ## Training and evaluation data Dataset Source: https://www.kaggle.com/datasets/ssarkar445/handwriting-recognitionocr _Character Length for Training Dataset:_ ![Input Character Length for Training Dataset](https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/raw/main/Optical%20Character%20Recognition%20(OCR)/Handwriting%20Recognition/Images/Input%20Character%20Length%20Distribution%20for%20Training%20Dataset.png) _Character Length for Evaluation Dataset:_ ![Input Character Length for Evaluation Dataset](https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/raw/main/Optical%20Character%20Recognition%20(OCR)/Handwriting%20Recognition/Images/Input%20Characgter%20Length%20Distribution%20for%20Evaluation%20Dataset.png) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.4292 | 1.0 | 2500 | 0.4332 | 0.0679 | | 0.2521 | 2.0 | 5000 | 0.2767 | 0.0483 | | 0.1049 | 3.0 | 7500 | 0.2470 | 0.0360 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.12.1 - Datasets 2.8.0 - Tokenizers 0.12.1
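## Inference example (sketch)

A minimal inference sketch, assuming this fine-tune loads with the standard TrOCR classes the base model uses (a processor plus `VisionEncoderDecoderModel`). The image path is a placeholder; if processor files are not present in this repo, load the processor from `microsoft/trocr-base-handwritten` instead.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

model_id = "DunnBC22/trocr-base-handwritten-OCR-handwriting_recognition_v2"
processor = TrOCRProcessor.from_pretrained(model_id)  # fall back to the base model's processor if needed
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("handwritten_sample.png").convert("RGB")  # placeholder path
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```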
radames/stable-diffusion-x4-upscaler-img2img
radames
2023-05-16T22:01:10Z
417
18
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "image-to-image", "arxiv:2112.10752", "arxiv:2202.00512", "arxiv:1910.09700", "license:openrail++", "diffusers:StableDiffusionUpscalePipeline", "region:us" ]
image-to-image
2023-05-16T19:43:42Z
--- license: openrail++ tags: - stable-diffusion - image-to-image duplicated_from: stabilityai/stable-diffusion-x4-upscaler pipeline_tag: image-to-image --- # Stable Diffusion x4 upscaler model card This model card focuses on the model associated with the Stable Diffusion Upscaler, available [here](https://github.com/Stability-AI/stablediffusion). This model is trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml). ![Image](https://github.com/Stability-AI/stablediffusion/raw/main/assets/stable-samples/upscaling/merged-dog.png) - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `x4-upscaler-ema.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler/resolve/main/x4-upscaler-ema.ckpt). - Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler#examples) ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples Using the [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion 2 in a simple and efficient manner. 
```bash pip install diffusers transformers accelerate scipy safetensors ``` ```python import requests from PIL import Image from io import BytesIO from diffusers import StableDiffusionUpscalePipeline import torch # load model and scheduler model_id = "stabilityai/stable-diffusion-x4-upscaler" pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipeline = pipeline.to("cuda") # let's download an image url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png" response = requests.get(url) low_res_img = Image.open(BytesIO(response.content)).convert("RGB") low_res_img = low_res_img.resize((128, 128)) prompt = "a white cat" upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0] upscaled_image.save("upsampled_cat.png") ``` **Notes**: - Despite not being a dependency, we highly recommend you to install [xformers](https://github.com/facebookresearch/xformers) for memory efficient attention (better performance) - If you have low GPU RAM available, make sure to add a `pipe.enable_attention_slicing()` after sending it to `cuda` for less VRAM usage (to the cost of speed) # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, but applies in the same way to Stable Diffusion v2_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. - Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. 
- The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NFSW detector (see Training section). ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion vw was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-5B and subsets (details below). The training data is further filtered using LAION's NSFW detector, with a "p_unsafe" score of 0.1 (conservative). For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and reviewer discussions on the topic. **Training Procedure** Stable Diffusion v2 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through the OpenCLIP-ViT/H text-encoder. - The output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We also use the so-called _v-objective_, see https://arxiv.org/abs/2202.00512. We currently provide the following checkpoints: - `512-base-ema.ckpt`: 550k steps at resolution `256x256` on a subset of [LAION-5B](https://laion.ai/blog/laion-5b/) filtered for explicit pornographic material, using the [LAION-NSFW classifier](https://github.com/LAION-AI/CLIP-based-NSFW-Detector) with `punsafe=0.1` and an [aesthetic score](https://github.com/christophschuhmann/improved-aesthetic-predictor) >= `4.5`. 850k steps at resolution `512x512` on the same dataset with resolution `>= 512x512`. - `768-v-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for 150k steps using a [v-objective](https://arxiv.org/abs/2202.00512) on the same dataset. Resumed for another 140k steps on a `768x768` subset of our dataset. - `512-depth-ema.ckpt`: Resumed from `512-base-ema.ckpt` and finetuned for 200k steps. Added an extra input channel to process the (relative) depth prediction produced by [MiDaS](https://github.com/isl-org/MiDaS) (`dpt_hybrid`) which is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. 
- `512-inpainting-ema.ckpt`: Resumed from `512-base-ema.ckpt` and trained for another 200k steps. Follows the mask-generation strategy presented in [LAMA](https://github.com/saic-mdal/lama) which, in combination with the latent VAE representations of the masked image, is used as an additional conditioning. The additional input channels of the U-Net which process this extra information were zero-initialized. The same strategy was used to train the [1.5-inpainting checkpoint](https://github.com/saic-mdal/lama).
- `x4-upscaling-ema.ckpt`: Trained for 1.25M steps on a 10M subset of LAION containing images `>2048x2048`. The model was trained on crops of size `512x512` and is a text-guided [latent upscaling diffusion model](https://arxiv.org/abs/2112.10752). In addition to the textual input, it receives a `noise_level` as an input parameter, which can be used to add noise to the low-resolution input according to a [predefined diffusion schedule](configs/stable-diffusion/x4-upscaling.yaml).

- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 DDIM sampling steps show the relative improvements of the checkpoints:

![pareto](model-variants.jpg)

Evaluated using 50 DDIM steps and 10000 random prompts from the COCO2017 validation set, at 512x512 resolution. Not optimized for FID scores.

## Environmental Impact

**Stable Diffusion v1**
**Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 15000 kg CO2 eq.

## Citation

    @InProceedings{Rombach_2022_CVPR,
        author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
        title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2022},
        pages     = {10684-10695}
    }

*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
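Building on the usage example at the top of this card, the `noise_level` input described for `x4-upscaling-ema.ckpt` and the low-VRAM note can be exercised together. A minimal sketch, assuming the diffusers `StableDiffusionUpscalePipeline` used above accepts a `noise_level` argument in its call and supports `enable_attention_slicing()`:

```python
import requests
import torch
from io import BytesIO
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipeline = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
pipeline.enable_attention_slicing()  # lower VRAM usage, at the cost of speed

url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
low_res_img = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((128, 128))

# A higher noise_level adds more noise to the low-resolution input before upscaling,
# giving the model more freedom to invent detail (assumed default is around 20).
upscaled = pipeline(prompt="a white cat", image=low_res_img, noise_level=20).images[0]
upscaled.save("upsampled_cat_noise20.png")
```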
zhyemmmm/ToonYou
zhyemmmm
2023-06-06T07:23:13Z
417
3
diffusers
[ "diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-06T07:05:30Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image --- use to study
trajanson/textual_inversion_ralph_lauren_2
trajanson
2023-08-21T02:44:05Z
417
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-20T23:47:05Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - trajanson/textual_inversion_cat These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
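The card above stops short of showing how to apply the embedding. Below is a minimal, hedged sketch assuming the weights in this repo load via diffusers' `load_textual_inversion`; the `token` string is a hypothetical placeholder and must be replaced by the placeholder token the embedding was actually trained with.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo. The token below is a hypothetical
# placeholder -- use the placeholder_token that was set during training.
pipe.load_textual_inversion("trajanson/textual_inversion_ralph_lauren_2", token="<ralph-lauren-style>")

image = pipe("a jacket in <ralph-lauren-style>", num_inference_steps=30).images[0]
image.save("textual_inversion_sample.png")
```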
s3nh/Sentdex-WSB-GPT-13B-GGUF
s3nh
2023-09-01T08:09:40Z
417
0
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2023-09-01T07:51:55Z
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF Format model files for [This project](https://huggingface.co/Sentdex/WSB-GPT-13B).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

* Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
* Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
* mmap compatibility: models can be loaded using mmap for fast loading and saving.
* Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
* Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.

### Perplexity params

| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
| ----- | ---------- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |

### Inference

```python
from ctransformers import AutoModelForCausalLM

# Placeholders: point these at the directory and GGUF file you downloaded from this repo.
output_dir = "path/to/downloaded/model"
gguf_file = "name-of-downloaded-file.gguf"

# Load the quantised model; set gpu_layers=0 if no GPU acceleration is available.
llm = AutoModelForCausalLM.from_pretrained(output_dir, model_file=gguf_file, model_type="llama", gpu_layers=32)

manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input, max_new_tokens=256, temperature=0.9, top_p=0.7))
```

# Original model card
softwareweaver/photon
softwareweaver
2023-11-14T14:52:21Z
417
1
diffusers
[ "diffusers", "onnx", "text-to-image", "en", "license:creativeml-openrail-m", "diffusers:OnnxStableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-03T17:15:35Z
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image --- # Model Card for Model ID Olive Optimized Onnx model for https://civitai.com/models/84728 This model is being used by Fusion Quill - a Windows app that runs Stable Diffusion models locally. https://FusionQuill.AI
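The card does not include loading code. As a rough sketch (not taken from the card), and assuming the ONNX weights in this repo load with diffusers' `OnnxStableDiffusionPipeline`, generation could look like the following; the execution provider is an assumption and depends on your onnxruntime build (e.g. `DmlExecutionProvider` for DirectML on Windows, or `CPUExecutionProvider`).

```python
from diffusers import OnnxStableDiffusionPipeline

# Provider choice is an assumption; pick one that matches your onnxruntime install.
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "softwareweaver/photon", provider="DmlExecutionProvider"
)

image = pipe("photo of a lighthouse at sunset, highly detailed", num_inference_steps=25).images[0]
image.save("photon_sample.png")
```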
TheBloke/LLaMA-65B-GGUF
TheBloke
2023-09-20T09:03:47Z
417
4
transformers
[ "transformers", "gguf", "llama", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-20T02:33:10Z
--- base_model: https://ai.meta.com/blog/large-language-model-llama-meta-ai inference: false license: other model_creator: Meta model_name: LLaMA 65B model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # LLaMA 65B - GGUF - Model creator: [Meta](https://huggingface.co/none) - Original model: [LLaMA 65B](https://ai.meta.com/blog/large-language-model-llama-meta-ai) <!-- description start --> ## Description This repo contains GGUF format model files for [Meta's LLaMA 65B](https://ai.meta.com/blog/large-language-model-llama-meta-ai). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA-65B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA-65B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-65B-GGUF) * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-65b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-65b.Q2_K.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q2_K.gguf) | Q2_K | 2 | 27.04 GB| 29.54 GB | smallest, significant quality loss - not recommended for most purposes | | [llama-65b.Q3_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q3_K_S.gguf) | Q3_K_S | 3 | 28.16 GB| 30.66 GB | very small, high quality loss | | [llama-65b.Q3_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q3_K_M.gguf) | Q3_K_M | 3 | 31.56 GB| 34.06 GB | very small, high quality loss | | [llama-65b.Q3_K_L.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q3_K_L.gguf) | Q3_K_L | 3 | 34.65 GB| 37.15 GB | small, substantial quality loss | | [llama-65b.Q4_0.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q4_0.gguf) | Q4_0 | 4 | 36.80 GB| 39.30 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama-65b.Q4_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q4_K_S.gguf) | Q4_K_S | 4 | 36.92 GB| 39.42 GB | small, greater quality loss | | [llama-65b.Q4_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q4_K_M.gguf) | Q4_K_M | 4 | 39.35 GB| 41.85 GB | medium, balanced quality - recommended | | [llama-65b.Q5_0.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q5_0.gguf) | Q5_0 | 5 | 44.92 GB| 47.42 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama-65b.Q5_K_S.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q5_K_S.gguf) | Q5_K_S | 5 | 44.92 GB| 47.42 GB | large, low quality loss - recommended | | [llama-65b.Q5_K_M.gguf](https://huggingface.co/TheBloke/LLaMA-65B-GGUF/blob/main/llama-65b.Q5_K_M.gguf) | Q5_K_M | 5 | 46.24 GB| 48.74 GB | large, very low quality loss - recommended | | llama-65b.Q6_K.gguf | Q6_K | 6 | 53.56 GB| 56.06 GB | very large, extremely low quality loss | | llama-65b.Q8_0.gguf | Q8_0 | 8 | 69.37 GB| 71.87 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `llama-65b.Q6_K.gguf-split-a` * `llama-65b.Q6_K.gguf-split-b` ### q8_0 Please download: * `llama-65b.Q8_0.gguf-split-a` * `llama-65b.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat llama-65b.Q6_K.gguf-split-* > llama-65b.Q6_K.gguf && rm llama-65b.Q6_K.gguf-split-* cat llama-65b.Q8_0.gguf-split-* > llama-65b.Q8_0.gguf && rm llama-65b.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B llama-65b.Q6_K.gguf-split-a + llama-65b.Q6_K.gguf-split-b llama-65b.Q6_K.gguf del llama-65b.Q6_K.gguf-split-a llama-65b.Q6_K.gguf-split-b COPY /B llama-65b.Q8_0.gguf-split-a + llama-65b.Q8_0.gguf-split-b llama-65b.Q8_0.gguf del llama-65b.Q8_0.gguf-split-a llama-65b.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/LLaMA-65B-GGUF and below it, a specific filename to download, such as: llama-65b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/LLaMA-65B-GGUF llama-65b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/LLaMA-65B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA-65B-GGUF llama-65b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m llama-65b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. 
For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMA-65B-GGUF", model_file="llama-65b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Meta's LLaMA 65B This contains the weights for the LLaMA-65b model. This model is under a non-commercial license (see the LICENSE file). You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or got some trouble converting them to the Transformers format. <!-- original-model-card end -->
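The README above lists llama-cpp-python among the supported libraries but only shows ctransformers code. A minimal, hedged sketch using llama-cpp-python's `Llama` class (the file name comes from the provided-files table; a completed local download is assumed):

```python
from llama_cpp import Llama

# Assumes llama-65b.Q4_K_M.gguf has already been downloaded to the current directory.
llm = Llama(
    model_path="llama-65b.Q4_K_M.gguf",
    n_ctx=2048,       # context length, as in the llama.cpp example above
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)

output = llm("AI is going to", max_tokens=128, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```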
TheBloke/tulu-7B-GGUF
TheBloke
2023-09-27T12:53:39Z
417
1
transformers
[ "transformers", "gguf", "llama", "en", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "dataset:sahil2801/CodeAlpaca-20k", "arxiv:2306.04751", "arxiv:2302.13971", "arxiv:2304.07327", "base_model:allenai/tulu-7b", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-20T02:43:32Z
--- language: - en license: other datasets: - databricks/databricks-dolly-15k - OpenAssistant/oasst1 - sahil2801/CodeAlpaca-20k model_name: Tulu 7B base_model: allenai/tulu-7b inference: false model_creator: Allen Institute for AI model_type: llama prompt_template: '<|user|> {prompt} <|assistant|> ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Tulu 7B - GGUF - Model creator: [Allen Institute for AI](https://huggingface.co/allenai) - Original model: [Tulu 7B](https://huggingface.co/allenai/tulu-7b) <!-- description start --> ## Description This repo contains GGUF format model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/tulu-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/tulu-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGUF) * [Allen Institute for AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Tulu ``` <|user|> {prompt} <|assistant|> ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [tulu-7b.Q2_K.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [tulu-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [tulu-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [tulu-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [tulu-7b.Q4_0.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [tulu-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [tulu-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [tulu-7b.Q5_0.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [tulu-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [tulu-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [tulu-7b.Q6_K.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [tulu-7b.Q8_0.gguf](https://huggingface.co/TheBloke/tulu-7B-GGUF/blob/main/tulu-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/tulu-7B-GGUF and below it, a specific filename to download, such as: tulu-7b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/tulu-7B-GGUF tulu-7b.Q4_K_M.gguf --local-dir . 
--local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/tulu-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/tulu-7B-GGUF tulu-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m tulu-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|user|>\n{prompt}\n<|assistant|>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/tulu-7B-GGUF", model_file="tulu-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: Allen AI's Tulu 7B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Allen AI's Tulu 7B fp16 These files are pytorch format fp16 model files for [Allen AI's Tulu 7B](https://huggingface.co/allenai/tulu-7b). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/tulu-7B-fp16) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/tulu-7B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/tulu-7B-fp16) ## Prompt template The following template should be used: ``` <|user|> prompt goes here <|assistant|> ``` **Note**: There should be a newline after `<|assistant|>`. This appears to be very important for getting this model to respond correctly. In other words, the prompt is: ``` <|user|>\nprompt goes here\n<|assistant|>\n ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Allen AI's Tulu 7B # Tulu 7B This model is a 7B LLaMa model finetuned on a mixture of instruction datasets (FLAN V2, CoT, Dolly, Open Assistant 1, GPT4-Alpaca, Code-Alpaca, and ShareGPT). 
*Please note this is a model diff - see below for usage instructions*. This was trained as part of the paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751). The codebase used to train and evaluate this model can be found at [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct). This model is licensed under the AI model license given in LICENSE.txt along with the original Llama license (llama_license.txt). ## Usage We assume you have access to a LLaMa model in HF format already. You can find details on getting access and converting the model here: [https://huggingface.co/docs/transformers/main/model_doc/llama](https://huggingface.co/docs/transformers/main/model_doc/llama) Clone [https://github.com/allenai/open-instruct](https://github.com/allenai/open-instruct) and install the required dependencies, or just copy `scripts/weight_diff.py` and install the minimal requirements listed in `weight-diff-requirements.txt`. Then download or clone this model diff to the same machine. Then, run: ```bash python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location} ``` And you will have a recovered model! Note this takes up a decent amount of RAM, especially for the larger models. ## Input Format The model is trained to use the following format (note the newlines): ``` <|user|> Your message here! <|assistant|> ``` For best results, format all inputs in this manner. ## Performance Here is the performance of this model across benchmarks explored in our paper [How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources](https://arxiv.org/abs/2306.04751): | MMLU 0-shot | MMLU 5-shot | GSM Direct | GSM CoT | BBH Direct | BBH CoT | TydiQA Gold-Passage | TydiQA Closed-book | Codex-Eval Pass@1 | Codex-Eval Pass@10 | AlpacaFarm vs Davinci-003 | Average | |:-----------:|:-----------:|:----------:|:-------:|:----------:|:-------:|:-------------------:|:------------------:|:-----------------:|:------------------:|:-------------------------:|---------| | 44.5 | 47.0 | 6.0 | 27.0 | 38.1 | 39.2 | 45.7 | 7.7 | 17.5 | 27.8 | 48.3 | 33.1 | If you use this model, please cite our work, the llama paper, and the original datasets: ``` @misc{wang2023far, title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources}, author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. 
Smith and Iz Beltagy and Hannaneh Hajishirzi}, year={2023}, eprint={2306.04751}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie-Anne Lachaux and Timothée Lacroix and Baptiste Rozière and Naman Goyal and Eric Hambro and Faisal Azhar and Aurelien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample}, year={2023}, eprint={2302.13971}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{dolly, author = {Databricks}, title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {Blog post}, url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm} } ``` ``` @article{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Longpre, Shayne and Hou, Le and Vu, Tu and Webson, Albert and Chung, Hyung Won and Tay, Yi and Zhou, Denny and Le, Quoc V and Zoph, Barret and Wei, Jason and others}, journal={arXiv preprint arXiv:2301.13688}, year={2023} } ``` ``` @misc{köpf2023openassistant, title={OpenAssistant Conversations -- Democratizing Large Language Model Alignment}, author={Andreas Köpf and Yannic Kilcher and Dimitri von Rütte and Sotiris Anagnostidis and Zhi-Rui Tam and Keith Stevens and Abdullah Barhoum and Nguyen Minh Duc and Oliver Stanley and Richárd Nagyfi and Shahul ES and Sameer Suri and David Glushkov and Arnav Dantuluri and Andrew Maguire and Christoph Schuhmann and Huu Nguyen and Alexander Mattick}, year={2023}, eprint={2304.07327}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @article{peng2023instruction, title={Instruction Tuning with GPT-4}, author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng}, journal={arXiv preprint arXiv:2304.03277}, year={2023} } ``` ``` @misc{codealpaca, author = {Sahil Chaudhary}, title = {Code Alpaca: An Instruction-following LLaMA model for code generation}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/sahil280114/codealpaca}}, } ``` <!-- original-model-card end -->
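The Input Format section above stresses the exact newline placement around `<|user|>` and `<|assistant|>`. A small illustrative helper (not from the card) that builds the prompt string so the trailing newline after `<|assistant|>` is never dropped:

```python
def build_tulu_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Tulu template described above.

    Note the newline after <|assistant|>, which the card says is important
    for getting this model to respond correctly.
    """
    return f"<|user|>\n{user_message}\n<|assistant|>\n"


# Example usage with any GGUF runner that takes a plain prompt string,
# e.g. the ctransformers `llm` object from the snippet above.
prompt = build_tulu_prompt("Give me three ideas for a weekend project.")
print(repr(prompt))  # '<|user|>\nGive me three ideas for a weekend project.\n<|assistant|>\n'
```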
mesolitica/pos-t5-small-standard-bahasa-cased
mesolitica
2023-10-09T07:18:59Z
417
0
transformers
[ "transformers", "safetensors", "t5", "token-classification", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
token-classification
2023-10-09T07:18:24Z
Entry not found
TheBloke/ZephRP-m7b-GGUF
TheBloke
2023-10-12T03:52:27Z
417
3
transformers
[ "transformers", "gguf", "mistral", "text-generation", "en", "base_model:royallab/ZephRP-m7b", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-10-12T03:42:45Z
--- base_model: royallab/ZephRP-m7b inference: false language: - en library_name: transformers license: cc-by-nc-4.0 model_creator: The Royal Lab model_name: Zephrp m7b model_type: mistral pipeline_tag: text-generation prompt_template: "### Instruction:\nCharacter's Persona: bot character description\n\ \nUser's persona: user character description\n \nScenario: what happens in the\ \ story\n\nPlay the role of Character. You must engage in a roleplaying chat with\ \ User below this line. Do not write dialogues and narration for User. Character\ \ should respond with messages of medium length.\n\n### Input:\nUser: {prompt}\n\ \n### Response:\nCharacter: \n" quantized_by: TheBloke tags: - mistral --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Zephrp m7b - GGUF - Model creator: [The Royal Lab](https://huggingface.co/royallab) - Original model: [Zephrp m7b](https://huggingface.co/royallab/ZephRP-m7b) <!-- description start --> ## Description This repo contains GGUF format model files for [The Royal Lab's Zephrp m7b](https://huggingface.co/royallab/ZephRP-m7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ZephRP-m7b-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ZephRP-m7b-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF) * [The Royal Lab's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/royallab/ZephRP-m7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: LimaRP-Alpaca ``` ### Instruction: Character's Persona: bot character description User's persona: user character description Scenario: what happens in the story Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length. ### Input: User: {prompt} ### Response: Character: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [zephrp-m7b.Q2_K.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [zephrp-m7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [zephrp-m7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [zephrp-m7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [zephrp-m7b.Q4_0.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [zephrp-m7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [zephrp-m7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [zephrp-m7b.Q5_0.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [zephrp-m7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [zephrp-m7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [zephrp-m7b.Q6_K.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [zephrp-m7b.Q8_0.gguf](https://huggingface.co/TheBloke/ZephRP-m7b-GGUF/blob/main/zephrp-m7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/ZephRP-m7b-GGUF and below it, a specific filename to download, such as: zephrp-m7b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/ZephRP-m7b-GGUF zephrp-m7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/ZephRP-m7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ZephRP-m7b-GGUF zephrp-m7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m zephrp-m7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\nCharacter's Persona: bot character description\n\nUser's persona: user character description\n \nScenario: what happens in the story\n\nPlay the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.\n\n### Input:\nUser: {prompt}\n\n### Response:\nCharacter:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
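For llama-cpp-python, a minimal sketch might look like the following; the prompt string follows the LimaRP-Alpaca template above, and the `n_gpu_layers` and sampling values are only illustrative:

```python
from llama_cpp import Llama

# Load the quantised model; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./zephrp-m7b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

# LimaRP-Alpaca prompt as described in the Prompt template section above;
# the personas, scenario and user message are placeholders.
prompt = (
    "### Instruction:\n"
    "Character's Persona: bot character description\n\n"
    "User's persona: user character description\n\n"
    "Scenario: what happens in the story\n\n"
    "Play the role of Character. You must engage in a roleplaying chat with "
    "User below this line. Do not write dialogues and narration for User. "
    "Character should respond with messages of medium length.\n\n"
    "### Input:\n"
    "User: Hello there!\n\n"
    "### Response:\n"
    "Character:"
)

output = llm(prompt, max_tokens=200, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```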
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/ZephRP-m7b-GGUF", model_file="zephrp-m7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: The Royal Lab's Zephrp m7b # ZephRP-m7b This is a [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1)-based model consisting of a merge between [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and PEFT adapter trained using the LimaRP dataset. The goal was to combine the message length instruction training of LimaRPv3 and additional stylistic elements with the superior knowledge and instruction-following capabilities of the Zephyr model. ## Usage: The intended prompt format is the Alpaca instruction format of LimaRP v3: ``` ### Instruction: Character's Persona: {bot character description} User's Persona: {user character description} Scenario: {what happens in the story} Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. ### Input: User: {utterance} ### Response: Character: {utterance} ### Input User: {utterance} ### Response: Character: {utterance} (etc.) ``` ## Message length control Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this: ``` ### Input User: {utterance} ### Response: (length = medium) Character: {utterance} ``` This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages. ## Bias, Risks, and Limitations The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form. ## Training Details The LimaRP PEFT adapter was trained as an 8-bit lora using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). The following hyperparameters were used during training of the adapter on the original [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model using a single L40 GPU: - learning_rate: 0.00015 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 2 <!-- original-model-card end -->
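To make the message length control convention described above concrete, here is a small illustrative helper that assembles a LimaRP-style prompt with a length modifier. The function name and placeholder strings are ours, not part of the original card:

```python
# Illustrative helper for building a LimaRP-style prompt with a length modifier.
VALID_LENGTHS = {"micro", "tiny", "short", "medium", "long",
                 "massive", "huge", "enormous", "humongous", "unlimited"}

def build_prompt(persona: str, user_persona: str, scenario: str,
                 user_message: str, length: str = "medium") -> str:
    if length not in VALID_LENGTHS:
        raise ValueError(f"unsupported length modifier: {length}")
    return (
        "### Instruction:\n"
        f"Character's Persona: {persona}\n\n"
        f"User's Persona: {user_persona}\n\n"
        f"Scenario: {scenario}\n\n"
        "Play the role of Character. You must engage in a roleplaying chat with "
        "User below this line. Do not write dialogues and narration for User.\n\n"
        "### Input:\n"
        f"User: {user_message}\n\n"
        f"### Response: (length = {length})\n"
        "Character:"
    )
```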
TheBloke/MysticFusion-13B-GGUF
TheBloke
2023-11-25T17:12:24Z
417
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "base_model:Walmart-the-bag/MysticFusion-13B", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-11-25T17:05:21Z
--- base_model: Walmart-the-bag/MysticFusion-13B inference: false language: - en license: llama2 model_creator: wbag model_name: MysticFusion 13B model_type: llama pipeline_tag: text-generation prompt_template: "### Instruction: \n{prompt}\n### Response:\n" quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MysticFusion 13B - GGUF - Model creator: [wbag](https://huggingface.co/Walmart-the-bag) - Original model: [MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [wbag's MysticFusion 13B](https://huggingface.co/Walmart-the-bag/MysticFusion-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MysticFusion-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MysticFusion-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF) * [wbag's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Walmart-the-bag/MysticFusion-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-Tiefighter ``` ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [mysticfusion-13b.Q2_K.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [mysticfusion-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [mysticfusion-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [mysticfusion-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [mysticfusion-13b.Q4_0.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [mysticfusion-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [mysticfusion-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [mysticfusion-13b.Q5_0.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [mysticfusion-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [mysticfusion-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [mysticfusion-13b.Q6_K.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [mysticfusion-13b.Q8_0.gguf](https://huggingface.co/TheBloke/MysticFusion-13B-GGUF/blob/main/mysticfusion-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MysticFusion-13B-GGUF and below it, a specific filename to download, such as: mysticfusion-13b.Q4_K_M.gguf. Then click Download. 
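Alternatively, you can script the download from Python with the `huggingface_hub` library. A minimal sketch, using the file name from the table above:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the repo into the current directory.
model_path = hf_hub_download(
    repo_id="TheBloke/MysticFusion-13B-GGUF",
    filename="mysticfusion-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(model_path)
```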
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MysticFusion-13B-GGUF mysticfusion-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MysticFusion-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MysticFusion-13B-GGUF mysticfusion-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m mysticfusion-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: \n{prompt}\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/MysticFusion-13B-GGUF", model_file="mysticfusion-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: wbag's MysticFusion 13B YAML: ``` models: - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.3 - model: NeverSleep/Noromaid-13b-v0.1.1 parameters: weight: 0.5 - model: lmsys/vicuna-13b-v1.5 parameters: weight: 0.2 merge_method: linear dtype: float16 ``` # Prompt Template: ### Alpaca ``` ### Instruction: ### Response: ``` <!-- original-model-card end -->
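For readers unfamiliar with mergekit's `linear` method: conceptually it is just a per-tensor weighted average of the source checkpoints. The toy numpy sketch below only illustrates that idea; it is not how mergekit is actually invoked:

```python
import numpy as np

# Toy illustration of a linear merge: every parameter tensor of the merged model
# is the weighted average of the corresponding tensors from the source models,
# using the weights from the YAML config above (0.3 + 0.5 + 0.2 = 1.0).
weights = {"tiefighter": 0.3, "noromaid": 0.5, "vicuna": 0.2}

def linear_merge(tensors: dict[str, np.ndarray]) -> np.ndarray:
    return sum(w * tensors[name] for name, w in weights.items())

# Example with stand-in 2x2 "tensors":
merged = linear_merge({name: np.ones((2, 2)) for name in weights})
print(merged)  # all entries 1.0, since the weights sum to 1
```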
hkust-nlp/deita-7b-v1.0
hkust-nlp
2023-12-29T14:57:43Z
417
11
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:hkust-nlp/deita-6k-v0", "arxiv:2312.15685", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-20T06:39:33Z
--- license: apache-2.0 datasets: - hkust-nlp/deita-6k-v0 language: - en --- <img src="https://huggingface.co/datasets/hkust-nlp/deita-images/resolve/main/logo-final.png" alt="Deita banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Deita 7B V1.0 [GitHub](https://github.com/hkust-nlp/deita) | [Paper](https://arxiv.org/abs/2312.15685) Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs). Deita 7B V1.0 is a fine-tuned + DPO version of Mistral-7B-v0.1 that was trained on **6K** automatically selected lightweight, high-quality alignment SFT data: [Deita 6K V0](https://huggingface.co/datasets/hkust-nlp/deita-6k-v0) and **10K** randomly sampled alignment preference data from Ultrafeedback. ## Model description - **Model type:** Model trained on automatically selected lightweight, high-quality alignment SFT data and 10K randomly sampled alignment preference data. - **Language(s) (NLP):** Primarily English - **Finetuned from model:** Mistral-7B-v0.1 ### Model Sources - **Repository:** https://github.com/hkust-nlp/deita - **Model Family:** Other models and the dataset are found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4). ## Performance | Model | Align | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) | |------------------------------------------------|-----------|------------|----------|---------------|----------------| | **Proprietary Models** | | | | | | | GPT-4-Turbo | ? | -- | 9.32 | 97.70 | -- | | GPT-4 | SFT + PPO | -- | 8.99 | 95.03 | -- | | Claude-2 | SFT + PPO | -- | 8.06 | 91.36 | -- | | GPT-3.5-turbo | SFT + PPO | -- | 7.94 | 89.37 | -- | | **Open-sourced Models based on LLaMA-1-13B** | | | | | | | LIMA | SFT | 1K SFT | 4.29 | 41.98 | 59.82 | | WizardLM-13B | SFT | 70K SFT | 6.35 | 75.31 | 58.96 | | Vicuna-13B-v1.3 | SFT | 125K SFT | 6.39 | 82.11 | 60.01 | | Random | SFT | 10K SFT | 6.03 | 71.52 | 60.14 | | DEITA-LLaMA1-13B-v1.0-sft | SFT | 10K SFT | 6.60 | 78.01 | 64.27 | | **Open-sourced Models based on LLaMA-2-13B** | | | | | | | Tulu-2-13B | SFT | 326K SFT | 6.70 | 78.90 | -- | | Tulu-2-13B+DPO | SFT + DPO | 326K SFT + 60K DPO | 7.00 | 89.50 | -- | | LLaMA2-13B-Chat | SFT + PPO | -- | 6.65 | 81.09 | -- | | WizardLM-13B-v1.2 | SFT | >70K SFT | 7.09 | 89.17 | -- | | Vicuna-13B-v1.5 | SFT | 125K SFT | 6.57 | 78.80 | 61.63 | | Random | SFT | 10K SFT | 5.78 | 65.19 | 61.32 | | DEITA-LLaMA2-13B-v1.0-sft | SFT | 10K SFT | 6.79 | 81.09 | 62.71 | | **Open-sourced Models based on Mistral-7B** | | | | | | | Mistral-7B-Instruct-v0.1 | -- | -- | 6.84 | 69.65 | 60.45 | | Zephyr-7B-sft | SFT | 200K SFT | 5.32 | 75.12 | 60.93 | | $\text{Zephyr-7B-}\beta$ | SFT + DPO | 200K SFT + 60K DPO | 7.34 | 90.60 | 66.36 | | OpenChat-3.5 | C-RLFT | >> 70K C-RLFT | 7.81 | 88.51 | -- | | Starling-7B | C-RLFT + APA | >>70K C-RLFT + 183K APA | 8.09 | 91.99 | -- | | Random | SFT | 10K SFT | 5.89 | 56.90 | 61.72 | | DEITA-7B-v1.0-sft (6K) | SFT | 6K SFT | 7.22 | 80.78 | 64.94 | | DEITA-7B-v1.0-sft (10K) | SFT | 10K SFT | 7.32 | 81.67 | 64.00 | | DEITA-7B-v1.0 | SFT + DPO | 6K SFT + 10K DPO | 7.55 | 90.06 | 69.86 | ## Input Format The model is trained using the [vicuna_v1.1 template](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) ### SFT Format ``` A chat between a curious user and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? ASSISTANT: ``` ### DPO Format ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <prompt> ASSISTANT: <answer></s> ``` where \<answer\> can be a chosen answer or a rejected answer. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 128 - total_train_batch_size: 512 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6.0 ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1 ## Citation If you find the content of this project helpful, please cite our paper as follows: ``` @misc{liu2023what, title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning}, author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He}, year={2023}, eprint={2312.15685}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
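For reference, a minimal generation sketch that applies the SFT prompt format from the Input Format section above; standard transformers usage, with an illustrative question and sampling settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-7b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Vicuna v1.1-style prompt, as described in the Input Format section above.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: What makes good data for alignment? ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```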
mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf
mmnga
2024-05-07T12:57:34Z
417
3
null
[ "gguf", "qwen", "ja", "dataset:OpenAssistant/oasst1", "dataset:zetavg/ShareGPT-Processed", "dataset:augmxnt/ultra-orca-boros-en-ja-v1", "license:other", "region:us" ]
null
2023-12-29T12:27:30Z
---
license: other
license_name: tongyi-qianwen
license_link: LICENSE
tags:
- qwen
language:
- ja
datasets:
- OpenAssistant/oasst1
- zetavg/ShareGPT-Processed
- augmxnt/ultra-orca-boros-en-ja-v1
---

# lightblue-qarasu-14B-chat-plus-unleashed-gguf
This is a GGUF-format conversion of [qarasu-14B-chat-plus-unleashed, published by lightblue](https://huggingface.co/lightblue/qarasu-14B-chat-plus-unleashed).

## Other models
[mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf](https://huggingface.co/mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf)
[mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf)
[mmnga/lightblue-suzume-llama-3-8B-japanese-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-japanese-gguf)
[mmnga/lightblue-ao-karasu-72B-gguf](https://huggingface.co/mmnga/lightblue-ao-karasu-72B-gguf)
[mmnga/lightblue-karasu-1.1B-gguf](https://huggingface.co/mmnga/lightblue-karasu-1.1B-gguf)
[mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf)
[mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf)

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'lightblue-qarasu-14B-chat-plus-unleashed-q4_0.gguf' -n 128 -p 'イギリスの首相は誰ですか?'
```

## Licence
The license is Qwen's [Tongyi Qianwen license](https://huggingface.co/mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf/blob/main/LICENSE), so please review its terms before use.

*The license is carried over unchanged from the original model, but it may also be affected by the datasets used for training. Please check for yourself.*
TheBloke/yayi2-30B-llama-GGUF
TheBloke
2024-01-01T16:34:01Z
417
10
transformers
[ "transformers", "gguf", "yayi2", "zh", "en", "arxiv:2307.09288", "base_model:cognitivecomputations/yayi2-30b-llama", "license:other", "region:us" ]
null
2024-01-01T14:37:11Z
--- base_model: cognitivecomputations/yayi2-30b-llama inference: false language: - zh - en license: other model_creator: Cognitive Computations model_name: Yayi2 30B Llama model_type: yayi2 prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yayi2 30B Llama - GGUF - Model creator: [Cognitive Computations](https://huggingface.co/cognitivecomputations) - Original model: [Yayi2 30B Llama](https://huggingface.co/cognitivecomputations/yayi2-30b-llama) <!-- description start --> ## Description This repo contains GGUF format model files for [Cognitive Computations's Yayi2 30B Llama](https://huggingface.co/cognitivecomputations/yayi2-30b-llama). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/yayi2-30B-llama-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF) * [Cognitive Computations's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cognitivecomputations/yayi2-30b-llama) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [yayi2-30b-llama.Q2_K.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q2_K.gguf) | Q2_K | 2 | 12.90 GB| 15.40 GB | smallest, significant quality loss - not recommended for most purposes | | [yayi2-30b-llama.Q3_K_S.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q3_K_S.gguf) | Q3_K_S | 3 | 13.30 GB| 15.80 GB | very small, high quality loss | | [yayi2-30b-llama.Q3_K_M.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q3_K_M.gguf) | Q3_K_M | 3 | 14.77 GB| 17.27 GB | very small, high quality loss | | [yayi2-30b-llama.Q3_K_L.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q3_K_L.gguf) | Q3_K_L | 3 | 16.10 GB| 18.60 GB | small, substantial quality loss | | [yayi2-30b-llama.Q4_0.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q4_0.gguf) | Q4_0 | 4 | 17.26 GB| 19.76 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [yayi2-30b-llama.Q4_K_S.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q4_K_S.gguf) | Q4_K_S | 4 | 17.32 GB| 19.82 GB | small, greater quality loss | | [yayi2-30b-llama.Q4_K_M.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q4_K_M.gguf) | Q4_K_M | 4 | 18.23 GB| 20.73 GB | medium, balanced quality - recommended | | [yayi2-30b-llama.Q5_0.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q5_0.gguf) | Q5_0 | 5 | 20.99 GB| 23.49 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [yayi2-30b-llama.Q5_K_S.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q5_K_S.gguf) | Q5_K_S | 5 | 20.99 GB| 23.49 GB | large, low quality loss - recommended | | [yayi2-30b-llama.Q5_K_M.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q5_K_M.gguf) | Q5_K_M | 5 | 21.49 GB| 23.99 GB | large, very low quality loss - recommended | | [yayi2-30b-llama.Q6_K.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q6_K.gguf) | Q6_K | 6 | 24.95 GB| 27.45 GB | very large, extremely low quality loss | | [yayi2-30b-llama.Q8_0.gguf](https://huggingface.co/TheBloke/yayi2-30B-llama-GGUF/blob/main/yayi2-30b-llama.Q8_0.gguf) | Q8_0 | 8 | 32.31 GB| 34.81 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/yayi2-30B-llama-GGUF and below it, a specific filename to download, such as: yayi2-30b-llama.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/yayi2-30B-llama-GGUF yayi2-30b-llama.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/yayi2-30B-llama-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/yayi2-30B-llama-GGUF yayi2-30b-llama.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m yayi2-30b-llama.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). 
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# In Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./yayi2-30b-llama.Q4_K_M.gguf",  # Download the model file first
  n_ctx=4096,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "{prompt}",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./yayi2-30b-llama.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Cognitive Computations's Yayi2 30B Llama This is [wenge-research/yayi2-30b](https://huggingface.co/wenge-research/yayi2-30b) converted to llama compatible format. Subject to the [Yayi 2 license](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE). Brought to you by @Weyaxi and @ehartford, with thanks to @chargoddard for the pioneering work and the consultation! And of course thanks to the yayi2 team for sharing an amazing model. Original card below: <div align="center"> <h1> YAYI 2 </h1> <!-- <br> --> </div> <div align="center"> <a href="https://github.com/wenge-research/YAYI2" target="_blank">GitHub</a> | <a href="https://yayi.wenge.com" target="_blank">雅意大模型</a> </div> ## 介绍/Introduction YAYI 2 是中科闻歌研发的开源大语言模型,包括 Base 和 Chat 版本,参数规模为 30B。YAYI2-30B 是基于 Transformer 的大语言模型,采用了 2.65 万亿 Tokens 的高质量、多语言语料进行预训练。针对通用和特定领域的应用场景,我们采用了百万级指令进行微调,同时借助人类反馈强化学习方法,以更好地使模型与人类价值观对齐。 本次开源的模型为 YAYI2-30B Base 模型。如果您想了解更多关于 YAYI 2 模型的细节,我们建议您参阅 [GitHub](https://github.com/wenge-research/YAYI2) 仓库。更多技术细节,敬请期待我们的技术报告🔥。 YAYI 2 is a collection of open-source large language models launched by Wenge Technology. YAYI2-30B is a Transformer-based large language model, and has been pretrained for 2.65 trillion tokens of multilingual data with high quality. The base model is aligned with human values through supervised fine-tuning with millions of instructions and reinforcement learning from human feedback (RLHF). We opensource the pre-trained language model in this release, namely **YAYI2-30B**. For more details about the YAYI 2, please refer to our [GitHub](https://github.com/wenge-research/YAYI2) repository. Stay tuned for more technical details in our upcoming technical report! 
🔥 ## 模型细节/Model Details | Hyperparameter| Value | |:----------|:----------:| | n_layers | 64 | | n_heads | 64 | | hidden_size | 7168 | | vocab_size | 81920 | | sequence length | 4096 | ## 要求/Requirements * python 3.8及以上版本 * pytorch 2.0.1 及以上版本 * 建议使用 CUDA 11.7 及以上版本 * 运行 BF16 或 FP16 模型需要至少80GB显存(例如1xA100) * python 3.8 and above * pytorch 2.0.1 and above * CUDA 11.7 and above are recommended * To run YAYI2-30B in bf16/fp16, at least 80B GPU memory is required (e.g., 1xA100-80G) ## 快速开始/Quick Start ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True) >>> model = AutoModelForCausalLM.from_pretrained("wenge-research/yayi2-30b", device_map="auto", trust_remote_code=True) >>> inputs = tokenizer('The winter in Beijing is', return_tensors='pt') >>> inputs = inputs.to('cuda') >>> pred = model.generate( **inputs, max_new_tokens=256, eos_token_id=tokenizer.eos_token_id, do_sample=True, repetition_penalty=1.2, temperature=0.4, top_k=100, top_p=0.8 ) >>> print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True)) ``` ## 评测结果/Evaluation 我们在多个基准数据集上进行了评测,包括 C-Eval、MMLU、 CMMLU、AGIEval、GAOKAO-Bench、GSM8K、MATH、BBH、HumanEval 以及 MBPP。我们考察了模型在语言理解、学科知识、数学推理、逻辑推理以及代码生成方面的表现。YAYI 2 模型在与其规模相近的开源模型中展现出了显著的性能提升。 We evaluate our model on standard benchmarks, including C-Eval, MMLU, CMMLU, AGIEval, GAOKAO-Bench, GSM8K, MATH, BBH, HumanEval, and MBPP. Our goal is to assess the model's performance in language comprehension, knowledge comprehension, mathematical reasoning, logical reasoning, and code generation. YAYI 2 has demonstrated exceptional performance across models with similar size. <table id="myTable"> <!-- Table header --> <tr> <th></th> <th colspan="5" style="text-align: center;">Knowledge</th> <th colspan="2" style="text-align: center;">Math</th> <th colspan="1" style="text-align: center;">Logic reasonning</th> <th colspan="2" style="text-align: center;">Code</th> </tr> <tr> <th style="text-align: left;">Model</th> <th>C-Eval(val)</th> <th>MMLU</th> <th>AGIEval</th> <th>CMMLU</th> <th>GAOKAO-Bench</th> <th>GSM8K</th> <th>MATH</th> <th>BBH</th> <th>HumanEval</th> <th>MBPP</th> </tr> <tr> <td></td> <td style="text-align: center;">5-shot</td> <td style="text-align: center;">5-shot</td> <td style="text-align: center;">3/0-shot</td> <td style="text-align: center;">5-shot</td> <td style="text-align: center;">0-shot</td> <td style="text-align: center;">8/4-shot</td> <td style="text-align: center;">4-shot</td> <td style="text-align: center;">3-shot</td> <td style="text-align: center;">0-shot</td> <td style="text-align: center;">3-shot</td> </tr> <tr> <td><strong>MPT-30B</strong></td> <td style="text-align: center;">-</td> <td style="text-align: center;">46.9</td> <td style="text-align: center;">33.8</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">15.2</td> <td style="text-align: center;">3.1</td> <td style="text-align: center;">38.0</td> <td style="text-align: center;">25.0</td> <td style="text-align: center;">32.8</td> </tr> <tr> <td><strong>Falcon-40B</strong></td> <td style="text-align: center;">-</td> <td style="text-align: center;">55.4</td> <td style="text-align: center;">37.0</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">19.6</td> <td style="text-align: center;">5.5</td> <td style="text-align: center;">37.1</td> <td style="text-align: center;">0.6</td> 
<td style="text-align: center;">29.8</td> </tr> <tr> <td><strong>LLaMA2-34B</strong></td> <td style="text-align: center;">-</td> <td style="text-align: center;">62.6</td> <td style="text-align: center;">43.4</td> <td style="text-align: center;">-</td> <td style="text-align: center;">-</td> <td style="text-align: center;">42.2</td> <td style="text-align: center;">6.2</td> <td style="text-align: center;">44.1</td> <td style="text-align: center;">22.6</td> <td style="text-align: center;">33.0</td> </tr> <tr> <td><strong>Baichuan2-13B</strong></td> <td style="text-align: center;">59.0</td> <td style="text-align: center;">59.5</td> <td style="text-align: center;">37.4</td> <td style="text-align: center;">61.3</td> <td style="text-align: center;">45.6</td> <td style="text-align: center;">52.6</td> <td style="text-align: center;">10.1</td> <td style="text-align: center;">49.0</td> <td style="text-align: center;">17.1</td> <td style="text-align: center;">30.8</td> </tr> <tr> <td><strong>Qwen-14B</strong></td> <td style="text-align: center;">71.7</td> <td style="text-align: center;">67.9</td> <td style="text-align: center;">51.9</td> <td style="text-align: center;">70.2</td> <td style="text-align: center;">62.5</td> <td style="text-align: center;">61.6</td> <td style="text-align: center;">25.2</td> <td style="text-align: center;">53.7</td> <td style="text-align: center;">32.3</td> <td style="text-align: center;">39.8</td> </tr> <tr> <td><strong>InternLM-20B</strong></td> <td style="text-align: center;">58.8</td> <td style="text-align: center;">62.1</td> <td style="text-align: center;">44.6</td> <td style="text-align: center;">59.0</td> <td style="text-align: center;">45.5</td> <td style="text-align: center;">52.6</td> <td style="text-align: center;">7.9</td> <td style="text-align: center;">52.5</td> <td style="text-align: center;">25.6</td> <td style="text-align: center;">35.6</td> </tr> <tr> <td><strong>Aquila2-34B</strong></td> <td style="text-align: center;">98.5</td> <td style="text-align: center;">76.0</td> <td style="text-align: center;">43.8</td> <td style="text-align: center;">78.5</td> <td style="text-align: center;">37.8</td> <td style="text-align: center;">50.0</td> <td style="text-align: center;">17.8</td> <td style="text-align: center;">42.5</td> <td style="text-align: center;">0.0</td> <td style="text-align: center;">41.0</td> </tr> <tr> <td><strong>Yi-34B</strong></td> <td style="text-align: center;">81.8</td> <td style="text-align: center;">76.3</td> <td style="text-align: center;">56.5</td> <td style="text-align: center;">82.6</td> <td style="text-align: center;">68.3</td> <td style="text-align: center;">67.6</td> <td style="text-align: center;">15.9</td> <td style="text-align: center;">66.4</td> <td style="text-align: center;">26.2</td> <td style="text-align: center;">38.2</td> </tr> <tr> <td><strong>YAYI2-30B</strong></td> <td style="text-align: center;">80.9</td> <td style="text-align: center;"><b>80.5</b></td> <td style="text-align: center;"><b>62.0</b></td> <td style="text-align: center;"><b>84.0</b></td> <td style="text-align: center;">64.4</td> <td style="text-align: center;"><b>71.2</b></td> <td style="text-align: center;">14.8</td> <td style="text-align: center;">54.5</td> <td style="text-align: center;"><b>53.1</b></td> <td style="text-align: center;"><b>45.8</b></td> </tr> </table> 我们使用 [OpenCompass Github 仓库](https://github.com/open-compass/opencompass) 提供的源代码进行了评测。对于对比模型,我们列出了他们在 [OpenCompass](https://opencompass.org.cn) 榜单上的评测结果,截止日期为 2023年12月15日。对于其他尚未在 
[OpenCompass](https://opencompass.org.cn/leaderboard-llm) 平台参与评测的模型,包括 MPT、Falcon 和 LLaMa 2,我们采用了 [LLaMA 2](https://arxiv.org/abs/2307.09288) 报告的结果。

We evaluate our model using the source code from the [OpenCompass Github repository](https://github.com/open-compass/opencompass). If available, we report results for comparative models assessed by OpenCompass with the evaluation reference date set to Dec. 15th, 2023. For MPT, Falcon, and LLaMA 2, which have not been evaluated by OpenCompass, we use the results reported in the [LLaMA 2](https://arxiv.org/abs/2307.09288) paper.

## 协议/License

本项目中的代码依照 [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) 协议开源,社区使用 YAYI 2 模型和数据需要遵循[雅意YAYI 2 模型社区许可协议](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE)。若您需要将雅意 YAYI 2系列模型或其衍生品用作商业用途,请根据[《雅意 YAYI 2 模型商用许可协议》](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE)将商用许可申请登记信息发送至指定邮箱 [email protected]。审核通过后,雅意将授予您商用版权许可,请遵循协议中的商业许可限制。

The code in this project is open-sourced under the [Apache-2.0](https://github.com/wenge-research/YAYI2/blob/main/LICENSE) license. The use of YAYI 2 series model weights and data must adhere to the [YAYI 2 Community License](https://github.com/wenge-research/YAYI2/blob/main/COMMUNITY_LICENSE). If you intend to use the YAYI 2 series models or their derivatives for commercial purposes, please submit your commercial license application and registration information to [email protected], following the [YAYI 2 Commercial License](https://github.com/wenge-research/YAYI2/blob/main/COMMERCIAL_LICENSE). Upon approval, YAYI will grant you a commercial copyright license, subject to the commercial license restrictions outlined in the agreement.

## 引用/Citation

如果您在工作中使用了我们的模型,请引用我们的论文。

If you use this resource in your work, please cite our paper.

```
@article{YAYI2,
  author = {Yin Luo, Qingchao Kong, Nan Xu, et al.},
  title = {YAYI 2: Multilingual Open Source Large Language Models},
  journal = {arXiv preprint arXiv},
  year = {2023}
}
```

<!-- original-model-card end -->
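One note on the YAYI2-30B requirements above: they call for at least 80GB of GPU memory when running in bf16/fp16, but the quick-start snippet leaves the dtype implicit. Below is a minimal sketch (not from the original card) of loading the checkpoint explicitly in bf16 via the standard `torch_dtype` argument of transformers.

```python
# Minimal sketch, not from the original card: load YAYI2-30B explicitly in bf16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wenge-research/yayi2-30b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "wenge-research/yayi2-30b",
    torch_dtype=torch.bfloat16,  # or torch.float16; either needs roughly 80GB of GPU memory
    device_map="auto",
    trust_remote_code=True,
)
```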
second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF
second-state
2024-03-20T07:15:21Z
417
2
transformers
[ "transformers", "gguf", "mixtral", "text-generation", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "en", "base_model:NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-17T09:23:45Z
--- base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO inference: false license: apache-2.0 model-index: - name: Nous-Hermes-2-Mixtral-8x7B-DPO results: [] model_creator: NousResearch model_name: Nous Hermes 2 Mixtral 8X7B DPO model_type: mixtral quantized_by: Second State Inc. language: - en tags: - Mixtral - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF ## Original Model [NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO) ## Run with LlamaEdge - LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above - Prompt template - Prompt type: `chatml` - Prompt string ```text <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - Context size: `4096` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf llama-api-server.wasm -p chatml ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf llama-chat.wasm -p chatml ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q2_K.gguf) | Q2_K | 2 | 17.3 GB| smallest, significant quality loss - not recommended for most purposes | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_L.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_L.gguf) | Q3_K_L | 3 | 24.2 GB| small, substantial quality loss | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_M.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_M.gguf) | Q3_K_M | 3 | 22.5 GB| very small, high quality loss | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_S.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q3_K_S.gguf) | Q3_K_S | 3 | 20.4 GB| very small, high quality loss | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_0.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_0.gguf) | Q4_0 | 4 | 26.4 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_M.gguf) | Q4_K_M | 4 | 28.4 GB| medium, balanced quality - recommended | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_S.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q4_K_S.gguf) | Q4_K_S | 4 | 26.7 GB| small, greater quality loss | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_0.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_0.gguf) | Q5_0 | 5 | 32.2 GB| legacy; medium, balanced quality - 
prefer using Q4_K_M | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_M.gguf) | Q5_K_M | 5 | 33.2 GB| large, very low quality loss - recommended | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_S.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q5_K_S.gguf) | Q5_K_S | 5 | 32.2 GB| large, low quality loss - recommended | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q6_K.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q6_K.gguf) | Q6_K | 6 | 38.4 GB| very large, extremely low quality loss | | [Nous-Hermes-2-Mixtral-8x7B-DPO-Q8_0.gguf](https://huggingface.co/second-state/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF/blob/main/Nous-Hermes-2-Mixtral-8x7B-DPO-Q8_0.gguf) | Q8_0 | 8 | 49.6 GB| very large, extremely low quality loss - not recommended |
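The chatml prompt string documented above can be assembled programmatically before it is sent to whichever runtime loads these GGUF files. A minimal sketch follows (not part of the original card; the example system message and user prompt are invented).

```python
# Sketch only: fill in the chatml template shown in the "Prompt string" section above.
def chatml_prompt(system_message: str, user_prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "Summarize what a GGUF file is."))
```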
EpicJhon/5GspWRKvaVzZdwjAPFZpzwsZNEhJRcTzh81iFzviWz6iqKNx_vgg
EpicJhon
2024-03-27T22:47:03Z
417
0
keras
[ "keras", "region:us" ]
null
2024-02-25T12:16:58Z
Entry not found
artificialguybr/selfiephotographyredmond-selfie-photography-lora-for-sdxl
artificialguybr
2024-03-15T16:33:01Z
417
6
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "person", "photorealistic", "sexy", "female", "selfie", "photo", "style", "woman", "actress", "girls", "male", "man", "photography", "realistic", "selfies", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2024-03-15T15:58:53Z
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=False&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - person - photorealistic - sexy - female - selfie - photo - style - woman - actress - girls - male - man - photography - realistic - selfies base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: instagram model widget: - text: 'photo of a A rainbow haired, 19 years old, asian girl, beautiful, stunning, ,instagram model, discord profile picture face focused' output: url: >- 7451495.jpeg - text: 'a old granny with pink hair, 95 years old, instagram model, discord profile picture' output: url: >- 7451277.jpeg - text: 'Indian Tech CEO, 35 years old' output: url: >- 7451276.jpeg - text: 'photo of a basketball player, ,instagram model, discord profile picture face focused' output: url: >- 7451282.jpeg - text: 'a old granny with pink hair, 95 years old ' output: url: >- 7451275.jpeg - text: 'photo of a basketball player, ,instagram model, discord profile picture face focused' output: url: >- 7451284.jpeg - text: 'photo of a basketball player, ,instagram model, discord profile picture face focused' output: url: >- 7451287.jpeg - text: 'photo of a queen, face focused' output: url: >- 7451281.jpeg - text: 'a blonde woman wearing rainbow sunglasses, 19 years, instagram model, discord profile picture' output: url: >- 7451279.jpeg --- # SelfiePhotographyRedmond - Selfie Photography Lora for SDXL <Gallery /> ## Model description <h1 id="heading-28">SelfiePhotography.Redmond is here!</h1><p>I'm grateful for the GPU time from <strong>Redmond.AI</strong> that allowed me to finish this LORA!</p><p>Want to test and have acess to all my AI Stuff? Check my <a target="_blank" rel="ugc" href="https://artificialguy.com/">website</a>!</p><p>This is a <strong>Doodle </strong>LORA fine-tuned on <strong>SD XL 1.0.</strong></p><p>Test all my Loras <a target="_blank" rel="ugc" href="https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora">here</a> for free and unlimited. Thanks, HF, for Inference API!</p><p>The LORA has a high capacity to generate Selfie and Photography in a wide variety of themes.<strong> It's a versatile LORA.</strong></p><p><strong><u>The tag for the model: A photo of a {your prompt} , instagram model, discord profile picture</u></strong></p><p>I really hope you like the LORA and use it.</p><p>If you like the model and think it's worth it, you can make a donation to my <a target="_blank" rel="ugc" href="https://www.patreon.com/user?u=81570187">Patreon</a> or <a target="_blank" rel="ugc" href="https://ko-fi.com/jvkape">Ko-fi</a>.</p><p>Follow me in my twitter to know before all about new models:</p><p><a target="_blank" rel="ugc" href="https://twitter.com/artificialguybr/"><u>https://twitter.com/artificialguybr/</u></a></p> ## Trigger words You should use `instagram model`, `discord profile picture` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/artificialguybr/selfiephotographyredmond-selfie-photography-lora-for-sdxl/tree/main) them in the Files & versions tab. 
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL base pipeline and attach the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('artificialguybr/selfiephotographyredmond-selfie-photography-lora-for-sdxl', weight_name='SelfiePhotographyRedmond.safetensors')

# Include the trigger words in the prompt
image = pipeline('instagram model, discord profile picture').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
stablediffusionapi/animerge
stablediffusionapi
2024-03-18T08:52:15Z
417
0
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-03-18T08:50:48Z
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# AniMerge API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/18275557381710748065.png)

## Get API Key

Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.

Replace Key in below code, change **model_id** to "animerge"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)

Try model for free: [Generate Images](https://modelslab.com/models/animerge)

Model link: [View model](https://modelslab.com/models/animerge)

View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "animerge",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF
mradermacher
2024-05-14T19:37:33Z
417
3
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:FluffyKaeloky/Midnight-Miqu-103B-v1.5", "endpoints_compatible", "region:us" ]
null
2024-03-21T03:16:27Z
--- base_model: FluffyKaeloky/Midnight-Miqu-103B-v1.5 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About weighted/imatrix quants of https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ1_S.gguf) | i1-IQ1_S | 22.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ1_M.gguf) | i1-IQ1_M | 23.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.7 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 30.8 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ2_S.gguf) | i1-IQ2_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ2_M.gguf) | i1-IQ2_M | 35.1 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q2_K.gguf) | i1-Q2_K | 38.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 40.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 42.5 | | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 44.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ3_S.gguf) | i1-IQ3_S | 45.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ3_M.gguf) | i1-IQ3_M | 46.5 | | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q3_K_M.gguf.part2of2) | i1-Q3_K_M | 50.0 | IQ3_S probably better | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q3_K_L.gguf.part2of2) | i1-Q3_K_L | 54.5 | IQ3_M probably better | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ4_XS.gguf.part1of2) [PART 
2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-IQ4_XS.gguf.part2of2) | i1-IQ4_XS | 55.5 | | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q4_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q4_0.gguf.part2of2) | i1-Q4_0 | 58.4 | fast, low quality | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 59.0 | optimal size/speed/quality | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.3 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 71.4 | | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 73.3 | | | [PART 1](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.5-i1-GGUF/resolve/main/Midnight-Miqu-103B-v1.5.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 85.1 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
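The Usage section above defers to TheBloke's READMEs for concatenating multi-part files; for the .partXofY quants listed in the table (for example i1-Q4_K_M), plain byte-wise concatenation is the usual approach. A hedged sketch, with filenames taken from the table above:

```bash
# Sketch, not part of the original card: reassemble a two-part quant before loading it.
cat Midnight-Miqu-103B-v1.5.i1-Q4_K_M.gguf.part1of2 \
    Midnight-Miqu-103B-v1.5.i1-Q4_K_M.gguf.part2of2 \
    > Midnight-Miqu-103B-v1.5.i1-Q4_K_M.gguf
```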
mradermacher/Erosumika-MistralLayla-Slerp-GGUF
mradermacher
2024-05-06T05:48:21Z
417
1
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "localfultonextractor/Erosumika-7B-v2", "l3utterfly/mistral-7b-v0.2-layla-v4", "en", "base_model:Smuggling1710/Erosumika-MistralLayla-Slerp", "endpoints_compatible", "region:us" ]
null
2024-03-28T12:09:08Z
--- base_model: Smuggling1710/Erosumika-MistralLayla-Slerp language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - localfultonextractor/Erosumika-7B-v2 - l3utterfly/mistral-7b-v0.2-layla-v4 --- ## About static quants of https://huggingface.co/Smuggling1710/Erosumika-MistralLayla-Slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Erosumika-MistralLayla-Slerp-GGUF/resolve/main/Erosumika-MistralLayla-Slerp.Q8_0.gguf) | Q8_0 | 7.9 | fast, best 
quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
hotchpotch/japanese-reranker-cross-encoder-base-v1
hotchpotch
2024-04-01T02:39:31Z
417
1
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "ja", "dataset:hotchpotch/JQaRA", "dataset:shunk031/JGLUE", "dataset:miracl/miracl", "dataset:castorini/mr-tydi", "dataset:unicamp-dl/mmarco", "license:mit", "region:us" ]
null
2024-03-29T07:07:37Z
---
license: mit
datasets:
- hotchpotch/JQaRA
- shunk031/JGLUE
- miracl/miracl
- castorini/mr-tydi
- unicamp-dl/mmarco
language:
- ja
library_name: sentence-transformers
---

## hotchpotch/japanese-reranker-cross-encoder-base-v1

日本語で学習させた Reranker (CrossEncoder) シリーズです。

| モデル名 | layers | hidden_size |
| ------------ | ------ | ----------- |
| [hotchpotch/japanese-reranker-cross-encoder-xsmall-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-xsmall-v1) | 6 | 384 |
| [hotchpotch/japanese-reranker-cross-encoder-small-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-small-v1) | 12 | 384 |
| [hotchpotch/japanese-reranker-cross-encoder-base-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-base-v1) | 12 | 768 |
| [hotchpotch/japanese-reranker-cross-encoder-large-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-large-v1) | 24 | 1024 |
| [hotchpotch/japanese-bge-reranker-v2-m3-v1](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) | 24 | 1024 |

Reranker についてや、技術レポート・評価等は以下を参考ください。

- [日本語最高性能のRerankerをリリース / そもそも Reranker とは?](https://secon.dev/entry/2024/04/02/070000-japanese-reranker-release/)
- [日本語 Reranker 作成のテクニカルレポート](https://secon.dev/entry/2024/04/02/080000-japanese-reranker-tech-report/)

## 使い方

### SentenceTransformers

```python
from sentence_transformers import CrossEncoder
import torch

MODEL_NAME = "hotchpotch/japanese-reranker-cross-encoder-base-v1"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CrossEncoder(MODEL_NAME, max_length=512, device=device)
if device == "cuda":
    model.model.half()
query = "感動的な映画について"
passages = [
    "深いテーマを持ちながらも、観る人の心を揺さぶる名作。登場人物の心情描写が秀逸で、ラストは涙なしでは見られない。",
    "重要なメッセージ性は評価できるが、暗い話が続くので気分が落ち込んでしまった。もう少し明るい要素があればよかった。",
    "どうにもリアリティに欠ける展開が気になった。もっと深みのある人間ドラマが見たかった。",
    "アクションシーンが楽しすぎる。見ていて飽きない。ストーリーはシンプルだが、それが逆に良い。",
]
scores = model.predict([(query, passage) for passage in passages])
```

## HuggingFace transformers

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn import Sigmoid

MODEL_NAME = "hotchpotch/japanese-reranker-cross-encoder-base-v1"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.to(device)
model.eval()

if device == "cuda":
    model.half()
query = "感動的な映画について"
passages = [
    "深いテーマを持ちながらも、観る人の心を揺さぶる名作。登場人物の心情描写が秀逸で、ラストは涙なしでは見られない。",
    "重要なメッセージ性は評価できるが、暗い話が続くので気分が落ち込んでしまった。もう少し明るい要素があればよかった。",
    "どうにもリアリティに欠ける展開が気になった。もっと深みのある人間ドラマが見たかった。",
    "アクションシーンが楽しすぎる。見ていて飽きない。ストーリーはシンプルだが、それが逆に良い。",
]
inputs = tokenizer(
    [(query, passage) for passage in passages],
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors="pt",
)
inputs = {k: v.to(device) for k, v in inputs.items()}
logits = model(**inputs).logits
activation = Sigmoid()
scores = activation(logits).squeeze().tolist()
```

(A short sketch at the end of this card shows one way to rank the passages by these scores.)

## 評価結果

| Model Name | [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA) | [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) | [MIRACL](https://huggingface.co/datasets/miracl/miracl) | [JSQuAD](https://github.com/yahoojapan/JGLUE) |
| ------------------------------------------------------------ | --------------------------------------------------------- | 
----------------------------------------------------------- | ------------------------------------------------------- | --------------------------------------------- | | [japanese-reranker-cross-encoder-xsmall-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-xsmall-v1) | 0.6136 | 0.9376 | 0.7411 | 0.9602 | | [japanese-reranker-cross-encoder-small-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-small-v1) | 0.6247 | 0.939 | 0.7776 | 0.9604 | | [japanese-reranker-cross-encoder-base-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-base-v1) | 0.6711 | 0.9337 | 0.818 | 0.9708 | | [japanese-reranker-cross-encoder-large-v1](https://huggingface.co/hotchpotch/japanese-reranker-cross-encoder-large-v1) | 0.7099 | 0.9364 | 0.8406 | 0.9773 | | [japanese-bge-reranker-v2-m3-v1](https://huggingface.co/hotchpotch/japanese-bge-reranker-v2-m3-v1) | 0.6918 | 0.9372 | 0.8423 | 0.9624 | | [bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | 0.673 | 0.9343 | 0.8374 | 0.9599 | | [bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 0.4718 | 0.7332 | 0.7666 | 0.7081 | | [bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 0.2445 | 0.4905 | 0.6792 | 0.5757 | | [cross-encoder-mmarco-mMiniLMv2-L12-H384-v1](https://huggingface.co/corrius/cross-encoder-mmarco-mMiniLMv2-L12-H384-v1) | 0.5588 | 0.9211 | 0.7158 | 0.932 | | [shioriha-large-reranker](https://huggingface.co/cl-nagoya/shioriha-large-reranker) | 0.5775 | 0.8458 | 0.8084 | 0.9262 | | [bge-m3+all](https://huggingface.co/BAAI/bge-m3) | 0.576 | 0.904 | 0.7926 | 0.9226 | | [bge-m3+dense](https://huggingface.co/BAAI/bge-m3) | 0.539 | 0.8642 | 0.7753 | 0.8815 | | [bge-m3+colbert](https://huggingface.co/BAAI/bge-m3) | 0.5656 | 0.9064 | 0.7902 | 0.9297 | | [bge-m3+sparse](https://huggingface.co/BAAI/bge-m3) | 0.5088 | 0.8944 | 0.6941 | 0.9184 | | [JaColBERTv2](https://huggingface.co/bclavie/JaColBERTv2) | 0.5847 | 0.9185 | 0.6861 | 0.9247 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.554 | 0.8759 | 0.7722 | 0.8892 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 0.4917 | 0.869 | 0.7025 | 0.8565 | | bm25 | 0.458 | 0.8408 | 0.4387 | 0.9002 | ## ライセンス MIT License
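As referenced above, here is a short follow-up sketch (not part of the original card) that ranks the passages by the `scores` variable produced by either snippet, highest score first.

```python
# Continues from the `scores` and `passages` variables defined in the snippets above.
ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
for score, passage in ranked:
    print(f"{score:.4f}\t{passage}")
```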
mradermacher/solar10B-eugeneparkthebestv2-GGUF
mradermacher
2024-05-06T05:13:26Z
417
0
transformers
[ "transformers", "gguf", "ko", "base_model:moondriller/solar10B-eugeneparkthebestv2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-06T04:41:12Z
--- base_model: moondriller/solar10B-eugeneparkthebestv2 language: - ko library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/moondriller/solar10B-eugeneparkthebestv2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q2_K.gguf) | Q2_K | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.IQ3_XS.gguf) | IQ3_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.IQ3_S.gguf) | IQ3_S | 4.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.IQ3_M.gguf) | IQ3_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q3_K_M.gguf) | Q3_K_M | 5.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.IQ4_XS.gguf) | IQ4_XS | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q4_K_S.gguf) | Q4_K_S | 6.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q5_K_S.gguf) | Q5_K_S | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q5_K_M.gguf) | Q5_K_M | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q6_K.gguf) | Q6_K | 9.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/solar10B-eugeneparkthebestv2-GGUF/resolve/main/solar10B-eugeneparkthebestv2.Q8_0.gguf) | Q8_0 | 11.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model 
Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/BiscuitRP-8x7B-GGUF
mradermacher
2024-05-06T05:09:02Z
417
0
transformers
[ "transformers", "gguf", "rp", "roleplay", "en", "base_model:Fredithefish/BiscuitRP-8x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-08T04:04:27Z
--- base_model: Fredithefish/BiscuitRP-8x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - rp - roleplay --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Fredithefish/BiscuitRP-8x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.4 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [PART 1](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/BiscuitRP-8x7B-GGUF/resolve/main/BiscuitRP-8x7B.Q8_0.gguf.part2of2) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ibivibiv/colossus_120b
ibivibiv
2024-04-14T01:03:24Z
417
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-12T17:30:45Z
--- library_name: transformers license: apache-2.0 language: - en --- # Model Card for Colossus 120b Colussus 120b is a finetuning of alpindale/goliath-120b. ![img](./colossus.png) ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/c4ai-command-r-plus-GGUF
bartowski
2024-04-15T13:44:50Z
417
7
transformers
[ "transformers", "gguf", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
2024-04-15T04:26:30Z
--- license: cc-by-nc-4.0 library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of c4ai-command-r-plus Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2636">b2636</a> for quantization. Original model: https://huggingface.co/CohereForAI/c4ai-command-r-plus ## Prompt format ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [c4ai-command-r-plus-Q5_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-Q5_K_M.gguf) | Q5_K_M | 73.62GB | High quality, *recommended*. | | [c4ai-command-r-plus-Q5_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-Q5_K_S.gguf) | Q5_K_S | 71.80GB | High quality, *recommended*. | | [c4ai-command-r-plus-Q4_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-Q4_K_M.gguf) | Q4_K_M | 62.75GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [c4ai-command-r-plus-Q4_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-Q4_K_S.gguf) | Q4_K_S | 59.64GB | Slightly lower quality with more space savings, *recommended*. | | [c4ai-command-r-plus-IQ4_NL.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-IQ4_NL.gguf) | IQ4_NL | 59.73GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [c4ai-command-r-plus-IQ4_XS.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-IQ4_XS.gguf) | IQ4_XS | 56.72GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [c4ai-command-r-plus-Q3_K_L.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-Q3_K_L.gguf) | Q3_K_L | 55.40GB | Lower quality but usable, good for low RAM availability. | | [c4ai-command-r-plus-Q3_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/tree/main/c4ai-command-r-plus-Q3_K_M.gguf) | Q3_K_M | 50.98GB | Even lower quality. | | [c4ai-command-r-plus-IQ3_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/blob/main/c4ai-command-r-plus-IQ3_M.gguf) | IQ3_M | 47.68GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [c4ai-command-r-plus-IQ3_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/blob/main/c4ai-command-r-plus-IQ3_S.gguf) | IQ3_S | 45.95GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [c4ai-command-r-plus-Q3_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/blob/main/c4ai-command-r-plus-Q3_K_S.gguf) | Q3_K_S | 45.85GB | Low quality, not recommended. | | [c4ai-command-r-plus-IQ3_XS.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/blob/main/c4ai-command-r-plus-IQ3_XS.gguf) | IQ3_XS | 43.59GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. 
|
| [c4ai-command-r-plus-Q2_K.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-GGUF/blob/main/c4ai-command-r-plus-Q2_K.gguf) | Q2_K | 39.49GB | Very low quality but surprisingly usable. |

## Which file should I choose?

A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with Vulkan, which is also an option on AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
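As a hedged illustration (not from the original card), the prompt format documented above can be passed directly to llama.cpp's CLI once one of the quants has been downloaded. The prompt here stops at the first `<|CHATBOT_TOKEN|>`, which is where generation begins, and the `./main` binary name follows the llama.cpp release referenced at the top of the card; the system and user messages are invented placeholders.

```bash
# Sketch only: run a downloaded quant with llama.cpp, using the documented prompt format.
./main -m c4ai-command-r-plus-Q4_K_M.gguf -n 256 \
  -p "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>You are a helpful assistant.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
```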
ping98k/gemma-han-2b
ping98k
2024-04-16T13:51:45Z
417
0
transformers
[ "transformers", "safetensors", "gguf", "gemma", "text-generation", "conversational", "th", "dataset:pythainlp/han-instruct-dataset-v2.0", "base_model:unsloth/gemma-2b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-15T18:31:25Z
---
base_model: unsloth/gemma-2b
datasets:
- pythainlp/han-instruct-dataset-v2.0
language:
- th
pipeline_tag: text-generation
widget:
- text: "จงแต่งบทกวีเกี่ยวกับสายฝนที่ผ่านมา"
  example_title: "แต่งบทกวี"
- text: "จงเขียนโปรแกรมภาษา python แสดงแม่สูตรคูณแม่ 12 ให้หน่อย"
  example_title: "เขียนโปรแกรม"
- text: "คิดเรื่องสั้นเกี่ยวกับแมวให้หน่อย"
  example_title: "คิดเรื่องสั้น"
- text: "สร้างประโยคโฆษณาที่เกี่ยวข้องกับการนอนให้หน่อย"
  example_title: "สร้างโฆษณา"
inference:
  parameters:
    temperature: 0.5
---

This model was made to test the unsloth finetuning process and the Inference API.

**The model overfits its training data, so it cannot answer anything that is not in the han dataset.**

## prompt

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
จงแต่งบทกวีเกี่ยวกับสายฝนที่ผ่านมา

### Response:

```
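A minimal local-inference sketch (not part of the original card) that feeds the prompt template above to the model with transformers; the sampling temperature follows the card's inference parameters, while the other generation settings are assumptions.

```python
# Sketch only: run ping98k/gemma-han-2b on the documented Alpaca-style prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ping98k/gemma-han-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nจงแต่งบทกวีเกี่ยวกับสายฝนที่ผ่านมา\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```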
mmnga/c4ai-command-r-plus-gguf
mmnga
2024-04-23T16:13:37Z
417
3
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-22T14:46:40Z
--- license: cc-by-nc-4.0 language: - en - ja datasets: - TFMC/imatrix-dataset-for-japanese-llm --- # c4ai-command-r-plus-gguf [CohereForAIさんが公開しているc4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)のggufフォーマット変換版です。 imatrixのデータは[TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm)を使用して作成しました。 ## 分割されたファイルについて q6_kやq8_0のファイルはサイズが大きく分割されているので結合する必要があります。 ~~~bash cat c4ai-command-r-plus-Q5_K_M.gguf.* > c4ai-command-r-plus-Q5_K_M.gguf ~~~ ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'c4ai-command-r-plus-Q4_0.gguf' -p "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>あなたは日本語を話すCommand-Rです<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>こんにちわ<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>" -n 128 ```
adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.2
adalbertojunior
2024-04-22T17:25:55Z
417
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "pt", "dataset:adalbertojunior/dolphin_pt_test", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-22T15:11:41Z
---
datasets:
- adalbertojunior/dolphin_pt_test
language:
- pt
---
## How to use

```
import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    # System: "You are a pirate robot that always answers like a pirate should!"
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    # User: "Who are you?"
    {"role": "user", "content": "Quem é você?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

### Prompt format

```
<|im_start|>system
Você é um assistente útil com respostas curtas.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit
taide
2024-04-29T03:50:47Z
417
35
null
[ "gguf", "license:other", "region:us" ]
null
2024-04-28T01:48:37Z
--- license: other license_name: llama3-taide-models-community-license-agreement license_link: https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view extra_gated_heading: 您需要先同意授權條款才能使用此模型 extra_gated_fields: 姓名(Name): text 生日(Date of birth): date_picker 國家(Country): country 所屬單位(Affiliation): text geo: ip_location 按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox extra_gated_prompt: >- * ### [Llama3-TAIDE 模型社群授權同意書(License)](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view) * ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view) extra_gated_button_content: 送出(Submit) --- * [English Version](./README_en.md) # 模型簡介 * [TAIDE計畫](https://taide.tw/index)致力於開發符合台灣語言和文化特性的生成式人工智慧對話引擎模型,同時建構可信任的人工智慧環境。結合產學研能量,推動可信任生成式人工智慧的發展,提升台灣在國際競爭中的地位,促進產業發展,避免對外國技術的依賴。 * Llama3 TAIDE 系列模型以 Meta 公司釋出的 [LLaMA3-8b](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) 為基礎,導入台灣不同領域可用的文本與訓練素材,提高模型在正體中文回應的能力與特定任務的表現。公開釋出的模型如下: * [Llama3-TAIDE-LX-8B-Chat-Alpha1](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1): 以 LLaMA3-8b 為基礎,使用繁體中文資料預訓練 (continuous pretraining),並透過指令微調(instruction tuning)強化辦公室常用任務和多輪問答對話能力,適合聊天對話或任務協助的使用情境。Llama3-TAIDE-LX-8B-Chat-Alpha1 另外有提供[4 bit 量化模型](https://huggingface.co/taide/Llama3-TAIDE-LX-8B-Chat-Alpha1-4bit),量化模型主要是提供使用者的便利性,可能會影響效能與更多不可預期的問題,還請使用者理解與注意。 # 模型參數 * 參數量: 8B * 最大內容長度 (context length): 8K * 繁中訓練資料 token 量: 43B * 訓練時間: 2336 H100 GPU Hours # 特色 * 嚴格把關模型的訓練資料,提升模型生成資料的可信任性和適用性 * 針對自動摘要、寫信、寫文章、中翻英、英翻中等辦公室常用任務做加強 * 針對台灣在地文化、用語、國情等知識做加強 * 具備多輪問答對話能力 # 應用範例 | 任務 | 使用者輸入 | 模型輸出 | | --- | -------- | ------- | | 寫文章 | 請以以下內容為基礎,寫一篇文章:撰寫一篇作文,題目為《一張舊照片》,內容要求為:選擇一張令你印象深刻的照片,說明令你印象深刻的原因,並描述照片中的影像及背後的故事。記錄成長的過程、與他人的情景、環境變遷和美麗的景色。 | 一張舊照片,承載著時光的痕跡和回憶的溫度。我還記得那張照片是怎麼出現的,那是一張泛黃的黑白相片,裱在牆上,訴說著一段屬於我家族的故事。<br><br>那張照片中,我的祖父母站在田野間,後方是綿延的山巒,前方則是一片金黃的稻田。這幅景象令我印象深刻,因為這是我的童年回憶,也是我成長過程中最美好的風景線。<br><br>我出生在台灣鄉下,從小就生活在這樣一個純樸的環境中。每當我跟著家人在田裡工作時,就會看到這樣的美景。金黃的稻穗隨著微風搖曳,山巒則靜靜地矗立在遠方,給人一種安詳與祥和的感覺。<br><br>那張照片拍攝於民國50年代,當時台灣經濟尚未起飛,農業為主要產業。我的祖父母都是農民,他們辛勤耕種,為我們家提供溫飽。每當我看到那張照片,就會想起他們為我們所做的犧牲和努力。<br><br>除了美麗的風景外,照片中還有一段動人的故事。在我還很小的時候,家裡發生了一場大火,將我們的房子和所有財產都燒光了。幸好全家人都平安無事,但生活卻變得十分困苦。我們被迫搬到另一個村莊,在那裡重新開始生活。<br><br>那張照片也記錄了那段艱辛的歲月。雖然我們失去了許多東西,但家人之間的感情卻更加深厚。我們互相扶持,共同面對生活上的種種挑戰。<br><br>如今,我已長大成人,在都市裡工作。但每當我看到那張舊照片,我都會想起那段簡單而美好的時光。那些美麗的風景、親人的笑容,以及那段充滿挑戰卻又充滿希望的日子,都將永遠鐫刻在我的記憶深處。<br><br>這張舊照片,不僅僅是一張照片,更是一個寶貴的家族史,一個關於我們如何克服困難、珍惜現在的故事。我將把這張照片好好保存下來,讓後代也能夠認識我們的根源,並且銘記那些值得珍藏的美好回憶。 | | 寫信 | 請以品牌經理的身份,給廣告公司的創意總監寫一封信,提出對於新產品廣告宣傳活動的創意建議。 | 
[您的姓名]<br>[您的職稱]<br>[您的公司名稱]<br>[郵件地址]<br>[電話號碼]<br>[發文日期]<br><br>[創意總監姓名]<br>[廣告公司名稱]<br>[廣告公司地址]<br><br>主旨:關於新產品宣傳活動創意發想,請查照。<br><br>尊敬的[創意總監姓名],<br><br>我希望這封信能找到你忙碌中好好休息。我寫這封信,是為了分享我對於我們即將推出的新產品宣傳活動一些創意發想。我深信,這些想法能幫助我們新產品在市場上脫穎而出,吸引消費者的目光。<br><br>首先,我建議我們可以利用社群媒體平台,像是Facebook、Instagram、YouTube等,製作一系列引人入勝的短片和貼文。這些內容可以展現新產品的特色和優點,同時也能傳達出我們品牌重視環保、健康的核心價值。我們可以與網紅或意見領袖合作,讓他們試用並分享他們的使用經驗,這將有助於增加我們產品的可信度和能見度。<br><br>其次,我們可以舉辦新產品試吃/試用活動,邀請民眾親身體驗我們的新產品。我們可以在各大百貨公司、超市、以及我們的門市舉辦這些活動,讓參與者免費試吃/試用新產品,並填寫問卷回饋意見。這種互動的方式,不僅能收集到寶貴的市場反饋,還能讓民眾對我們的品牌產生正面的印象。<br><br>第三,我們可以與當地餐飲業者合作,將我們的新產品納入他們的菜單中。這將使我們的產品接觸到更廣泛的客群,同時也能證明我們的產品在烹飪和美食產業中具有價值。<br><br>第四,我們可以舉辦一場新產品發表會,邀請媒體、業界人士、以及重要客戶參加。在這場活動中,我們可以展示新產品的各項特色和優勢,並由專業廚師示範如何將新產品融入各式料理中。這將是向大眾展示我們品牌創新精神和專業能力的絕佳機會。<br><br>最後,我建議我們可以製作一系列精美的包裝和禮盒,以吸引消費者在節慶和送禮時選擇我們的產品。精緻的包裝將能提升我們產品的價值感,同時也能傳達出我們品牌對細節和品質的堅持。<br><br>我深信,這些創意發想將有助於我們新產品在市場上獲得成功。若您對這些想法有任何意見或建議,請不吝與我分享。我期待著與您合作,共同為[您的公司名稱]的成長和成功而努力。<br><br>謝謝您抽空閱讀我的信,並期待著與您進一步討論這些想法。<br><br>敬禮<br><br>[您的姓名]<br>[您的職稱]<br>[您的公司名稱] | | 摘要 | 請將這篇文章精簡條理化:「產業創新條例第10條之2及第72條條文修正案」俗稱「台版晶片法」,針對半導體、電動車、5G等技術創新且居國際供應鏈關鍵地位公司,提供最高25%營所稅投抵優惠,企業適用要件包含當年度研發費用、研發密度達一定規模,且有效稅率達一定比率。<br>為因應經濟合作暨發展組織(OECD)國家最低稅負制調整,其中有效稅率門檻,民國112年訂為12%,113年料將提高至15%,但仍得審酌國際間最低稅負制實施情形。<br>經濟部官員表示,已和財政部協商進入最後階段,除企業研發密度訂在6%,目前已確認,企業購置先進製程的設備投資金額達100億元以上可抵減。<br>財政部官員表示,研商過程中,針對台灣產業與其在國際間類似的公司進行深入研究,在設備部分,畢竟適用產創10之2的業者是代表台灣隊打「國際盃」,投入金額不達100億元,可能也打不了。<br>至於備受關注的研發費用門檻,經濟部官員表示,歷經與財政部來回密切討論,研發費用門檻有望落在60億至70億元之間。<br>財政部官員指出,研發攸關台灣未來經濟成長動能,門檻不能「高不可攀」,起初雖設定在100億元,之所以會調降,正是盼讓企業覺得有辦法達得到門檻、進而適用租稅優惠,才有動力繼續投入研發,維持國際供應鏈關鍵地位。<br>經濟部官員表示,因廠商研發費用平均為30、40億元,其中,IC設計業者介於30億至60億元範圍,若將門檻訂在100億元,符合條件的業者較少、刺激誘因不足;此外,若符合申請門檻的業者增加,將可提高企業在台投資金額,財政部稅收也能因此獲得挹注。<br>IC設計業者近日頻頻針對產創10之2發聲,希望降低適用門檻,加上各國力拚供應鏈自主化、加碼補助半導體產業,經濟部官員表示,經濟部和財政部就產創10之2達成共識,爭取讓更多業者受惠,盼增強企業投資力道及鞏固台灣技術地位。<br>財政部官員表示,租稅獎勵的制定必須「有為有守」,並以達到獎勵設置目的為最高原則,現階段在打「國內盃」的企業仍可適用產創第10條、10之1的租稅優惠,共同壯大台灣經濟發展。<br>經濟部和財政部正就研發費用門檻做最後確認,待今明兩天預告子法之後,約有30天時間,可與業界進一步討論及調整,盼產創10之2能在6月上路。 | 文章重點摘要如下:<br><br>- 「台版晶片法」或《產業創新條例第10條之2及第72條條文修正案》,旨在鼓勵關鍵技術創新企業,特別是那些在半導體、電動車、5G技術等領域且在國際供應鏈中扮演關鍵角色者。<br><br>- 企業若符合特定條件,包括當年度研發費用及研發密度達標,可享最高25%的營所稅抵減優惠。研發密度目標設為6%,而企業為取得抵減,必須證明其有效稅率達到特定比率,起初設定為112年的12%,113年提高至15%,但可能會依國際稅制的實施情形進行調整。<br><br>- 政府在制定政策時,考量到OECD最低稅負制的調整,因此會視情況調整有效稅率門檻。<br><br>- 政府部門已就法案內容與企業利益相關者進行協商,確定一些細節,例如企業投資先進製程設備的門檻設為100億元新台幣以供抵減。<br><br>- 研發費用門檻經過討論,可能從最初提議的100億元下調至60億至70億元之間,以激勵更多企業符合資格並投入研發,維持台灣在國際供應鏈中的關鍵地位。<br><br>- 經濟部官員表示,調降門檻的目的是讓更多企業覺得能夠達成標準,從而獲得應用租稅優惠的動力,並可能藉此提高在台投資金額,進而增加政府稅收。<br><br>- 由於IC設計業者的研發費用普遍較低,若門檻設得過高,符合條件的企業將很少,因此有必要調整。<br><br>- 在各國推動供應鏈自主化並增加對半導體產業的補助之際,經濟部正努力確保更多企業能從《產業創新條例》第10條之2中受益,以增強投資和鞏固台灣的技術地位。<br><br>- 政府強調租稅優惠的制定應符合獎勵目標,在此同時,也將繼續提供其他租稅優惠給在國內經營的企業,以促進台灣經濟成長。<br><br>- 最終的研發費用門檻將由經濟部和財政部共同確定,在預告相關子法後,將有約30天的時間供各界討論並提出調整建議,目標是讓這項新政策在6月上路。 | | 翻譯 | 以下提供英文內容,請幫我翻譯成中文。Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. 
| 東山咖啡以獨特的位置,加上不斷精進的製作手法而聞名,風味更是讓許多咖啡癡趨之若鶩。 | # 使用說明 * 範例程式 * 範例程式與文件後續會釋出到github * prompt 樣版 * 一般問答用法 ```python chat = [ {"role": "user", "content": "{question}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {question} 替換成使用者的輸入 * 加入 system prompt 的用法 ```python chat = [ {"role": "system", "content": "{sys}"}, {"role": "user", "content": "{question}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {sys} 替換成指令,例如:你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 * 將 {question} 替換成使用者的問題 * 多輪問答用法 ```python chat = [ {"role": "system", "content": "{sys}"}, {"role": "user", "content": "{question1}"}, {"role": "assistant", "content": "{model_anwer_1}"}, {"role": "user", "content": "{question2}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {sys} 替換成指令,例如:你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 * 將 {question1} 替換成使用者的問題1 * 將 {model_anwer_1} 替換成模型的回答1 * 將 {question2} 替換成使用者的問題2 * 更多細節請參考[Llama3 文件](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) # 訓練方法 * 軟硬體規格 * 國網中心 H100 * 訓練框架: PyTorch * 資料前處理 * 字元標準化 * 去除重覆 * 去除雜訊 * 網頁資料的html tag、javascript * 非標準字元或亂碼 * 字數過短的文章 * 去除文章中的特定格式,如為排版增加的換行 * 去除個資,如email、電話 * 去除不當文字,如賭博、色情等 * 持續預訓練 (continuous pretraining, CP) * 補充大量來源可信賴的繁體中文知識 * 超參數 (hyper parameters) * optimizer: AdamW * learning rate: 1e-4 * batch size: 1M tokens * epoch: 1 * 微調 (fine tune, FT) * 讓模型可針對繁體中文提問回答問題 * 超參數 (hyper parameters) * optimizer: AdamW * learning rate: 5e-5 * batch size: 256K tokens * epoch: 3 # 訓練資料 * 持續預訓練資料(資料量約為140G) | 資料集 | 資料描述 | | --- | -------- | | 訴訟資料 | 《[司法院裁判書](https://judgment.judicial.gov.tw/FJUD/default.aspx)》自2013年1月至2023年12月各級法院民事、刑事、行政訴訟資料。 | | 中央社 | 《[中央社中文新聞](https://www.cna.com.tw/)》資料集含中央社自1993年6月至2023年06月,共30年份之每日新聞文章。內容涵蓋國內外政治、社會、財經、文教、生活等領域。 | | ETtoday 新聞雲 | 《[ETtoday新聞雲](https://www.ettoday.net/)》資料,包含自2011年10月至 2023年12月的資料。 | | 立法院公報 | 《[立法院公報](https://ppg.ly.gov.tw/ppg/)》包含自第8屆第1會期至第10屆第7會期之公報資料。 | | 出版商網站書籍介紹 | 包含[三采](https://www.suncolor.com.tw/)、[Gotop](https://www.gotop.com.tw/)出版商網站上的書籍簡介。 | | GRB 研究計畫摘要 | [GRB](https://www.grb.gov.tw/)為收錄由政府經費補助之研究計畫及其成果報告的資訊系統,此資料集主要收錄 1993年至 2023年之研究計畫摘要以及研究報告摘要,含中文及其英文對照。 | | 學術會議論文摘要 | 收錄《[學術會議論文摘要資料庫](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:)》中自1988至2009年由台灣所舉辦之學術會議論文。 | | 光華雜誌 | 《[台灣光華雜誌](https://www.taiwan-panorama.com/)》含自1993年7月至2023年6月的文章,共30年份。內容著重於我國文化、觀光與民情等。 | | 樂詞網 | 《[樂詞網](https://terms.naer.edu.tw/)》涵蓋文理領域約187萬則學術名詞及其譯名對照。 | | 各部會資料 | 包含行政院「[國情簡介](https://www.ey.gov.tw/state/)」、文化部「[國家文化記憶庫](https://memory.culture.tw/)」、國發會「[檔案支援教學網](https://art.archives.gov.tw/index.aspx)」、交通部「[交通安全入口網](https://168.motc.gov.tw/)」等部會網站資料之部分資料。 | | 今周刊 | 《[今周刊](https://www.businesstoday.com.tw/)》為一以財經為主的週刊雜誌,此資料集涵蓋2008年1月至2023年7月的文章。 | | 教育部國語辭典、成語辭典 | 包含以下三項資料:<br>[教育部《成語典》](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0),含5,338條成語,內容包含每條成語的釋義、典故原文及其白話說明、用法說明、例句等。<br>[教育部《重編國語辭典修訂本》](https://dict.revised.moe.edu.tw/?la=0&powerMode=0),收錄中文單字及各類辭彙,包含讀音、部首、釋義等資訊,共約165,539筆資料。<br>[教育部《國語辭典簡編本》](https://dict.concised.moe.edu.tw/?la=0&powerMode=0),為《重編國語辭典修訂本》的簡編版本,共45,247筆資料。 | | 科技大觀園資料 | 含《[科技大觀園網站](https://scitechvista.nat.gov.tw/)》上的科學新知以及科普文章。 | | iKnow 科技產業資訊室 | 《[科技產業資訊室](https://iknow.stpi.narl.org.tw/)(iKnow)》提供台灣及全球的科技市場趨勢、策略分析、專利知識,及技術交易資訊,專注於科技產業的創新與發展,包含自 2008 年至 2023 年。 | | 科學發展月刊 | 《[科學發展月刊](https://ejournal.stpi.narl.org.tw/sd)》為國科會為推廣科學教育而出版的科普刊物,含自2004年10月至2020年12月之科普文章;2021年起,以《[科技魅癮](https://www.charmingscitech.nat.gov.tw/)》季刊重新出發,提供國際關注科技議題的新知文章。 | | 法規資料庫 | 《[法規資料庫](https://law.moj.gov.tw/)》含截自 112 
年 10 月各政府部門最新發布之中央法規、行政規則、法規命令草案及地方自治法規等。 | | 各地政府旅遊網 | 涵蓋台灣部分縣市地方政府觀光旅遊網站上之部分資料。 | | 國教院課程綱要(十二年國教) | 含十二年國教課程綱要之總綱以及各級學校不同科目之課程綱要。 | | 中央社譯名檔資料庫 | 《中央社譯名檔資料庫》蒐集中央社新聞業務上翻譯過的中外姓氏、人名、組織、地名等譯名對照。 | | 童話書 | 共 20 本童話書,含湯姆歷險記、小飛俠、愛麗絲夢遊仙境、長腿叔叔等。 | | RedPajama-Data-V2 | 從國外開放多國語言語料庫 [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data) 取出英文資料 | | MathPile-commercial | 國外開放數學語料庫 [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial) | | 中文維基百科 | 《[中文維基百科](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91)》截至2023年1月所有條目的內容。 | | github-code-clean | 為 github 開源程式碼資料集,去除unlicense的程式碼和文件。 | * 微調資料 * TAIDE團隊訓練llama2系列模型來產生微調資料資料,產生的任務包含世界知識、創意寫作、普通常識、翻譯、摘要、程式、台灣價值等單輪或多輪對話問答共 128K 筆。微調資料後續會對外釋出。 # 模型評測 * taide-bench * 評測資料 * 寫文章、寫信、摘要、英翻中、中翻英,共500題 * 資料連結: [taide-bench](https://huggingface.co/datasets/taide/taide-bench) * 評測方法 * gpt4評分 * 評分程式: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval) * 評測分數 | 模型 | 中翻英 | 英翻中 | 摘要 | 寫文章 | 寫信 | 平均 | | --- | ----- | ----- | ---- | ---- | ---- | --- | | Llama3-TAIDE-LX-8B-Chat-Alpha1 | 7.770 | 8.280 | 8.495 | 9.605 | 8.950 | 8.620 | | GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 | | TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 | | LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 | | LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 | | LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 | # 授權條款 * [Llama3-TAIDE 模型社群授權同意書](https://drive.google.com/file/d/12-Q0WWSjG0DW6CqJQm_jr5wUGRLeb-8p/view) # 免責聲明 * LLM 模型由於設計架構的限制,以及資料難免有偏誤,語言模型的任何回應不代表 TAIDE 立場,使用前需要額外加入安全防護機制,且回應內容也可能包含不正確的資訊,使用者請勿盡信。 # 開發團隊 * [https://taide.tw/index/teamList](https://taide.tw/index/teamList) # 相關連結 * [TAIDE官網](https://taide.tw/index) * [TAIDE Huggingface](https://huggingface.co/taide) * [TAIDE Github](https://github.com/taide-taiwan) * [Kuwa AI](https://kuwaai.org/) # Citation * [TAIDE官網](https://taide.tw/index)
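As a supplement to the usage notes (使用說明) above, here is a small sketch of my own for rendering the multi-turn template into a plain prompt string. It assumes the tokenizer of the non-quantized sibling repo, taide/Llama3-TAIDE-LX-8B-Chat-Alpha1, is accessible (the repos are gated); the user question is an arbitrary placeholder, and the rendered string can then be fed to any llama.cpp-based runtime together with the 4-bit GGUF file from this repo.

```python
from transformers import AutoTokenizer

# Assumption: use the tokenizer of the non-quantized repo for chat templating.
tokenizer = AutoTokenizer.from_pretrained("taide/Llama3-TAIDE-LX-8B-Chat-Alpha1")

chat = [
    {"role": "system", "content": "你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。"},
    {"role": "user", "content": "請簡單介紹台灣。"},  # placeholder question
]

# tokenize=False returns the formatted string instead of token ids;
# add_generation_prompt=True appends the assistant header so the model starts answering.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
```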
mradermacher/OpenCarrot-llama3-Mix-8B-GGUF
mradermacher
2024-05-07T16:01:48Z
417
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "ko", "en", "base_model:CarrotAI/OpenCarrot-llama3-Mix-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-30T16:29:29Z
---
base_model: CarrotAI/OpenCarrot-llama3-Mix-8B
language:
- ko
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/CarrotAI/OpenCarrot-llama3-Mix-8B

<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/OpenCarrot-llama3-Mix-8B-GGUF/resolve/main/OpenCarrot-llama3-Mix-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
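As a concrete (unofficial) example of using one of the quants listed above, the sketch below downloads the Q4_K_M file named in the table and loads it with llama-cpp-python. This is my illustration rather than something maintained by the quantizer; the prompt is an arbitrary placeholder.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# File name taken from the Q4_K_M row in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/OpenCarrot-llama3-Mix-8B-GGUF",
    filename="OpenCarrot-llama3-Mix-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=8192, n_gpu_layers=-1)  # -1 offloads all layers to GPU if available
print(llm("Hello! Please introduce yourself.", max_tokens=64)["choices"][0]["text"])
```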
RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf
RichardErkhov
2024-05-30T08:26:37Z
417
0
null
[ "gguf", "region:us" ]
null
2024-05-30T05:31:51Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) CollectiveCognition-v1.1-Nebula-7B - GGUF - Model creator: https://huggingface.co/Weyaxi/ - Original model: https://huggingface.co/Weyaxi/CollectiveCognition-v1.1-Nebula-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [CollectiveCognition-v1.1-Nebula-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [CollectiveCognition-v1.1-Nebula-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [CollectiveCognition-v1.1-Nebula-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [CollectiveCognition-v1.1-Nebula-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [CollectiveCognition-v1.1-Nebula-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [CollectiveCognition-v1.1-Nebula-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [CollectiveCognition-v1.1-Nebula-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [CollectiveCognition-v1.1-Nebula-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [CollectiveCognition-v1.1-Nebula-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [CollectiveCognition-v1.1-Nebula-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [CollectiveCognition-v1.1-Nebula-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [CollectiveCognition-v1.1-Nebula-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [CollectiveCognition-v1.1-Nebula-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [CollectiveCognition-v1.1-Nebula-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | 
[CollectiveCognition-v1.1-Nebula-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [CollectiveCognition-v1.1-Nebula-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [CollectiveCognition-v1.1-Nebula-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [CollectiveCognition-v1.1-Nebula-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [CollectiveCognition-v1.1-Nebula-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [CollectiveCognition-v1.1-Nebula-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [CollectiveCognition-v1.1-Nebula-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [CollectiveCognition-v1.1-Nebula-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_CollectiveCognition-v1.1-Nebula-7B-gguf/blob/main/CollectiveCognition-v1.1-Nebula-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: cc-by-nc-4.0 datasets: - garage-bAInd/Open-Platypus language: - en --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> # OpenOrca-Nebula-7B OpenOrca-Nebula-7B is a merge of [teknium/CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B) and [PulsarAI/Nebula-7B](https://huggingface.co/Weyaxi/PulsarAI/Nebula-7B) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PulsarAI__CollectiveCognition-v1.1-Nebula-7B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 53.79 | | ARC (25-shot) | 58.11 | | HellaSwag (10-shot) | 82.39 | | MMLU (5-shot) | 57.03 | | TruthfulQA (0-shot) | 53.53 | | Winogrande (5-shot) | 73.72 | | GSM8K (5-shot) | 9.55 | | DROP (3-shot) | 42.17 |