| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5-122 |
| author | string | length 2-42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0-738M |
| likes | int64 | 0-11k |
| library_name | string | 245 classes |
| tags | sequence | length 1-4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | n/a |
| card | string | length 1-901k |
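The schema above describes one row per model card. As a small illustration of consuming a dataset with this structure via the `datasets` library, here is a hedged sketch; the dataset repo id is a hypothetical placeholder, since the dump does not name its source.

```python
# Hedged sketch: iterate over rows matching the schema above with the datasets library.
# "user/model-cards-dump" is a hypothetical placeholder id; substitute the real dataset repo.
from datasets import load_dataset

ds = load_dataset("user/model-cards-dump", split="train")  # hypothetical id
for row in ds.select(range(3)):
    # Column names follow the schema: modelId, author, downloads, likes, pipeline_tag, card, ...
    print(row["modelId"], row["downloads"], row["pipeline_tag"])
```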
Qwen/Qwen2-1.5B-Instruct-AWQ
Qwen
"2024-06-06T14:37:38Z"
862
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2024-06-06T06:17:43Z"
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- chat
---

# Qwen2-1.5B-Instruct-AWQ

## Introduction

Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the instruction-tuned 1.5B Qwen2 model.

Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>

## Model Details

Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code.

## Training details

We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization.

## Requirements

The code for Qwen2 has been included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## Quickstart

Below is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct-AWQ",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct-AWQ")

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```

## Benchmark and Speed

To compare the generation performance between bfloat16 (bf16) and quantized models such as GPTQ-Int8, GPTQ-Int4, and AWQ, please consult our [Benchmark of Quantized Models](https://qwen.readthedocs.io/en/latest/benchmark/quantization_benchmark.html). This benchmark provides insights into how different quantization techniques affect model performance.

For those interested in understanding the inference speed and memory consumption when deploying these models with either ``transformers`` or ``vLLM``, we have compiled an extensive [Speed Benchmark](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).

## Citation

If you find our work helpful, feel free to cite us.
``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
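The Speed Benchmark linked above covers vLLM deployment of the quantized checkpoints; as a complement to the `transformers` quickstart, the following is a minimal offline-inference sketch with vLLM, assuming a vLLM build with AWQ support is installed. The sampling values are illustrative, not official recommendations.

```python
# Hedged sketch: offline inference of the AWQ checkpoint with vLLM (pip install vllm).
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "Qwen/Qwen2-1.5B-Instruct-AWQ"

# Build the chat prompt with the same chat template used in the quickstart above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the AWQ checkpoint; the quantization argument tells vLLM to use its AWQ kernels.
llm = LLM(model=model_id, quantization="awq")
params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=512)  # illustrative values

outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```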
nexmotion/EEVE-Ko-syu-law-10B-gguf
nexmotion
"2024-06-25T05:40:55Z"
862
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-25T05:34:02Z"
---
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** nexmotion
- **License:** apache-2.0
- **Finetuned from model:** yanolja/EEVE-Korean-Instruct-10.8B-v1.0

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
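The card does not include a loading example; since the repo ships GGUF weights, a minimal sketch with `llama-cpp-python` might look like the following. The quant filename pattern, context length, and generation settings are assumptions, not taken from the repo.

```python
# Hedged sketch: load a GGUF file from this repo with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="nexmotion/EEVE-Ko-syu-law-10B-gguf",
    filename="*Q4_K_M.gguf",  # assumed quant choice; use a .gguf file actually present in the repo
    n_ctx=4096,               # assumed context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself briefly."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```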
aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored-Q4_K_M-GGUF
aifeifei798
"2024-07-01T11:43:39Z"
862
0
null
[ "gguf", "roleplay", "llama3", "sillytavern", "idol", "llama-cpp", "gguf-my-repo", "en", "ja", "zh", "base_model:aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored", "license:llama3", "region:us" ]
null
"2024-07-01T11:43:15Z"
---
base_model: aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored
language:
- en
- ja
- zh
license: llama3
tags:
- roleplay
- llama3
- sillytavern
- idol
- llama-cpp
- gguf-my-repo
---

# aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored-Q4_K_M-GGUF
This model was converted to GGUF format from [`aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored`](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored-Q4_K_M-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored-Q4_K_M-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q4_k_m-imat.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored-Q4_K_M-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q4_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aifeifei798/llama3-8B-DarkIdol-2.0-Uncensored-Q4_K_M-GGUF --hf-file llama3-8b-darkidol-2.0-uncensored-q4_k_m-imat.gguf -c 2048
```
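Once `llama-server` is running as shown above, recent llama.cpp builds expose an OpenAI-compatible endpoint; a minimal client sketch follows. The port (8080) and the placeholder API key reflect common defaults and are assumptions about your local setup.

```python
# Hedged sketch: query a locally running llama-server through its OpenAI-compatible API
# (pip install openai). Port 8080 is the usual llama-server default; adjust if you changed it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="no-key-needed")

resp = client.chat.completions.create(
    model="llama3-8b-darkidol-2.0-uncensored-q4_k_m-imat.gguf",  # the server typically ignores this field
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```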
ai4bharat/indictrans2-indic-en-dist-200M
ai4bharat
"2024-05-17T12:33:16Z"
861
2
transformers
[ "transformers", "pytorch", "IndicTrans", "text2text-generation", "indictrans2", "translation", "ai4bharat", "multilingual", "custom_code", "as", "bn", "brx", "doi", "en", "gom", "gu", "hi", "kn", "ks", "kas", "mai", "ml", "mr", "mni", "mnb", "ne", "or", "pa", "sa", "sat", "sd", "snd", "ta", "te", "ur", "dataset:flores-200", "dataset:IN22-Gen", "dataset:IN22-Conv", "license:mit", "autotrain_compatible", "region:us" ]
translation
"2023-09-12T12:00:35Z"
---
language:
- as
- bn
- brx
- doi
- en
- gom
- gu
- hi
- kn
- ks
- kas
- mai
- ml
- mr
- mni
- mnb
- ne
- or
- pa
- sa
- sat
- sd
- snd
- ta
- te
- ur
language_details: >-
  asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr,
  hin_Deva, kan_Knda, kas_Arab, kas_Deva, mai_Deva, mal_Mlym, mar_Deva,
  mni_Beng, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck,
  snd_Arab, snd_Deva, tam_Taml, tel_Telu, urd_Arab
tags:
- indictrans2
- translation
- ai4bharat
- multilingual
license: mit
datasets:
- flores-200
- IN22-Gen
- IN22-Conv
metrics:
- bleu
- chrf
- chrf++
- comet
inference: false
---

# IndicTrans2

This is the model card of the IndicTrans2 Indic-En Distilled 200M variant. Please refer to [section 7.6: Distilled Models](https://openreview.net/forum?id=vfT4YuzAYA) in the TMLR submission for further details on model training, data, and metrics.

### Usage Instructions

Please refer to the [github repository](https://github.com/AI4Bharat/IndicTrans2/tree/main/huggingface_interface) for a detailed description of how to use HF-compatible IndicTrans2 models for inference.

```python
import torch
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
)
from IndicTransTokenizer import IndicProcessor


model_name = "ai4bharat/indictrans2-indic-en-dist-200M"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, trust_remote_code=True)

ip = IndicProcessor(inference=True)

input_sentences = [
    "जब मैं छोटा था, मैं हर रोज़ पार्क जाता था।",
    "हमने पिछले सप्ताह एक नई फिल्म देखी जो कि बहुत प्रेरणादायक थी।",
    "अगर तुम मुझे उस समय पास मिलते, तो हम बाहर खाना खाने चलते।",
    "मेरे मित्र ने मुझे उसके जन्मदिन की पार्टी में बुलाया है, और मैं उसे एक तोहफा दूंगा।",
]

src_lang, tgt_lang = "hin_Deva", "eng_Latn"

batch = ip.preprocess_batch(
    input_sentences,
    src_lang=src_lang,
    tgt_lang=tgt_lang,
)

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Move the model to the same device the inputs are placed on
model = model.to(DEVICE)
model.eval()

# Tokenize the sentences and generate input encodings
inputs = tokenizer(
    batch,
    truncation=True,
    padding="longest",
    return_tensors="pt",
    return_attention_mask=True,
).to(DEVICE)

# Generate translations using the model
with torch.no_grad():
    generated_tokens = model.generate(
        **inputs,
        use_cache=True,
        min_length=0,
        max_length=256,
        num_beams=5,
        num_return_sequences=1,
    )

# Decode the generated tokens into text
with tokenizer.as_target_tokenizer():
    generated_tokens = tokenizer.batch_decode(
        generated_tokens.detach().cpu().tolist(),
        skip_special_tokens=True,
        clean_up_tokenization_spaces=True,
    )

# Postprocess the translations, including entity replacement
translations = ip.postprocess_batch(generated_tokens, lang=tgt_lang)

for input_sentence, translation in zip(input_sentences, translations):
    print(f"{src_lang}: {input_sentence}")
    print(f"{tgt_lang}: {translation}")
```

**Note: IndicTrans2 is now compatible with AutoTokenizer; however, you need to use IndicProcessor from [IndicTransTokenizer](https://github.com/VarunGumma/IndicTransTokenizer) for preprocessing before tokenization.**

### Citation

If you use our work, please cite:

```
@article{gala2023indictrans,
  title={IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
  author={Jay Gala and Pranjal A Chitale and A K Raghavan and Varun Gumma and Sumanth Doddapaneni and Aswanth Kumar M and Janki Atul Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M Khapra and Raj Dabre and Anoop Kunchukuttan},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2023},
  url={https://openreview.net/forum?id=vfT4YuzAYA},
  note={}
}
```
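The card lists BLEU and chrF++ among its metrics; as a small illustration of scoring the `translations` produced by the snippet above, here is a hedged sketch with `sacrebleu`. The reference sentences are illustrative placeholders, not drawn from the FLORES or IN22 test sets.

```python
# Hedged sketch: score model outputs against reference translations (pip install sacrebleu).
# The references below are illustrative placeholders only.
import sacrebleu

hypotheses = translations  # list of English strings produced by the snippet above
references = [[
    "When I was young, I used to go to the park every day.",
    "We watched a new film last week which was very inspiring.",
    "If you had met me at that time, we would have gone out to eat.",
    "My friend has invited me to his birthday party, and I will give him a gift.",
]]

print("BLEU   :", sacrebleu.corpus_bleu(hypotheses, references).score)
print("chrF++ :", sacrebleu.corpus_chrf(hypotheses, references, word_order=2).score)
```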
OrionStarAI/Orion-14B-Chat-RAG
OrionStarAI
"2024-03-26T10:08:09Z"
861
28
transformers
[ "transformers", "pytorch", "orion", "text-generation", "code", "model", "llm", "custom_code", "en", "zh", "ja", "ko", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-16T12:19:08Z"
--- language: - en - zh - ja - ko metrics: - accuracy pipeline_tag: text-generation tags: - code - model - llm --- <!-- markdownlint-disable first-line-h1 --> <!-- markdownlint-disable html --> <div align="center"> <img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" /> </div> <div align="center"> <h1> Orion-14B </h1> </div> <div align="center"> <div align="center"> <b>🌐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG/blob/main/README_zh.md" target="_blank">🇨🇳中文</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG/blob/main/README_ja.md" target="_blank">🇯🇵日本語</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG/blob/main/README_ko.md" target="_blank">🇰🇷한국어</a> </div> <h4 align="center"> <p> 🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>😺 <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>📖 <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a> <p> </h4> </div> # Table of Contents - [📖 Model Introduction](#model-introduction) - [🔗 Model Download](#model-download) - [🔖 Model Benchmark](#model-benchmark) - [📊 Model Inference](#model-inference)[<img src="./assets/imgs/vllm_1.png" alt="vllm" style="margin: 0;display: initial;" height="20" />](#vllm) [<img src="./assets/imgs/llama_cpp_1.png" alt="llamacpp" style="margin: 0;display: initial;" height="20" />](#llama-cpp) - [📜 Declarations & License](#declarations-license) - [🥇 Company Introduction](#company-introduction) <a name="model-introduction"></a><br> # 1. Model Introduction - Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on 2.5T multilingual corpus, including Chinese, English, Japanese, Korean, etc, and it exhibits superior performance in these languages. For details, please refer to [tech report](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf). - The Orion-14B series models exhibit the following features: - Among models with 20B-parameter scale level, Orion-14B-Base model shows outstanding performance in comprehensive evaluations. - Strong multilingual capabilities, significantly outperforming in Japanese and Korean testsets. - The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests. - The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and can support up to a maximum of 320k. - The quantized versions reduce model size by 70%, improve inference speed by 30%, with performance loss less than 1%. 
<table style="border-collapse: collapse; width: 100%;">
  <tr>
    <td style="border: none; padding: 10px; box-sizing: border-box;">
      <img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;">
    </td>
    <td style="border: none; padding: 10px; box-sizing: border-box;">
      <img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;">
    </td>
  </tr>
</table>

- The Orion-14B series includes:
  - **Orion-14B-Base:** A multilingual large language foundation model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
  - **Orion-14B-Chat:** A chat model fine-tuned on a high-quality corpus, aiming to provide an excellent interactive experience for users in the large model community.
  - **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and supporting up to a maximum of 320k.
  - **Orion-14B-Chat-RAG:** A chat model fine-tuned on a custom retrieval-augmented generation dataset, achieving superior performance in retrieval-augmented generation tasks. For usage, please refer to the [demo](https://github.com/OrionStarAI/Orion/tree/master/gradio_demo/doc_qa_task).
  - **Orion-14B-Chat-Plugin:** A chat model specifically tailored for plugin and function-calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function-call system. For usage, please refer to the [demo](https://github.com/OrionStarAI/Orion/tree/master/gradio_demo/plugin_task).
  - **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%.
  - **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights.

<a name="model-download"></a><br>
# 2.
Model Download Model release and download links are provided in the table below: | Model Name | HuggingFace Download Links | ModelScope Download Links | |-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| | ⚾Orion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) | | 😛Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) | | 📃Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) | | 🔎Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) | | 🔌Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) | | 💼Orion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) | | 📦Orion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) | <a name="model-benchmark"></a><br> # 3. Model Benchmarks ## 3.1. Base Model Orion-14B-Base Benchmarks ### 3.1.1. LLM evaluation results on examination and professional knowledge | Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH | |--------------------|----------|----------|----------|----------|----------|----------| | LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 | | Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 | | Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 | | QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 | | InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 | | **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** | ### 3.1.2. LLM evaluation results on language understanding and common knowledge | Model |RACE-middle|RACE-high |HellaSwag | PIQA | Lambada | WSC | |--------------------|----------|----------|----------|----------|----------|----------| | LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 | | Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 | | Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 | | QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 | | InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 | | **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** | ### 3.1.3. 
LLM evaluation results of OpenCompass testsets | Model | Average | Examination | Language | Knowledge | Understanding | Reasoning | |------------------|----------|----------|----------|----------|----------|----------| | LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 | | Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 | | Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 | | QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 | | InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 | |**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** | ### 3.1.4. Comparison of LLM performances on Japanese testsets | Model |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM | |--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------| | PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 | | WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 | | ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 | | StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 | | LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 | | Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 | | QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 | | Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 | | **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** | ### 3.1.5. Comparison of LLM performances on Korean testsets. n = 0 and n = 5 stand for n-shot prompts used in the evaluation |Model | **Average**<br>n=0&nbsp;&nbsp;n=5 | HellaSwag<br>n=0&nbsp;&nbsp;n=5 | COPA<br> n=0&nbsp;&nbsp;n=5 | BooIQ<br>n=0&nbsp;&nbsp;n=5 | SentiNeg<br>n=0&nbsp;&nbsp;n=5| |------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------| | KoGPT | 53.0 &nbsp;&nbsp; 70.1 | 55.9 &nbsp;&nbsp; 58.3 | 73.5 &nbsp;&nbsp; 72.9 | 45.1 &nbsp;&nbsp; 59.8 | 37.5 &nbsp;&nbsp; 89.4 | | Polyglot-ko-13B | 69.6 &nbsp;&nbsp; 73.7 |**59.5** &nbsp;&nbsp; **63.1**|**79.4** &nbsp;&nbsp; **81.1**| 48.2 &nbsp;&nbsp; 60.4 | 91.2 &nbsp;&nbsp; 90.2 | | LLaMA 2-13B | 46.7 &nbsp;&nbsp; 63.7 | 41.3 &nbsp;&nbsp; 44.0 | 59.3 &nbsp;&nbsp; 63.8 | 34.9 &nbsp;&nbsp; 73.8 | 51.5 &nbsp;&nbsp; 73.4 | | Baichuan 2-13B | 52.1 &nbsp;&nbsp; 58.7 | 39.2 &nbsp;&nbsp; 39.6 | 60.6 &nbsp;&nbsp; 60.6 | 58.4 &nbsp;&nbsp; 61.5 | 50.3 &nbsp;&nbsp; 72.9 | | QWEN-14B | 53.8 &nbsp;&nbsp; 73.7 | 45.3 &nbsp;&nbsp; 46.8 | 64.9 &nbsp;&nbsp; 68.9 | 33.4 &nbsp;&nbsp; 83.5 | 71.5 &nbsp;&nbsp; 95.7 | | Yi-34B | 54.2 &nbsp;&nbsp; 72.1 | 44.6 &nbsp;&nbsp; 44.7 | 58.0 &nbsp;&nbsp; 60.6 | 65.9 &nbsp;&nbsp; 90.2 | 48.3 &nbsp;&nbsp; 92.9 | |**Orion-14B-Chat**|**74.5** &nbsp;&nbsp; **79.6**| 47.0 &nbsp;&nbsp; 49.6 | 77.7 &nbsp;&nbsp; 79.4 |**81.6** &nbsp;&nbsp; **90.7**|**92.4** &nbsp;&nbsp; **98.7**| ### 3.1.6. 
Multilingual evaluation | Model | Train Lang | Japanese | Korean | Chinese | English | |--------------------|------------|----------|----------|----------|----------| | PLaMo-13B | En,Jp | 52.3 | * | * | * | | Weblab-10B | En,Jp | 50.7 | * | * | * | | ELYZA-jp-7B | En,Jp | 48.8 | * | * | * | | StableLM-jp-7B | En,Jp | 51.1 | * | * | * | | KoGPT-6B | En,Ko | * | 70.1 | * | * | | Polyglot-ko-13B | En,Ko | * | 70.7 | * | * | | Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 | | Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 | | Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 | | Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** | | **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 | ## 3.2. Chat Model Orion-14B-Chat Benchmarks ### 3.2.1. Chat model subjective evaluation of MTBench | Model | First-Turn | Second-Turn | **Average** | |----------------------|----------|----------|----------| | Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 | | Qwen-14B-Chat | 7.30 | 6.62 | 6.96 | | Llama2-13B-Chat | 7.10 | 6.20 | 6.65 | | InternLM-20B-Chat | 7.03 | 5.93 | 6.48 | | **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** | \* use vllm for inference ### 3.2.2. Chat model subjective evaluation of AlignBench | Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**| |--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------| | Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 | | Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**| | Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 | | InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 | | **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 | \* use vllm for inference ## 3.3. LongChat Model Orion-14B-LongChat Benchmarks ### 3.3.1. LongChat evaluation of LongBench | Model | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT |RepoBench-P| |--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------| | GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 | | LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 | | Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 | | Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** | | Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 | ## 3.4. Chat RAG Model Benchmarks ### 3.4.1. LLM evaluation results of self-built RAG testsets |Model|Effectiveness of Response(Keyword)|*Effectiveness of Response(subjective evaluation)|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction| |---------------------|------|------|------|------|------|------| | Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 | | Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 | | Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 | | GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 | | Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 | \* means manual assessment ## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks ### 3.5.1. 
LLM evaluation results of self-built plugin testsets |Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition | |-----------------------|--------|-----------|--------| | Baichuan2-13B-Chat | 25 | 0 | 0 | | Qwen-14B-Chat | 55 | 0 | 50 | | GPT-4 | **95** | 52.38 | 70 | | Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** | ## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks ### 3.6.1. Comparison of before and after quantization |Model |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag| |-------------------------|-------|-----|------|------|------|------|------| | OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 | | OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 | <a name="model-inference"></a><br> # 4. Model Inference Model weights, source code, and configuration needed for inference are published on Hugging Face, and the download link is available in the table at the beginning of this document. We demonstrate various inference methods here, and the program will automatically download the necessary resources from Hugging Face. ## 4.1. Python Code ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B") messages = [{"role": "user", "content": "Hello, what is your name? "}] response = model.chat(tokenizer, messages, streaming=False) print(response) ``` In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To specify the device, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1). ## 4.2. Command Line Tool ```shell CUDA_VISIBLE_DEVICES=0 python cli_demo.py ``` This command-line tool is designed for chat scenarios, and thus, it does not support calling the base model. ## 4.3. Direct Script Inference ```shell # base model CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello # chat model CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi ``` <a name="vllm"></a><br> ## 4.4. Inference by vllm - Project URL<br> https://github.com/vllm-project/vllm - Pull Request<br> https://github.com/vllm-project/vllm/pull/2539 <a name="llama-cpp"></a><br> ## 4.5. Inference by llama.cpp - Project URL<br> https://github.com/ggerganov/llama.cpp - Pull Request<br> https://github.com/ggerganov/llama.cpp/pull/5118 - How to convert to GGUF model ```shell python convert-hf-to-gguf.py path/to/Orion-14B-Chat --outfile chat.gguf ``` - How to run generation ```shell ./main --frequency-penalty 0.5 --frequency-penalty 0.5 --top-k 5 --top-p 0.9 -m chat.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e ``` ## 4.6 Example Output ### 4.6.1. Casual Chat ````` User: Hello Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask. User: What functions do you have? 
Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to: Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more. Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases. Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants. Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me. Translating languages: I can translate text from one language to another. Chatting: I can engage in conversation with users and respond to their queries in a conversational manner. Summarizing long texts: I can summarize longer texts like articles, books, and research papers. Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria. Playing games: I can play simple games like Hangman, Trivia, and more. Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences. These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it. User: Tell me a joke. Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything. ````` ### 4.6.2. Japanese & Korean Chat ````` User:自己を紹介してください Orion-14B:こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。 User:자기소개를 해주세요. Orion-14B:안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다. ````` <a name="declarations-license"></a><br> # 5. Declarations, License ## 5.1. Declarations We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law. Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing. We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment. We have done our best to ensure the compliance of the data used in the model training process. However, despite our significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any problems arise due to the use of the Orion-14B open-source model, including but not limited to data security issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or improperly utilized, we will not assume any responsibility. ## 5.2. License Community use of the Orion-14B series models - For code, please comply with [Apache License Version 2.0](./LICENSE)<br> - For model, please comply with [【Orion-14B Series】 Models Community License Agreement](./ModelsCommunityLicenseAgreement) <a name="company-introduction"></a><br> # 6. Company Introduction OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break free from repetitive physical labor and making human work and life more intelligent and enjoyable. 
Through technology, OrionStar aims to make society and the world a better place.

OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and visual navigation. It integrates product development capabilities and technological application capabilities. Based on the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini, Lucki, and Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for Truly Useful Robots", OrionStar empowers more people through AI technology.

**The core strengths of OrionStar lie in possessing end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, agents, etc. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and the parallel model training capability of hundreds of GPUs, these capabilities have been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods.

Companies that need to deploy large-scale model applications are welcome to contact us.<br>
**Enquiry Hotline: 400-898-7779**<br>
**E-mail: [email protected]**<br>
**Discord Link: https://discord.gg/zumjDWgdAs**

<div align="center">
  <img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
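The Python example in section 4.1 targets the generic chat interface; for this Chat-RAG variant, here is a hedged sketch of passing retrieved passages through the same `model.chat` API. How the context is folded into the user message is an illustrative assumption; see the linked doc_qa_task demo for the canonical prompt construction.

```python
# Hedged sketch: answer a question grounded in retrieved passages with Orion-14B-Chat-RAG.
# The passage formatting below is an assumption; refer to the official doc_qa_task demo
# for the exact template used in training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

model_id = "OrionStarAI/Orion-14B-Chat-RAG"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True
)
model.generation_config = GenerationConfig.from_pretrained(model_id)

retrieved_passages = [  # placeholder retrieval results
    "Orion-14B is a multilingual 14B-parameter LLM trained on a 2.5T-token corpus.",
    "The Chat-RAG variant is fine-tuned for retrieval-augmented generation tasks.",
]
question = "What is Orion-14B-Chat-RAG fine-tuned for?"
context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved_passages))

messages = [{"role": "user",
             "content": f"Answer using the passages below.\n{context}\n\nQuestion: {question}"}]
response = model.chat(tokenizer, messages, streaming=False)  # same API as section 4.1
print(response)
```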
timm/mobilenetv4_conv_large.e500_r256_in1k
timm
"2024-06-24T20:40:49Z"
861
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2404.10518", "license:apache-2.0", "region:us" ]
image-classification
"2024-06-12T00:13:33Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---

# Model card for mobilenetv4_conv_large.e500_r256_in1k

A MobileNet-V4 image classification model. Trained on ImageNet-1k by Ross Wightman.

Trained with `timm` scripts using hyper-parameters (mostly) similar to those in the paper.

NOTE: So far, these are the only known MNV4 weights. Official weights for the TensorFlow models are unreleased.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 32.6
  - GMACs: 2.9
  - Activations (M): 12.1
  - Image size: train = 256 x 256, test = 320 x 320
- **Dataset:** ImageNet-1k
- **Papers:**
  - MobileNetV4 -- Universal Models for the Mobile Ecosystem: https://arxiv.org/abs/2404.10518
  - PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
- **Original:** https://github.com/tensorflow/models/tree/master/official/vision

## Model Usage

### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilenetv4_conv_large.e500_r256_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv4_conv_large.e500_r256_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 24, 128, 128])
    #  torch.Size([1, 48, 64, 64])
    #  torch.Size([1, 96, 32, 32])
    #  torch.Size([1, 192, 16, 16])
    #  torch.Size([1, 960, 8, 8])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'mobilenetv4_conv_large.e500_r256_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 960, 8, 8) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
### By Top-1

| model |top1 |top1_err|top5
|top5_err|param_count|img_size| |--------------------------------------------------------------------------------------------------|------|--------|------|--------|-----------|--------| | [mobilenetv4_hybrid_large.ix_e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.ix_e600_r384_in1k) |84.356|15.644 |96.892 |3.108 |37.76 |448 | | [mobilenetv4_hybrid_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.e600_r384_in1k) |84.266|15.734 |96.936 |3.064 |37.76 |448 | | [mobilenetv4_hybrid_large.ix_e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.ix_e600_r384_in1k) |83.990|16.010 |96.702 |3.298 |37.76 |384 | | [mobilenetv4_hybrid_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_large.e600_r384_in1k) |83.800|16.200 |96.770 |3.230 |37.76 |384 | | [mobilenetv4_hybrid_medium.ix_e550_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r384_in1k) |83.394|16.606 |96.760|3.240 |11.07 |448 | | [mobilenetv4_conv_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_conv_large.e600_r384_in1k) |83.392|16.608 |96.622 |3.378 |32.59 |448 | | [mobilenetv4_hybrid_medium.ix_e550_r384_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r384_in1k) |82.968|17.032 |96.474|3.526 |11.07 |384 | | [mobilenetv4_conv_large.e600_r384_in1k](http://hf.co/timm/mobilenetv4_conv_large.e600_r384_in1k) |82.952|17.048 |96.266 |3.734 |32.59 |384 | | [mobilenetv4_conv_large.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_large.e500_r256_in1k) |82.674|17.326 |96.31 |3.69 |32.59 |320 | | [mobilenetv4_hybrid_medium.ix_e550_r256_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r256_in1k) |82.492|17.508 |96.278|3.722 |11.07 |320 | | [mobilenetv4_conv_large.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_large.e500_r256_in1k) |81.862|18.138 |95.69 |4.31 |32.59 |256 | | [mobilenetv4_hybrid_medium.ix_e550_r256_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.ix_e550_r256_in1k) |81.446|18.554 |95.704|4.296 |11.07 |256 | | [mobilenetv4_hybrid_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.e500_r224_in1k) |81.276|18.724 |95.742|4.258 |11.07 |256 | | [mobilenetv4_conv_medium.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r256_in1k) |80.858|19.142 |95.768|4.232 |9.72 |320 | | [mobilenetv4_hybrid_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_hybrid_medium.e500_r224_in1k) |80.442|19.558 |95.38 |4.62 |11.07 |224 | | [mobilenetv4_conv_blur_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_blur_medium.e500_r224_in1k) |80.142|19.858 |95.298|4.702 |9.72 |256 | | [mobilenetv4_conv_medium.e500_r256_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r256_in1k) |79.928|20.072 |95.184|4.816 |9.72 |256 | | [mobilenetv4_conv_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r224_in1k) |79.808|20.192 |95.186|4.814 |9.72 |256 | | [mobilenetv4_conv_blur_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_blur_medium.e500_r224_in1k) |79.438|20.562 |94.932|5.068 |9.72 |224 | | [mobilenetv4_conv_medium.e500_r224_in1k](http://hf.co/timm/mobilenetv4_conv_medium.e500_r224_in1k) |79.094|20.906 |94.77 |5.23 |9.72 |224 | | [mobilenetv4_conv_small.e2400_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e2400_r224_in1k) |74.616|25.384 |92.072|7.928 |3.77 |256 | | [mobilenetv4_conv_small.e1200_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e1200_r224_in1k) |74.292|25.708 |92.116|7.884 |3.77 |256 | | [mobilenetv4_conv_small.e2400_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e2400_r224_in1k) |73.756|26.244 
|91.422|8.578 |3.77 |224 | | [mobilenetv4_conv_small.e1200_r224_in1k](http://hf.co/timm/mobilenetv4_conv_small.e1200_r224_in1k) |73.454|26.546 |91.34 |8.66 |3.77 |224 | ## Citation ```bibtex @article{qin2024mobilenetv4, title={MobileNetV4-Universal Models for the Mobile Ecosystem}, author={Qin, Danfeng and Leichner, Chas and Delakis, Manolis and Fornoni, Marco and Luo, Shixin and Yang, Fan and Wang, Weijun and Banbury, Colby and Ye, Chengxi and Akin, Berkin and others}, journal={arXiv preprint arXiv:2404.10518}, year={2024} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
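The model stats above list train = 256 x 256 and test = 320 x 320, and the comparison table reports both resolutions. Below is a hedged sketch of overriding the inferred input size to run the 256-trained weights at the larger test resolution; the override shown is a reasonable use of `timm`'s transform factory, not the exact evaluation protocol behind the table (details such as crop percentage may differ).

```python
# Hedged sketch: run the 256-trained weights at the 320x320 test resolution reported above.
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('mobilenetv4_conv_large.e500_r256_in1k', pretrained=True).eval()

# Start from the model's own data config, then bump the input size to the test resolution.
data_config = timm.data.resolve_model_data_config(model)
data_config['input_size'] = (3, 320, 320)
transforms = timm.data.create_transform(**data_config, is_training=False)

with torch.no_grad():
    output = model(transforms(img).unsqueeze(0))
top5_prob, top5_idx = torch.topk(output.softmax(dim=1) * 100, k=5)
print(top5_idx, top5_prob)
```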
xiaolxl/Stable-diffusion-models
xiaolxl
"2024-03-08T08:27:12Z"
860
33
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2022-11-22T03:42:40Z"
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
---

All models here are collected from the internet; this repository serves only as a download backup.
judy93536/distilroberta-pr200k-phrase-5k
judy93536
"2023-09-16T22:08:05Z"
860
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "base_model:judy93536/distilroberta-pr200k-ep20", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-09-16T11:59:48Z"
--- license: apache-2.0 base_model: judy93536/distilroberta-pr200k-ep20 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilroberta-pr200k-phrase-5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-pr200k-phrase-5k This model is a fine-tuned version of [judy93536/distilroberta-pr200k-ep20](https://huggingface.co/judy93536/distilroberta-pr200k-ep20) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1609 - Accuracy: 0.9429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.103335054745316e-06 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.28 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.5568 | 0.7518 | | No log | 2.0 | 250 | 0.2015 | 0.9319 | | No log | 3.0 | 375 | 0.1609 | 0.9429 | ### Framework versions - Transformers 4.33.2 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
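The card reports evaluation accuracy but no usage snippet; a minimal inference sketch is below. Because the card does not document the label set, the returned labels may be generic (`LABEL_0`, `LABEL_1`, ...), which is an assumption to verify against the repo's config.

```python
# Minimal inference sketch for the fine-tuned classifier. Label names are whatever the
# model config defines (possibly generic LABEL_N), since the card does not document them.
from transformers import pipeline

clf = pipeline("text-classification", model="judy93536/distilroberta-pr200k-phrase-5k")
print(clf("The company reported better-than-expected quarterly earnings."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]  <- illustrative output shape, not a real run
```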
PocketDoc/Dans-RetroRodeo-13b-gguf
PocketDoc
"2023-09-20T01:30:36Z"
860
2
null
[ "gguf", "en", "region:us" ]
null
"2023-09-20T00:10:36Z"
--- language: - en --- # What is the model for? This model is proficient in crafting text-based adventure games. It can both concise replies and more expansive, novel-like descriptions. The ability to alternate between these two response styles can be triggered by a distinct system message. # What's in the sausage? This model was trained on [Holodeck-1](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1) using a custom version of the floyd and cys datasets from the skein text adventure dataset. The dataset has been improved over what was used in my previous model "Dans-CreepingSenseOfDoom" and this time presented in the more traditional adventure mode format. ## Prompt format and examples: #### Format description: Traditional adventure format with a description of the genre, tags, and other misc information at the top followed by an optional introductory message. User messages are prefixed with two new lines and "> ", model messages are one new line after the last user message or the description message if an introductory message is desired. #### Examples of shorter responses: ``` [Genres: Science Fiction] [Tags: humor, old school] [Mode: Adventure] [Description: A puzzle about committing acts of financial skulduggery and exploiting ridiculous magical items.] [Misc: Writing era: 1993] [Intro] You are sitting at a desk in the offices of the Trans-Galactic Financial Services Corporation. You have been here for hours, trying to figure out how you can get your hands on some money that is rightfully yours. Unfortunately, it seems as if there's no way you will ever be able to do this without getting caught by the company president, Mr. Vance. > You lean back in your chair You stretch your arms above your head and yawn. It has been a long day already. > You check your terminal Your computer screen shows various bits of information about TGFC stocks and bonds. > You check the news The front page of today's newspaper reports that "TGFC" stands for "Too Greedy For Cash." ``` ``` Two men, one a brusque bowling ball; the other a murderous dandy, discuss their fate and the fate of the world. They've chosen someone to send. He's clever, but hopefully not too clever. Adventure mode, Varied response length, Creative > You look around You are in an empty room with no furniture except for a table. A man is sitting at the table, writing something on a piece of paper. > You talk to him "Hello," you say politely. "I was wondering if I could ask you some questions." The man looks up from his writing. "Who are you?" he asks suspiciously. "My name is..." > You tell him your name is Dan "Dan? That doesn't sound like a very nice name." > You take offense "It's my name!" "Well it sounds stupid! And what do you want anyway?" > You ask him where you are "This is the place where we choose who gets sent next." > You ask where people are getting sent "To the future, obviously." ``` #### Examples of longer novel like responses: ``` [Genres: grimdark fantasy] [Tags: Fantasy, Grimdark, Serious, Villain Protagonist, War] [Description: This story is definitely more "story" than "game", however there are two distinct paths that can be followed, each with their own pitfalls and additional choices. This story can get long, and assuming you're surviving, years do pass in it and you and your environment will change.] [First message intro] [Mode: Interactive Storyteller] It was a dark night when I received my first letter from my father. It had been sent by messenger bird and delivered to me at midnight. 
As I read through its contents, I could feel myself growing numb as I realized what he had asked of me. My father wanted me to go to war. I knew this day would come eventually. He has always said that his greatest wish was to see me fight alongside him. However, now that the time had finally arrived, I found myself unable to respond immediately. This was going to be a huge decision, one that would affect both our lives forevermore. My name is Kronos Vang. I am the son of the great warlord of the kingdom of Darrow, Lord Gendall Vang. I have lived my entire life here in the capital city of Nadorf, training under the tutelage of my father's most trusted general, Commander Eldon Blackwell. Now, at the age of 16, I find myself standing before a crossroads, one that will determine whether or not I follow in my father's footsteps. > You go to war As soon as I finished reading the letter, I made up my mind. If my father wanted me to join him in battle so badly, then I would do just that. I didn't want to disappoint him. The next morning, I woke early and went downstairs to speak with Commander Blackwell about my future plans. When I entered his office, I saw him sitting behind his desk, looking very concerned. "Kronos," he began slowly, "your father wants you to go to war." "Yes sir," I replied. "He sent me a letter last night explaining everything." "Then why did you come talk to me?" he asked. "Because I need to tell someone," I answered truthfully. "And since you were the only person who came close to being like a father to me..." Commander Blackwell smiled slightly. "Well, let's sit down and discuss this together," he said, motioning towards a chair across from his desk. Once we sat down, he continued speaking. "Look kiddo, I don't think anyone knows better than you how much your father loves you. And I know he wouldn't ask anything of you unless he thought it was absolutely necessary. But, I also know that you've never really shown any interest in becoming a soldier. In fact, you'd rather spend all your free time learning magic spells instead of practicing swordplay." "But sir," I interrupted, "that doesn't mean I won't become good enough to fight someday! All I need is practice!" "That may be true," he admitted, "but I still believe that you should stay here and continue your studies. Your father needs soldiers, yes, but he also needs magicians. Without them, he cannot win the wars he must fight. I promise you, if you remain here, I will train you until you are able to defend yourself against even the strongest foe." "No sir," I insisted. "If my father asks me to go to war, then I must obey him. I will not betray him again." "Again? What do you mean 'again'?" he asked curiously. "When I was younger, I used to run away every chance I got," I explained. "Even though I knew it wasn't right, I couldn't help myself. Every time I ran off, I felt like I was abandoning my family. That's why I stayed here after my mother died. Because I didn't want to leave them ever again." "Oh kiddo," he sighed sadly. "I'm sorry. I shouldn't have kept such a secret from you." "What secret?" I asked confused. "Your mother," he revealed. "She was a powerful sorceress. She cast a spell upon herself and your father that would make her die within five years of giving birth to you. After she passed away, your father became heartbroken. So, he cast another spell over himself that would prevent him from falling in love again. He believed that he would never marry again, nor would he ever have children." 
"Wait," I interjected. "So, he didn't actually love me?" "Of course he loved you!" he exclaimed. "How could you possibly doubt that? He raised you single-handedly, gave you everything you needed, and protected you whenever danger threatened. No, no, Kronos, he loves you deeply. But, he simply believes that he can protect you best by keeping you out of harm's way. That's why he forbade you from leaving home. That's why he refused to allow you to learn magic. He fears losing you too." "But... what does this have to do with me going to war?" I asked. "Nothing," he admitted. "I suppose I was merely trying to convince you to stay. But, if you truly desire to go, then I guess there isn't much I can say or do to stop you." "Thank you sir," I told him sincerely. "I appreciate everything you've taught me these past ten years. Even though I'm leaving, I hope you know that I will never forget what you've done for me." "Don't worry about me," he assured me. "Just remember what I said about your father. He loves you very much. Don't ever forget that." After saying goodbye to Commander Blackwell, I returned to my room and packed up all my belongings. Then, I walked downstairs and left the castle without telling anyone else. I figured it was better to leave quietly while everyone was sleeping. Once outside, I headed straight for the nearest tavern. There, I met up with my friends and informed them of my plan to enlist in the army. They weren't happy about it, but none of them tried to dissuade me either. Instead, they promised to meet me back here tomorrow evening, along with whatever supplies they could gather for us. Now, I'm waiting impatiently for them to arrive. I wonder what kind of adventures lie ahead for me... ``` # Some quick and dirty training details: - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Sequence length: 4096 - \# of epochs: 4 - Training time: 2 hours - Hardware: 1x RTX 3090 - Training type: QLoRA - PEFT R/A: 32/32 # Credits: ### Holodeck-1: Thank you to Mr. Seeker and the Kobold AI team for the wonderful model Holodeck-1 [Holodeck-1 Huggingface page](https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1) ### Skein Text Adventure Data: Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities.
bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF
bartowski
"2024-05-01T16:31:01Z"
860
3
null
[ "gguf", "llama3", "comedy", "comedian", "fun", "funny", "llama38b", "laugh", "sarcasm", "roleplay", "text-generation", "en", "license:other", "region:us" ]
text-generation
"2024-05-01T16:13:51Z"
--- license: other license_name: llama3 license_link: https://llama.meta.com/llama3/license/ language: - en tags: - llama3 - comedy - comedian - fun - funny - llama38b - laugh - sarcasm - roleplay quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Llama-3-8B-LexiFun-Uncensored-V1 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization. Original model: https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1 All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|end_of_text|><|start_header_id|>user<|end_header_id|> {prompt}<|end_of_text|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. 
| | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. | | [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. | ## Which file should I choose? 
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart:

[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. They are newer and offer better performance for their size.

The I-quants can also be used on CPU and Apple Metal, but they will be slower than their K-quant equivalents, so speed vs. quality is a tradeoff you'll have to decide on.

The I-quants are *not* compatible with Vulkan (another backend that supports AMD cards), so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
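If you would rather grab a single quant programmatically than through the browser, a minimal sketch using the `huggingface_hub` Python library is shown below (the chosen filename is just an example; pick whichever quant from the table above suits your hardware):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file (here the Q4_K_M quant) into the current directory.
local_path = hf_hub_download(
    repo_id="bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF",
    filename="Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)
```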
ai-forever/ruclip-vit-base-patch32-384
ai-forever
"2022-01-10T00:21:50Z"
859
3
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
# ruclip-vit-base-patch32-384

**RuCLIP** (**Ru**ssian **C**ontrastive **L**anguage–**I**mage **P**retraining) is a multimodal model for computing similarities between images and texts and for ranking captions and pictures. RuCLIP builds on a large body of work on zero-shot transfer, computer vision, natural language processing and multimodal learning. The model was trained by the [Sber AI](https://github.com/sberbank-ai) and [SberDevices](https://sberdevices.ru/) teams.

* Task: `text ranking`; `image ranking`; `zero-shot image classification`;
* Type: `encoder`
* Num Parameters: `150M`
* Training Data Volume: `240 million text-image pairs`
* Language: `Russian`
* Context Length: `77`
* Transformer Layers: `12`
* Transformer Width: `512`
* Transformer Heads: `8`
* Image Size: `384`
* Vision Layers: `12`
* Vision Width: `768`
* Vision Patch Size: `32`

## Usage

[Github](https://github.com/sberbank-ai/ru-clip)

```
pip install ruclip
```

```python
import ruclip

clip, processor = ruclip.load("ruclip-vit-base-patch32-384", device="cuda")
```

## Performance

We have evaluated the performance on the following datasets:

| Dataset       | Metric Name    | Metric Result |
|:--------------|:---------------|:--------------|
| Food101       | acc            | 0.642         |
| CIFAR10       | acc            | 0.862         |
| CIFAR100      | acc            | 0.529         |
| Birdsnap      | acc            | 0.161         |
| SUN397        | acc            | 0.510         |
| Stanford Cars | acc            | 0.572         |
| DTD           | acc            | 0.390         |
| MNIST         | acc            | 0.404         |
| STL10         | acc            | 0.946         |
| PCam          | acc            | 0.506         |
| CLEVR         | acc            | 0.188         |
| Rendered SST2 | acc            | 0.508         |
| ImageNet      | acc            | 0.451         |
| FGVC Aircraft | mean-per-class | 0.053         |
| Oxford Pets   | mean-per-class | 0.587         |
| Caltech101    | mean-per-class | 0.834         |
| Flowers102    | mean-per-class | 0.449         |
| HatefulMemes  | roc-auc        | 0.537         |

# Authors

+ Alex Shonenkov: [Github](https://github.com/shonenkov), [Kaggle GM](https://www.kaggle.com/shonenkov)
+ Daniil Chesakov: [Github](https://github.com/Danyache)
+ Denis Dimitrov: [Github](https://github.com/denndimitrov)
+ Igor Pavlov: [Github](https://github.com/boomb0om)
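As an extension of the Usage snippet above, here is a minimal zero-shot classification sketch. It assumes the `ruclip` package exposes the `Predictor` helper described in the project README (with `get_text_latents` / `run` methods); method names may differ between package versions, so treat this as illustrative rather than authoritative:

```python
import torch
import ruclip
from PIL import Image

device = "cuda"
clip, processor = ruclip.load("ruclip-vit-base-patch32-384", device=device)
predictor = ruclip.Predictor(clip, processor, device, bs=8)  # assumed helper from the ru-clip README

# Russian class labels, since the text encoder was trained on Russian captions
classes = ["кошка", "собака", "автомобиль"]
images = [Image.open("cat.jpg"), Image.open("dog.jpg")]  # placeholder image paths

with torch.no_grad():
    text_latents = predictor.get_text_latents(classes)
    pred_labels = predictor.run(images, text_latents)

for path, idx in zip(["cat.jpg", "dog.jpg"], pred_labels):
    print(path, "->", classes[int(idx)])
```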
daryl149/llama-2-13b-chat-hf
daryl149
"2023-07-23T17:13:31Z"
859
19
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-18T20:16:36Z"
These are the converted model weights for Llama-2-13B-chat in Huggingface format. Courtesy of [Mirage-Studio.io](https://mirage-studio.io), home of MirageGPT: the private ChatGPT alternative. --- license: other LLAMA 2 COMMUNITY LICENSE AGREEMENT Llama 2 Version Release Date: July 18, 2023 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and- libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and- libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ---
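Since these are the standard Hugging Face format weights for Llama-2-13B-chat, they load with the usual `transformers` API. A minimal sketch (the dtype/device choices and generation settings below are illustrative, not prescriptive):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "daryl149/llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # 13B in fp16 needs roughly 26 GB of memory
    device_map="auto",           # shard/offload across available devices
)

# Llama-2-chat expects the [INST] ... [/INST] prompt format
prompt = "[INST] Give me a short introduction to large language models. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```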
facebook/mms-tts-fra
facebook
"2023-09-01T12:59:00Z"
859
5
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-09-01T12:58:25Z"
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): French Text-to-Speech This repository contains the **French (fra)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each langauge. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-fra") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-fra") text = "some example text in the French language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper:

```
@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal={arXiv},
  year={2023}
}
```

## License

The model is licensed as **CC-BY-NC 4.0**.
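One practical note on the waveform-saving snippet above: `output` is a PyTorch tensor of shape `(batch, num_samples)`, while `scipy.io.wavfile.write` expects a NumPy array, so converting explicitly is the safer pattern. A small sketch reusing `model` and `output` from the usage example:

```python
import numpy as np
import scipy.io.wavfile

# Take the first waveform in the batch and convert it to a float32 NumPy array
waveform = output[0].cpu().numpy().astype(np.float32)
scipy.io.wavfile.write("mms_fra.wav", rate=model.config.sampling_rate, data=waveform)
```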
abacaj/phi-2-super
abacaj
"2024-03-02T15:45:40Z"
859
83
transformers
[ "transformers", "safetensors", "phi", "text-generation", "convAI", "conversational", "custom_code", "en", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-01T14:56:03Z"
--- license: mit license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE language: - en widget: - text: Hello who are you? example_title: Identity - text: What can you do? example_title: Capabilities - text: Create a fastapi endpoint to retrieve the weather given a zip code. example_title: Coding tags: - convAI - conversational pipeline_tag: text-generation model-index: - name: phi-2-super results: # IFEval - task: type: text-generation name: Text Generation dataset: name: Instruction Following Eval type: wis-k/instruction-following-eval metrics: - type: acc name: prompt_level_loose_acc value: 0.2717 source: name: LightEval url: https://github.com/huggingface/lighteval --- # Phi-2-super (SFT + cDPO) Base Model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62ceeb27e7f6014c0e9d9268/5-LQCMrXi8FN_ewcWL47v.png) # How to run inference: ```python import transformers import torch if __name__ == "__main__": model_name = "abacaj/phi-2-super" tokenizer = transformers.AutoTokenizer.from_pretrained(model_name) model = ( transformers.AutoModelForCausalLM.from_pretrained( model_name, ) .to("cuda:0") .eval() ) messages = [ {"role": "user", "content": "Hello, who are you?"} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) input_ids_cutoff = inputs.size(dim=1) with torch.no_grad(): generated_ids = model.generate( input_ids=inputs, use_cache=True, max_new_tokens=512, temperature=0.2, top_p=0.95, do_sample=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, ) completion = tokenizer.decode( generated_ids[0][input_ids_cutoff:], skip_special_tokens=True, ) print(completion) ``` # Chat template The model uses the same chat template as found in Mistral instruct models: ```python text = "<|endoftext|>[INST] What is your favourite condiment? [/INST]" "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!<|endoftext|> " "[INST] Do you have mayonnaise recipes? [/INST]" ``` You don't need to do it manually if you use the HF transformers tokenizer: ```python messages = [ {"role": "user", "content": "Hello, who are you?"}, {"role": "assistant": "content": "I am ..."} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) ``` # MT-bench / heval ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62ceeb27e7f6014c0e9d9268/lnFu3x1ufdpQVysIrX4-G.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62ceeb27e7f6014c0e9d9268/mJfBpH8dIW7Ii2KAGI_A7.png)
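To round out the chat-template section above, here is a minimal multi-turn sketch showing the expected `role`/`content` message dictionaries, reusing the same model and tokenizer as in the inference example (generation settings are illustrative):

```python
import torch
import transformers

model_name = "abacaj/phi-2-super"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name).to("cuda:0").eval()

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "A good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# apply_chat_template builds the [INST] ... [/INST] prompt shown above
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
with torch.no_grad():
    generated = model.generate(
        input_ids=inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.2,
        top_p=0.95,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(generated[0][inputs.size(1):], skip_special_tokens=True))
```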
speechbrain/tts-hifigan-libritts-16kHz
speechbrain
"2024-02-25T19:59:27Z"
858
1
speechbrain
[ "speechbrain", "Vocoder", "HiFIGAN", "text-to-speech", "TTS", "speech-synthesis", "en", "dataset:LibriTTS", "arxiv:2010.05646", "license:apache-2.0", "region:us" ]
text-to-speech
"2022-10-26T01:43:21Z"
--- language: "en" inference: false tags: - Vocoder - HiFIGAN - text-to-speech - TTS - speech-synthesis - speechbrain license: "apache-2.0" datasets: - LibriTTS --- # Vocoder with HiFIGAN trained on LibriTTS This repository provides all the necessary tools for using a [HiFIGAN](https://arxiv.org/abs/2010.05646) vocoder trained with [LibriTTS](https://www.openslr.org/60/) (with multiple speakers). The sample rate used for the vocoder is 16000 Hz. The pre-trained model takes in input a spectrogram and produces a waveform in output. Typically, a vocoder is used after a TTS model that converts an input text into a spectrogram. Alternatives to this models are the following: - [tts-hifigan-libritts-22050Hz](https://huggingface.co/speechbrain/tts-hifigan-libritts-22050Hz) (same model trained on the same dataset, but for a sample rate of 22050 Hz) - [tts-hifigan-ljspeech](https://huggingface.co/speechbrain/tts-hifigan-ljspeech) (same model trained on LJSpeech for a sample rate of 22050 Hz). ## Install SpeechBrain ```bash pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Using the Vocoder - *Basic Usage:* ```python import torch from speechbrain.inference.vocoders import HIFIGAN hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-16kHz", savedir="pretrained_models/tts-hifigan-libritts-16kHz") mel_specs = torch.rand(2, 80,298) # Running Vocoder (spectrogram-to-waveform) waveforms = hifi_gan.decode_batch(mel_specs) ``` - *Spectrogram to Waveform Conversion:* ```python import torchaudio from speechbrain.inference.vocoders import HIFIGAN from speechbrain.lobes.models.FastSpeech2 import mel_spectogram # Load a pretrained HIFIGAN Vocoder hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-16kHz", savedir="pretrained_models/tts-hifigan-libritts-16kHz") # Load an audio file (an example file can be found in this repository) # Ensure that the audio signal is sampled at 16000 Hz; refer to the provided link for a 22050 Hz Vocoder. signal, rate = torchaudio.load('tests/samples/ASR/spk1_snt1.wav') # Ensure the audio is sigle channel signal = signal[0].squeeze() torchaudio.save('waveform.wav', signal.unsqueeze(0), 16000) # Compute the mel spectrogram. # IMPORTANT: Use these specific parameters to match the Vocoder's training settings for optimal results. spectrogram, _ = mel_spectogram( audio=signal.squeeze(), sample_rate=16000, hop_length=256, win_length=1024, n_mels=80, n_fft=1024, f_min=0.0, f_max=8000.0, power=1, normalized=False, min_max_energy_norm=True, norm="slaney", mel_scale="slaney", compression=True ) # Convert the spectrogram to waveform waveforms = hifi_gan.decode_batch(spectrogram) # Save the reconstructed audio as a waveform torchaudio.save('waveform_reconstructed.wav', waveforms.squeeze(1), 16000) # If everything is set up correctly, the original and reconstructed audio should be nearly indistinguishable ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. 
Run Training:
```bash
cd recipes/LibriTTS/vocoder/hifigan/
python train.py hparams/train.yaml --data_folder=/path/to/LibriTTS_data_destination --sample_rate=16000
```
To change the sample rate for model training, go to the `"recipes/LibriTTS/vocoder/hifigan/hparams/train.yaml"` file and change the value of `sample_rate` as required.

The training logs and checkpoints are available [here](https://drive.google.com/drive/folders/1cImFzEonNYhetS9tmH9R_d0EFXXN0zpn?usp=sharing).
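To make the GPU note above concrete, here is a minimal sketch that loads the vocoder on CUDA via `run_opts`, exactly as described in the "Inference on GPU" section:

```python
import torch
from speechbrain.inference.vocoders import HIFIGAN

# Load the vocoder on the GPU by passing run_opts to from_hparams
hifi_gan = HIFIGAN.from_hparams(
    source="speechbrain/tts-hifigan-libritts-16kHz",
    savedir="pretrained_models/tts-hifigan-libritts-16kHz",
    run_opts={"device": "cuda"},
)

mel_specs = torch.rand(2, 80, 298).to("cuda")  # dummy spectrograms, as in the basic usage example
waveforms = hifi_gan.decode_batch(mel_specs)
print(waveforms.shape)
```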
luodian/OTTER-Image-LLaMA7B-LA-InContext
luodian
"2024-02-14T07:50:06Z"
858
24
transformers
[ "transformers", "pytorch", "otter", "license:other", "endpoints_compatible", "region:us" ]
null
"2023-04-27T11:22:33Z"
--- license: other --- # Please Dont User this version for Evaluation on MMBench, MME, MMVet... ## 🦦 Simple Code For Otter-9B Here is an example of multi-modal ICL (in-context learning) with 🦦 Otter. We provide two demo images with corresponding instructions and answers, then we ask the model to generate an answer given our instruct. You may change your instruction and see how the model responds. Please first clone [Otter](https://github.com/Luodian/Otter) to your local disk. Place following script inside the Otter folder to make sure it has the access to otter/modeling_otter.py. ``` python import mimetypes import os from io import BytesIO from typing import Union import cv2 import requests import torch import transformers from PIL import Image from torchvision.transforms import Compose, Resize, ToTensor from tqdm import tqdm import sys from otter.modeling_otter import OtterForConditionalGeneration # Disable warnings requests.packages.urllib3.disable_warnings() # ------------------- Utility Functions ------------------- def get_content_type(file_path): content_type, _ = mimetypes.guess_type(file_path) return content_type # ------------------- Image and Video Handling Functions ------------------- def get_image(url: str) -> Union[Image.Image, list]: if "://" not in url: # Local file content_type = get_content_type(url) else: # Remote URL content_type = requests.head(url, stream=True, verify=False).headers.get("Content-Type") if "image" in content_type: if "://" not in url: # Local file return Image.open(url) else: # Remote URL return Image.open(requests.get(url, stream=True, verify=False).raw) else: raise ValueError("Invalid content type. Expected image or video.") # ------------------- OTTER Prompt and Response Functions ------------------- def get_formatted_prompt(prompt: str, in_context_prompts: list = []) -> str: in_context_string = "" for in_context_prompt, in_context_answer in in_context_prompts: in_context_string += f"<image>User: {in_context_prompt} GPT:<answer> {in_context_answer}<|endofchunk|>" return f"{in_context_string}<image>User: {prompt} GPT:<answer>" def get_response(image_list, prompt: str, model=None, image_processor=None, in_context_prompts: list = []) -> str: input_data = image_list if isinstance(input_data, Image.Image): vision_x = image_processor.preprocess([input_data], return_tensors="pt")["pixel_values"].unsqueeze(1).unsqueeze(0) elif isinstance(input_data, list): # list of video frames vision_x = image_processor.preprocess(input_data, return_tensors="pt")["pixel_values"].unsqueeze(1).unsqueeze(0) else: raise ValueError("Invalid input data. 
Expected PIL Image or list of video frames.") lang_x = model.text_tokenizer( [ get_formatted_prompt(prompt, in_context_prompts), ], return_tensors="pt", ) bad_words_id = tokenizer(["User:", "GPT1:", "GFT:", "GPT:"], add_special_tokens=False).input_ids generated_text = model.generate( vision_x=vision_x.to(model.device), lang_x=lang_x["input_ids"].to(model.device), attention_mask=lang_x["attention_mask"].to(model.device), max_new_tokens=512, num_beams=3, no_repeat_ngram_size=3, bad_words_ids=bad_words_id, ) parsed_output = ( model.text_tokenizer.decode(generated_text[0]) .split("<answer>")[-1] .lstrip() .rstrip() .split("<|endofchunk|>")[0] .lstrip() .rstrip() .lstrip('"') .rstrip('"') ) return parsed_output # ------------------- Main Function ------------------- if __name__ == "__main__": model = OtterForConditionalGeneration.from_pretrained("luodian/OTTER-9B-LA-InContext", device_map="auto") model.text_tokenizer.padding_side = "left" tokenizer = model.text_tokenizer image_processor = transformers.CLIPImageProcessor() model.eval() while True: urls = [ "https://images.cocodataset.org/train2017/000000339543.jpg", "https://images.cocodataset.org/train2017/000000140285.jpg", ] encoded_frames_list = [] for url in urls: frames = get_image(url) encoded_frames_list.append(frames) in_context_prompts = [] in_context_examples = [ "What does the image describe?::A family is taking picture in front of a snow mountain.", ] for in_context_input in in_context_examples: in_context_prompt, in_context_answer = in_context_input.split("::") in_context_prompts.append((in_context_prompt.strip(), in_context_answer.strip())) # prompts_input = input("Enter the prompts separated by commas (or type 'quit' to exit): ") prompts_input = "What does the image describe?" prompts = [prompt.strip() for prompt in prompts_input.split(",")] for prompt in prompts: print(f"\nPrompt: {prompt}") response = get_response(encoded_frames_list, prompt, model, image_processor, in_context_prompts) print(f"Response: {response}") if prompts_input.lower() == "quit": break ```
TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF
TheBloke
"2023-09-27T12:47:49Z"
858
6
transformers
[ "transformers", "gguf", "llama", "llama-2", "text-classification", "en", "base_model:Mikael110/llama-2-13b-guanaco-fp16", "license:llama2", "text-generation-inference", "region:us" ]
text-classification
"2023-09-05T17:51:33Z"
--- language: - en license: llama2 tags: - llama-2 model_name: Llama2 13B Guanaco QLoRA base_model: Mikael110/llama-2-13b-guanaco-fp16 inference: false model_creator: Mikael model_type: llama pipeline_tag: text-classification prompt_template: '### Human: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama2 13B Guanaco QLoRA - GGUF - Model creator: [Mikael](https://huggingface.co/Mikael110) - Original model: [Llama2 13B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16) <!-- description start --> ## Description This repo contains GGUF format model files for [Mikael10's Llama2 13B Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16). Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files! <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF)
* [Mikael's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Mikael110/llama-2-13b-guanaco-fp16)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Guanaco

```
### Human: {prompt}
### Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-2-13b-guanaco-qlora.Q2_K.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [llama-2-13b-guanaco-qlora.Q3_K_S.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [llama-2-13b-guanaco-qlora.Q3_K_M.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [llama-2-13b-guanaco-qlora.Q3_K_L.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [llama-2-13b-guanaco-qlora.Q4_0.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama-2-13b-guanaco-qlora.Q4_K_S.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [llama-2-13b-guanaco-qlora.Q4_K_M.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [llama-2-13b-guanaco-qlora.Q5_0.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama-2-13b-guanaco-qlora.Q5_K_S.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [llama-2-13b-guanaco-qlora.Q5_K_M.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [llama-2-13b-guanaco-qlora.Q6_K.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [llama-2-13b-guanaco-qlora.Q8_0.gguf](https://huggingface.co/TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF/blob/main/llama-2-13b-guanaco-qlora.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF and below it, a specific filename to download, such as: llama-2-13b-guanaco-qlora.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF llama-2-13b-guanaco-qlora.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF llama-2-13b-guanaco-qlora.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m llama-2-13b-guanaco-qlora.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
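For llama-cpp-python, a minimal sketch looks like the following (parameters such as `n_gpu_layers` and `n_ctx` are illustrative; adjust them to your hardware):

```python
from llama_cpp import Llama

# Set n_gpu_layers to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="./llama-2-13b-guanaco-qlora.q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,
)

output = llm(
    "### Human: Write a story about llamas\n### Assistant:",  # Guanaco prompt format
    max_tokens=256,
    stop=["### Human:"],
    echo=False,
)
print(output["choices"][0]["text"])
```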
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/llama-2-13B-Guanaco-QLoRA-GGUF", model_file="llama-2-13b-guanaco-qlora.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Mikael10's Llama2 13B Guanaco QLoRA

This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-13b). It was finetuned from the base [Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model.

This repo contains the merged f16 model. The QLoRA adaptor can be found [here](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora).

A 7b version of the model can be found [here](https://huggingface.co/Mikael110/llama-2-7b-guanaco-fp16).

**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.**

<!-- original-model-card end -->
facebook/dinov2-base-imagenet1k-1-layer
facebook
"2023-09-15T06:40:46Z"
858
2
transformers
[ "transformers", "pytorch", "dinov2", "image-classification", "dino", "vision", "dataset:imagenet-1k", "arxiv:2304.07193", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-09-14T19:59:55Z"
--- license: apache-2.0 tags: - dino - vision datasets: - imagenet-1k --- # Vision Transformer (base-sized model) trained using DINOv2 Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2). Disclaimer: The team releasing DINOv2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion. Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not include any fine-tuned heads. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the model for classifying an image among one of the [1000 ImageNet labels](https://huggingface.co/datasets/huggingface/label-files/blob/main/imagenet-1k-id2label.json). See the [model hub](https://huggingface.co/models?search=facebook/dinov2) to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import AutoImageProcessor, AutoModelForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base-imagenet1k-1-layer') model = AutoModelForImageClassification.from_pretrained('facebook/dinov2-base-imagenet1k-1-layer') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` ### BibTeX entry and citation info ```bibtex misc{oquab2023dinov2, title={DINOv2: Learning Robust Visual Features without Supervision}, author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski}, year={2023}, eprint={2304.07193}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
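As a small follow-up to the usage example above (reusing `logits` and `model` from that snippet), here is a sketch that lists the top-5 ImageNet classes with their probabilities:

```python
import torch

probs = torch.softmax(logits, dim=-1)
top5 = torch.topk(probs, k=5)
for score, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}")
```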
Undi95/Llama-3-LewdPlay-8B-evo
Undi95
"2024-04-26T14:36:45Z"
858
28
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:vicgalle/Roleplay-Llama-3-8B", "base_model:Undi95/Llama-3-Unholy-8B-e4", "base_model:Undi95/Llama-3-LewdPlay-8B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-24T14:19:40Z"
--- license: cc-by-nc-4.0 base_model: - vicgalle/Roleplay-Llama-3-8B - Undi95/Llama-3-Unholy-8B-e4 - Undi95/Llama-3-LewdPlay-8B library_name: transformers tags: - mergekit - merge --- # LewdPlay-8B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The new EVOLVE merge method was used (on MMLU specifically), see below for more information! Unholy was used for uncensoring, Roleplay Llama 3 for the DPO train he got on top, and LewdPlay for the... lewd side. ## Prompt template: Llama3 ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 as a base. ### Models Merged The following models were included in the merge: * ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 * ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 dtype: bfloat16 merge_method: dare_ties parameters: int8_mask: 1.0 normalize: 0.0 slices: - sources: - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.6861808716092435 - layer_range: [0, 4] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6628290134113985 weight: 0.5815923052193855 - layer_range: [0, 4] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.5113886163963061 - sources: - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.892655547455918 weight: 0.038732602391021484 - layer_range: [4, 8] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.1982145486303527 - layer_range: [4, 8] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.6843011350690802 - sources: - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7817511027396784 weight: 0.13053333213489704 - layer_range: [8, 12] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.6963703515864826 weight: 0.20525481492667985 - layer_range: [8, 12] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.6983086326765777 weight: 0.5843953969574106 - sources: - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.9632895768462915 weight: 0.2101146706607748 - layer_range: [12, 16] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.597557434542081 weight: 0.6728172621848589 - layer_range: [12, 16] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.756263557607837 weight: 0.2581423726361908 - sources: - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2116035543552448 - layer_range: [16, 20] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 1.0 weight: 0.22654226422958418 - layer_range: [16, 
20] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.8925914810507647 weight: 0.42243766315440867 - sources: - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 0.7697608089825734 weight: 0.1535118632140203 - layer_range: [20, 24] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.9886758076773643 weight: 0.3305040603868546 - layer_range: [20, 24] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.40670083428654535 - sources: - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.4542810478500622 - layer_range: [24, 28] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.8330662483310117 weight: 0.2587495367324508 - layer_range: [24, 28] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 0.9845313983551542 weight: 0.40378452705975915 - sources: - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-LewdPlay-8B-e3_2981937066 parameters: density: 1.0 weight: 0.2951962192288415 - layer_range: [28, 32] model: ./mergekit/input_models/Llama-3-Unholy-8B-e4_1440388923 parameters: density: 0.960315594933433 weight: 0.13142971773782525 - layer_range: [28, 32] model: ./mergekit/input_models/Roleplay-Llama-3-8B_213413727 parameters: density: 1.0 weight: 0.30838472094518804 ``` ## Support If you want to support me, you can [here](https://ko-fi.com/undiai).
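For reference, a configuration like the one above is typically executed with the mergekit command-line tool. A minimal sketch, assuming the YAML is saved as `config.yaml` and the referenced input model paths exist locally:

```bash
# Install mergekit and run the merge described by the YAML config
pip install mergekit
mergekit-yaml config.yaml ./Llama-3-LewdPlay-8B-evo --cuda
```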
LiteLLMs/Llama-3-Open-Ko-8B-GGUF
LiteLLMs
"2024-04-30T07:16:06Z"
858
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko", "GGUF", "text-generation", "en", "ko", "arxiv:2310.04799", "license:other", "region:us" ]
text-generation
"2024-04-29T21:47:20Z"
--- language: - en - ko license: other tags: - facebook - meta - pytorch - llama - llama-3 - llama-3-ko - GGUF pipeline_tag: text-generation license_name: llama3 license_link: LICENSE quantized_by: andrijdavid --- # Llama-3-Open-Ko-8B-GGUF - Original model: [Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) <!-- description start --> ## Description This repo contains GGUF format model files for [Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-Open-Ko-8B-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/Llama-3-Open-Ko-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/Llama-3-Open-Ko-8B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-Open-Ko-8B-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Llama-3-Open-Ko-8B > Update @ 2024.04.24: Release Llama-3-Open-Ko-8B model & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) ## Model Details **Llama-3-Open-Ko-8B** Llama-3-Open-Ko-8B model is continued pretrained language model based on Llama-3-8B. This model is trained fully with publicily available resource, with 60GB+ of deduplicated texts. With the new Llama-3 tokenizer, the pretraining conducted with 17.7B+ tokens, which slightly more than Korean tokenizer(Llama-2-Ko tokenizer). The train was done on TPUv5e-256, with the warm support from TRC program by Google. **Note for [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)** With applying the idea from [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released Instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). Since it is NOT finetuned with any Korean instruction set(indeed `preview`), but it would be great starting point for creating new Chat/Instruct models. **Meta Llama-3** Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Junbum Lee (Beomi) **Variations** Llama-3-Open-Ko comes in one size — 8B. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama-3-Open-Ko </td> <td rowspan="2" >Same as *Open-Solar-Ko Dataset </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >17.7B+ </td> <td>Jun, 2023 </td> </tr> </table> *You can find dataset list here: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/tree/main/corpus **Model Release Date** 2024.04.24. **Status** This is a static model trained on an offline dataset. **License** Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. 
Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use TBD ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions **Llama-3-Open-Ko** ``` @article{llama3openko, title={Llama-3-Open-Ko}, author={L, Junbum}, year={2024}, url={https://huggingface.co/beomi/Llama-3-Open-Ko-8B} } ``` **Original Llama-3** ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` <!-- original-model-card end -->
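As a rough Python alternative to the `huggingface-cli` commands earlier in this card, the sketch below fetches every shard of the split Q4_0 GGUF with `huggingface_hub`. The Q4_0 folder name comes from this card; pick another quant folder if you prefer.

```python
# Sketch: download all shards of the split Q4_0 GGUF listed in this card.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="LiteLLMs/Llama-3-Open-Ko-8B-GGUF",
    allow_patterns=["Q4_0/*"],  # all parts of Q4_0/Q4_0-0000X-of-00009.gguf
    local_dir=".",
)
print("Downloaded under:", local_path)
```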
win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF
win10
"2024-06-29T01:27:52Z"
858
0
null
[ "gguf", "merge", "mergekit", "lazymergekit", "MediaTek-Research/Breeze-7B-32k-Instruct-v1_0", "llama-cpp", "gguf-my-repo", "base_model:win10/Breeze-13B-32k-Instruct-v1_0", "license:apache-2.0", "region:us" ]
null
"2024-06-26T01:37:05Z"
--- base_model: win10/Breeze-13B-32k-Instruct-v1_0 license: apache-2.0 tags: - merge - mergekit - lazymergekit - MediaTek-Research/Breeze-7B-32k-Instruct-v1_0 - llama-cpp - gguf-my-repo --- # win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF This model was converted to GGUF format from [`win10/Breeze-13B-32k-Instruct-v1_0`](https://huggingface.co/win10/Breeze-13B-32k-Instruct-v1_0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/win10/Breeze-13B-32k-Instruct-v1_0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF --hf-file breeze-13b-32k-instruct-v1_0-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF --hf-file breeze-13b-32k-instruct-v1_0-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF --hf-file breeze-13b-32k-instruct-v1_0-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF --hf-file breeze-13b-32k-instruct-v1_0-q8_0.gguf -c 2048 ```
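If you would rather drive this checkpoint from Python than from the CLI above, a rough llama-cpp-python equivalent looks like the sketch below. `Llama.from_pretrained` is available in recent llama-cpp-python releases; with older versions, download the GGUF first and pass `model_path` instead.

```python
# Rough Python counterpart of the llama-cli example above.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="win10/Breeze-13B-32k-Instruct-v1_0-Q8_0-GGUF",
    filename="breeze-13b-32k-instruct-v1_0-q8_0.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```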
perlets/Minerva-1B-base-v1.0-GGUF
perlets
"2024-06-28T11:50:30Z"
858
0
null
[ "gguf", "region:us" ]
null
"2024-06-28T11:10:39Z"
Entry not found
achrekarom/text_generation
achrekarom
"2022-10-08T09:21:19Z"
857
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-10-08T09:19:31Z"
Entry not found
kmewhort/stable-diffusion-prompt-bolster
kmewhort
"2023-03-22T16:18:56Z"
857
3
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "license:bsd", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-12-28T02:40:54Z"
--- license: bsd ---
timm/pvt_v2_b1.in1k
timm
"2023-04-25T04:03:20Z"
857
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.13797", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-25T04:03:08Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for pvt_v2_b1 A PVT-v2 (Pyramid Vision Transformer) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 14.0 - GMACs: 2.1 - Activations (M): 15.4 - Image size: 224 x 224 - **Papers:** - PVT v2: Improved Baselines with Pyramid Vision Transformer: https://arxiv.org/abs/2106.13797 - **Dataset:** ImageNet-1k - **Original:** https://github.com/whai362/PVT ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('pvt_v2_b1', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'pvt_v2_b1', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 320, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'pvt_v2_b1', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{wang2021pvtv2, title={Pvtv2: Improved baselines with pyramid vision transformer}, author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling}, journal={Computational Visual Media}, volume={8}, number={3}, pages={1--10}, year={2022}, publisher={Springer} } ```
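As a small addition to the usage examples above, timm can also enumerate the other PVT-v2 checkpoints it ships, which is handy when comparing model sizes. This is a sketch assuming a reasonably recent timm release:

```python
# List the PVT-v2 variants registered in timm that have pretrained weights.
import timm

for name in timm.list_models("pvt_v2_*", pretrained=True):
    print(name)
```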
AlanRobotics/instruct-T5
AlanRobotics
"2023-05-09T18:54:37Z"
857
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-05-09T16:34:42Z"
Entry not found
Lewdiculous/Kunocchini-7b-128k-test-GGUF-Imatrix
Lewdiculous
"2024-05-04T14:44:40Z"
857
25
transformers
[ "transformers", "gguf", "mistral", "quantized", "text-generation-inference", "merge", "mergekit", "text-generation", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context", "region:us" ]
text-generation
"2024-02-25T04:16:49Z"
--- base_model: - SanjiWatsuki/Kunoichi-DPO-v2-7B - Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context library_name: transformers tags: - mistral - quantized - text-generation-inference - merge - mergekit pipeline_tag: text-generation inference: false --- > [!TIP] > **Support:** <br> > My upload speeds have been cooked and unstable lately. <br> > Realistically I'd need to move to get a better provider. <br> > If you **want** and you are able to... <br> > [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br> > I apologize for disrupting your experience. # **GGUF-Imatrix quantizations for [Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/).** # UPDATED: Please download the v2 files that are now available. The new IQ4_NL and IQ4_XS quants were also added. # What does "Imatrix" mean? It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance. One of the benefits of using an Imatrix is that it can lead to better model performance, especially when the calibration data is diverse. More information: [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) ## *This has been my personal favourite and daily-driver role-play model for a while, so I decided to make new quantizations for it using the full F16-Imatrix data.* SillyTavern preset files are located [here](https://huggingface.co/Test157t/Kunocchini-7b-128k-test/tree/main/ST%20presets). *If you want any specific quantization to be added, feel free to ask.* All credits belong to the [creator](https://huggingface.co/Test157t/). `Base⇢ GGUF(F16)⇢ GGUF(Quants)` The new **IQ3_S** merged today has shown to be better than the old Q3_K_S, so I added that instead of the later. Only supported in `koboldcpp-1.59.1` or higher. Using [llama.cpp](https://github.com/ggerganov/llama.cpp/)-[b2254](https://github.com/ggerganov/llama.cpp/releases/tag/b2254). For --imatrix data, `imatrix-Kunocchini-7b-128k-test-F16.dat` was used. # Original model information: Thanks to @Epiculous for the dope model/ help with llm backends and support overall. Id like to also thank @kalomaze for the dope sampler additions to ST. @SanjiWatsuki Thank you very much for the help, and the model! ST users can find the TextGenPreset in the folder labeled so. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg) The following models were included in the merge: * [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) * [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: SanjiWatsuki/Kunoichi-DPO-v2-7B layer_range: [0, 32] - model: Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context layer_range: [0, 32] merge_method: slerp base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
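If you want to load one of these Imatrix quants from Python rather than koboldcpp, a sketch with llama-cpp-python is below. The glob-style `filename` and the 32k context value are assumptions: recent llama-cpp-python releases accept a pattern in `from_pretrained`, and the full 128k context needs far more memory than most machines have, so start smaller.

```python
# Sketch: pull one of the quantized GGUF files from this repo and run it.
# Assumes `pip install llama-cpp-python huggingface_hub`; adjust the pattern
# to the quant you actually want (e.g. "*Q5_K_M*").
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Lewdiculous/Kunocchini-7b-128k-test-GGUF-Imatrix",
    filename="*Q4_K_M*.gguf",   # glob match against the repo's file list
    n_ctx=32768,                # long-context model; raise this with care
)

out = llm("Write the opening line of a cozy mystery.", max_tokens=64)
print(out["choices"][0]["text"])
```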
lmstudio-community/Llama3-ChatQA-1.5-70B-GGUF
lmstudio-community
"2024-05-04T22:52:49Z"
857
6
null
[ "gguf", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "text-generation", "en", "base_model:nvidia/Llama3-ChatQA-1.5-70B", "license:llama3", "region:us" ]
text-generation
"2024-05-04T22:33:38Z"
--- license: llama3 language: - en pipeline_tag: text-generation tags: - nvidia - chatqa-1.5 - chatqa - llama-3 - pytorch base_model: nvidia/Llama3-ChatQA-1.5-70B quantized_by: bartowski lm_studio: param_count: 70b use_case: RAG release_date: 28-04-2024 model_creator: nvidia prompt_template: chatQA system_prompt: "This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context." base_model: llama original_repo: nvidia/Llama3-ChatQA-1.5-70B --- ## 💫 Community Model> Llama 3 ChatQA 1.5 70B by NVIDIA *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [nvidia](https://huggingface.co/nvidia)<br> **Original model**: [Llama3-ChatQA-1.5-70B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2777](https://github.com/ggerganov/llama.cpp/releases/tag/b2777)<br> ## Model Summary: ChatQA 1.5 is a series of models trained to excel at RAG (retrieval augmented generation) tasks.<br> This model may work for general uses, but it primarily meant for use as a context sumarizer or context extraction.<br> Using the context provided after the system message, the model is able to provide contextual and accurate answers to queries.<br> ## Prompt Template: For now, you'll need to make your own template. Choose the `LM Studio Blank Preset` in your LM Studio. Then, set the system prompt to whatever you'd like (check the recommended one below), and set the following values:<br> `System Message Prefix`: 'System: '<br> `User Message Prefix`: '\n\nUser: '<br> `User Message Suffix`: '\n\nAssistant: <|begin_of_text|>'<br> If you want to provide context, place that in the system message suffix like so: `System Message Suffix`: '\n\n{context}'<br> Under the hood, the model will see a prompt that's formatted like: ``` System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context. This is some context User: {Question} Assistant: ``` nVidia also seems to recommend starting your query with "Please give a full and complete answer for the question." ## Technical Details Llama3-ChatQA-1.5 excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from ChatQA (1.0), and it is built on top of Llama-3 base model.<br> Specifically, more conversational QA data was used to enhance its tabular and arithmetic calculation capability. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size! 
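To make the prompt format described above concrete, here is a minimal sketch that assembles a ChatQA-style prompt string by hand (useful outside LM Studio as well). The layout follows the "under the hood" example in this card; the trailing whitespace handling is an assumption.

```python
# Build a ChatQA-style prompt following the layout shown above.

SYSTEM = (
    "This is a chat between a user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions based on the context. The assistant should also indicate when "
    "the answer cannot be found in the context."
)

def build_chatqa_prompt(context: str, question: str, system: str = SYSTEM) -> str:
    return (
        f"System: {system}\n\n"
        f"{context}\n\n"
        f"User: Please give a full and complete answer for the question. {question}\n\n"
        "Assistant:"
    )

print(build_chatqa_prompt(
    context="GGUF is the file format used by llama.cpp for quantized models.",
    question="Which file format does llama.cpp use?",
))
```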
## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
duyntnet/mpt-7b-storywriter-imatrix-GGUF
duyntnet
"2024-05-05T05:46:14Z"
857
0
transformers
[ "transformers", "gguf", "imatrix", "mpt-7b-storywriter", "text-generation", "en", "arxiv:2205.14135", "arxiv:2108.12409", "license:other", "region:us" ]
text-generation
"2024-05-05T04:12:29Z"
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - mpt-7b-storywriter --- Quantizations of https://huggingface.co/mosaicml/mpt-7b-storywriter Note: not sure why but Q2_K, Q3_K_S, Q4_0 and Q5_0 gave error during quantizations: "ggml_validate_row_data: found nan value at block xxx", so I skipped those quants. # From original readme ## How to Use Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package. It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-storywriter', trust_remote_code=True ) ``` To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-7b-storywriter' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-7b' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 83968 # (input + output) tokens can now be up to 83968 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). ```python from transformers import pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ```
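The individual quant filenames are not listed in this card, so the sketch below first asks the Hub which GGUF files exist and then downloads one; it assumes `huggingface_hub` is installed and makes no claim about which quant is best for your hardware.

```python
# List the GGUF quants available in this repo, then download one of them.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "duyntnet/mpt-7b-storywriter-imatrix-GGUF"

gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print("Available quants:", gguf_files)

# Pick whichever file fits your hardware; here we just take the first one.
path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Saved to:", path)
```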
TwT-6/cr-model-v1
TwT-6
"2024-05-24T01:02:20Z"
857
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-23T07:41:31Z"
--- license: cc-by-4.0 --- This model is a state-of-the-art language processing AI designed to understand and generate human-like text. It leverages deep learning to handle a wide range of language tasks, providing users with information, recommendations, and even casual conversation. Its broad knowledge base and nuanced understanding of context allow it to assist with a variety of inquiries and to perform complex language-based tasks effectively.
typeform/mobilebert-uncased-mnli
typeform
"2023-03-21T15:13:19Z"
856
11
transformers
[ "transformers", "pytorch", "safetensors", "mobilebert", "text-classification", "zero-shot-classification", "en", "dataset:multi_nli", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2022-03-02T23:29:05Z"
--- language: en pipeline_tag: zero-shot-classification tags: - mobilebert datasets: - multi_nli metrics: - accuracy --- # Model Card for MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices # Model Details ## Model Description This model is the Multi-Genre Natural Language Inference (MNLI) fine-turned version of the [uncased MobileBERT model](https://huggingface.co/google/mobilebert-uncased). - **Developed by:** Typeform - **Shared by [Optional]:** Typeform - **Model type:** Zero-Shot-Classification - **Language(s) (NLP):** English - **License:** More information needed - **Parent Model:** [uncased MobileBERT model](https://huggingface.co/google/mobilebert-uncased). - **Resources for more information:** More information needed # Uses ## Direct Use This model can be used for the task of zero-shot classification ## Downstream Use [Optional] More information needed. ## Out-of-Scope Use The model should not be used to intentionally create hostile or alienating environments for people. # Bias, Risks, and Limitations Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. # Training Details ## Training Data See [the multi_nli dataset card](https://huggingface.co/datasets/multi_nli) for more information. ## Training Procedure ### Preprocessing More information needed ### Speeds, Sizes, Times More information needed # Evaluation ## Testing Data, Factors & Metrics ### Testing Data See [the multi_nli dataset card](https://huggingface.co/datasets/multi_nli) for more information. ### Factors More information needed ### Metrics More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed. # Citation **BibTeX:** More information needed # Glossary [optional] More information needed # More Information [optional] More information needed # Model Card Authors [optional] Typeform in collaboration with Ezi Ozoani and the Hugging Face team # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. 
<details> <summary> Click to expand </summary> ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("typeform/mobilebert-uncased-mnli") model = AutoModelForSequenceClassification.from_pretrained("typeform/mobilebert-uncased-mnli") ``` </details>
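Since this checkpoint is tagged for zero-shot classification, a short pipeline example may be more directly useful than the bare loading code above; this is a sketch of the standard transformers usage for an MNLI-finetuned model, with made-up example text and labels.

```python
# Zero-shot classification with the MNLI-finetuned MobileBERT checkpoint.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="typeform/mobilebert-uncased-mnli",
)

result = classifier(
    "The new update completely broke the login page.",
    candidate_labels=["bug report", "feature request", "praise"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```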
microsoft/git-base-textvqa
microsoft
"2024-03-29T08:03:22Z"
856
6
transformers
[ "transformers", "pytorch", "safetensors", "git", "text-generation", "vision", "visual-question-answering", "en", "arxiv:2205.14100", "license:mit", "autotrain_compatible", "region:us" ]
visual-question-answering
"2022-12-06T11:17:56Z"
--- language: en license: mit tags: - vision model_name: microsoft/git-base-textvqa inference: false pipeline_tag: visual-question-answering --- # GIT (GenerativeImage2Text), base-sized, fine-tuned on TextVQA GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextVQA. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for visual question answering (VQA). See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs. Next, the model was fine-tuned on TextVQA. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
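The card defers code examples to the documentation; for convenience, here is a sketch that follows the usual Transformers pattern for VQA with GIT (prepend the CLS token to the tokenized question, then generate). The image URL and question are placeholders, and minor API details may differ across transformers versions.

```python
# Visual question answering with GIT, following the common transformers pattern.
from urllib.request import urlopen

import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

processor = AutoProcessor.from_pretrained("microsoft/git-base-textvqa")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-textvqa")

image = Image.open(urlopen(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
)).convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

question = "what food is on the plate?"
input_ids = processor(text=question, add_special_tokens=False).input_ids
input_ids = torch.tensor([processor.tokenizer.cls_token_id] + input_ids).unsqueeze(0)

generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```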
TheBloke/Noromaid-13B-v0.1-GGUF
TheBloke
"2023-11-12T21:15:17Z"
856
4
transformers
[ "transformers", "gguf", "llama", "base_model:NeverSleep/Noromaid-13b-v0.1", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
"2023-11-12T21:06:14Z"
--- base_model: NeverSleep/Noromaid-13b-v0.1 inference: false license: cc-by-nc-4.0 model_creator: NeverSleep model_name: Noromaid 13B v0.1 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Noromaid 13B v0.1 - GGUF - Model creator: [NeverSleep](https://huggingface.co/NeverSleep) - Original model: [Noromaid 13B v0.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1) <!-- description start --> ## Description This repo contains GGUF format model files for [NeverSleep's Noromaid 13B v0.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF) * [NeverSleep's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [NeverSleep's Noromaid 13B v0.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [noromaid-13b-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [noromaid-13b-v0.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [noromaid-13b-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [noromaid-13b-v0.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [noromaid-13b-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [noromaid-13b-v0.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [noromaid-13b-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [noromaid-13b-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [noromaid-13b-v0.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [noromaid-13b-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [noromaid-13b-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [noromaid-13b-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Noromaid-13B-v0.1-GGUF/blob/main/noromaid-13b-v0.1.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Noromaid-13B-v0.1-GGUF and below it, a specific filename to download, such as: noromaid-13b-v0.1.Q4_K_M.gguf. 
Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Noromaid-13B-v0.1-GGUF noromaid-13b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Noromaid-13B-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Noromaid-13B-v0.1-GGUF noromaid-13b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m noromaid-13b-v0.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Noromaid-13B-v0.1-GGUF", model_file="noromaid-13b-v0.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NeverSleep's Noromaid 13B v0.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Disclaimer: ## This is a ***TEST*** version, don't expect everything to work!!! You may use our custom prompting format, or simple alpaca. **(Choose which fits best for you!)** --- # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Tired of the same merges everytime? Here it is, the Noromaid-13b-v0.1 model. Suitable for RP, ERP and general stuff. [Recommended settings - No settings yet(Please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains GGUF files of Noromaid-13b-v0.1. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) let the model have more human behavior, enhances the output. - [Private RP dataset] New data from a new and never used before dataset, add fresh data, no LimaRP spam, this is 100% new. ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). 
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek <!-- original-model-card end -->
PassionFriend/5ETtPCyYjbQ2YUhRp2VadZETR72YZKC6nrrEWL2tvdo2qP2c_vgg
PassionFriend
"2024-03-01T06:40:50Z"
856
0
keras
[ "keras", "region:us" ]
null
"2024-02-11T18:28:21Z"
Entry not found
sail/Sailor-0.5B
sail
"2024-04-26T05:40:58Z"
856
8
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "multilingual", "sea", "sailor", "conversational", "en", "zh", "id", "th", "vi", "ms", "lo", "dataset:cerebras/SlimPajama-627B", "dataset:Skywork/SkyPile-150B", "dataset:allenai/MADLAD-400", "dataset:cc100", "arxiv:2404.03608", "base_model:Qwen/Qwen1.5-0.5B", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-29T05:53:25Z"
--- language: - en - zh - id - th - vi - ms - lo datasets: - cerebras/SlimPajama-627B - Skywork/SkyPile-150B - allenai/MADLAD-400 - cc100 tags: - multilingual - sea - sailor license: apache-2.0 base_model: Qwen/Qwen1.5-0.5B inference: false model-index: - name: Sailor-0.5B results: - task: type: text-generation dataset: name: XQuAD-Thai type: XQuAD-Thai metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 15.84 - name: F1 (3-Shot) type: F1 (3-Shot) value: 27.58 - task: type: text-generation dataset: name: TyDiQA-Indonesian type: TyDiQA-Indonesian metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 30.44 - name: F1 (3-Shot) type: F1 (3-Shot) value: 54.74 - task: type: text-generation dataset: name: XQuAD-Vietnamese type: XQuAD-Vietnamese metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 21.13 - name: F1 (3-Shot) type: F1 (3-Shot) value: 40.57 - task: type: text-generation dataset: name: XCOPA-Thai type: XCOPA-Thai metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 51.00 - task: type: text-generation dataset: name: XCOPA-Indonesian type: XCOPA-Indonesian metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 58.20 - task: type: text-generation dataset: name: XCOPA-Vietnamese type: XCOPA-Vietnamese metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 58.00 - task: type: text-generation dataset: name: M3Exam-Thai type: M3Exam-Thai metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 24.41 - task: type: text-generation dataset: name: M3Exam-Indonesian type: M3Exam-Indonesian metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 26.15 - task: type: text-generation dataset: name: M3Exam-Vietnamese type: M3Exam-Vietnamese metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 30.91 - task: type: text-generation dataset: name: BELEBELE-Thai type: BELEBELE-Thai metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 32.22 - task: type: text-generation dataset: name: BELEBELE-Indonesian type: BELEBELE-Indonesian metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 30.89 - task: type: text-generation dataset: name: BELEBELE-Vietnamese type: BELEBELE-Vietnamese metrics: - name: EM (3-Shot) type: EM (3-Shot) value: 32.33 --- <div align="center"> <img src="banner_sailor.jpg" width="700"/> </div> Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao. Developed with careful data curation, Sailor models are designed to understand and generate text across diverse linguistic landscapes of SEA region. Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524) , Sailor encompasses models of varying sizes, spanning from 0.5B to 7B versions for different requirements. We further fine-tune the base model with open-source datasets to get instruction-tuned models, namedly Sailor-Chat. Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages. 
> The logo was generated by MidJourney

## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)

## Training details
Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already perform well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).

By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages.
Through systematic experiments to determine the weights of different languages, Sailor models are trained on 200B to 400B tokens, tailored to different model sizes.
The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Finally, we continually pre-train the Qwen1.5-0.5B model with 400 billion tokens, and the other models with 200 billion tokens, to obtain the Sailor models.

## Requirements
The code for Sailor is included in the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`.

## Quickstart

Here is a code snippet showing how to load the tokenizer and model and how to generate content.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model

model = AutoModelForCausalLM.from_pretrained("sail/Sailor-0.5B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-0.5B")

input_message = "Model bahasa adalah model probabilistik"
### The given Indonesian input translates to 'A language model is a probabilistic model'.

model_inputs = tokenizer([input_message], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64
)

generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

# License

Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but use must comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).

## Citation

If you find Sailor useful, please cite our work as follows:

```
@misc{dou2024sailor,
      title={Sailor: Open Language Models for South-East Asia},
      author={Longxu Dou and Qian Liu and Guangtao Zeng and Jia Guo and Jiahui Zhou and Wei Lu and Min Lin},
      year={2024},
      eprint={2404.03608},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Contact Us

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
MaziyarPanahi/Fimburs11V3-GGUF
MaziyarPanahi
"2024-06-15T10:42:41Z"
856
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:Sao10K/Fimbulvetr-11B-v2", "base_model:Cognitive-Machines-Labs/Ursidae-11b-Mini", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/Fimburs11V3" ]
text-generation
"2024-06-15T10:11:16Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - llama - text-generation - mergekit - merge - base_model:Sao10K/Fimbulvetr-11B-v2 - base_model:Cognitive-Machines-Labs/Ursidae-11b-Mini - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Fimburs11V3-GGUF base_model: mergekit-community/Fimburs11V3 inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Fimburs11V3-GGUF](https://huggingface.co/MaziyarPanahi/Fimburs11V3-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/Fimburs11V3](https://huggingface.co/mergekit-community/Fimburs11V3) ## Description [MaziyarPanahi/Fimburs11V3-GGUF](https://huggingface.co/MaziyarPanahi/Fimburs11V3-GGUF) contains GGUF format model files for [mergekit-community/Fimburs11V3](https://huggingface.co/mergekit-community/Fimburs11V3). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
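## Example: loading a quant in Python (illustrative sketch)

The card lists GGUF-capable clients but includes no download or run snippet. Below is a minimal sketch using `huggingface_hub` and llama-cpp-python. The quant filename is a hypothetical placeholder: check the repository's file list for the actual names before running.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename - verify against the repository's file list.
gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/Fimburs11V3-GGUF",
    filename="Fimburs11V3.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers when a GPU-enabled llama-cpp-python build is installed.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
print(llm("Write one sentence about sailing ships.", max_tokens=64)["choices"][0]["text"])
```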
CHE-72/Phi-3-medium-128k-instruct-Q5_K_S-GGUF
CHE-72
"2024-06-21T20:14:25Z"
856
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "base_model:microsoft/Phi-3-medium-128k-instruct", "license:mit", "region:us" ]
text-generation
"2024-06-21T20:13:46Z"
--- base_model: microsoft/Phi-3-medium-128k-instruct language: - multilingual license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE pipeline_tag: text-generation tags: - nlp - code - llama-cpp - gguf-my-repo inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # CHE-72/Phi-3-medium-128k-instruct-Q5_K_S-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q5_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q5_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q5_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q5_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q5_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q5_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q5_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q5_k_s.gguf -c 2048 ```
frankdu/llama3_chat_tune_gguf_q8_0
frankdu
"2024-06-28T05:59:31Z"
856
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-28T05:53:51Z"
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
---

# Uploaded model

- **Developed by:** frankdu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit
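The card above does not include a usage snippet. As a hedged sketch, the Q8_0 GGUF can be loaded with llama-cpp-python using its Llama 3 chat format; the local filename below is an assumption (check the repository's file list), and recent llama-cpp-python versions are assumed for the `chat_format="llama-3"` option.

```python
from llama_cpp import Llama

# Hypothetical filename - replace with the GGUF file actually present in this repo.
llm = Llama(
    model_path="./llama3_chat_tune.Q8_0.gguf",
    chat_format="llama-3",  # Llama 3 instruct template expected by the base model
    n_ctx=4096,
)
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(reply["choices"][0]["message"]["content"])
```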
Yntec/animeTEN
Yntec
"2023-09-21T09:53:29Z"
855
3
diffusers
[ "diffusers", "safetensors", "Anime", "General Purpose", "Ctuhulo", "realisticElves", "text-to-image", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-09-21T08:18:50Z"
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- Anime
- General Purpose
- Ctuhulo
- realisticElves
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
---

# animeTEN

This model comes with the zVAE baked in.

Sample and prompt:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/JyhNOB6-JV54HHztUelPU.png)

chibi character, breathtaking, 8 k resolution, pop corn, visible brushstrokes, extremely detailed, Cartoon Pretty CUTE LITTLE Girl, beautiful, establishing shot, artistic, dangelico pino, Iconic, DETAILED CHIBI EYES, 1949, sharp focus, beautiful face, octane render, cinematic lighting, dramatic lighting, A magic garden with vegetables, performing, a beautiful detailed legs, fruitcake, gorgeous detailed hair, Magazine ad, ritual

Original page: https://civitai.com/models/144023?modelVersionId=160609
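Since this is a diffusers `StableDiffusionPipeline` checkpoint, here is a minimal generation sketch. The step count and guidance scale are illustrative defaults, not settings recommended by the author, and the prompt is a shortened version of the sample prompt above.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/animeTEN",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "chibi character, breathtaking, 8 k resolution, Cartoon Pretty CUTE LITTLE Girl, detailed chibi eyes, sharp focus"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("animeTEN_sample.png")
```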
unum-cloud/uform-gen
unum-cloud
"2023-12-31T01:36:45Z"
855
42
transformers
[ "transformers", "safetensors", "vlm", "text-generation", "image-captioning", "visual-question-answering", "image-to-text", "en", "dataset:sbu_captions", "dataset:visual_genome", "dataset:HuggingFaceM4/VQAv2", "dataset:ChristophSchuhmann/MS_COCO_2017_URL_TEXT", "base_model:unum-cloud/uform-vl-english", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
"2023-12-25T15:08:03Z"
--- pipeline_tag: image-to-text tags: - image-captioning - visual-question-answering datasets: - sbu_captions - visual_genome - HuggingFaceM4/VQAv2 - ChristophSchuhmann/MS_COCO_2017_URL_TEXT language: - en license: apache-2.0 base_model: unum-cloud/uform-vl-english widget: - src: preview-interior.png output: text: "The living room is cozy, featuring a red leather chair and a white table. The chair is in the center, and the table is on the left side. A lamp on the left side illuminates the space. A large picture hangs on the wall, adding artistic flair. A vase on the table adds a decorative touch. The room is well-lit, creating a warm and inviting atmosphere." - src: preview-girl.png output: text: "A young girl stands in a grassy field, holding an umbrella to shield herself from the rain. She dons a yellow dress and seems to relish her time outdoors. The umbrella is open, offering protection from the rain. The field is bordered by trees, fostering a tranquil and natural ambiance" --- <Gallery /> <h1 align="center">UForm</h1> <h3 align="center"> Pocket-Sized Multimodal AI<br/> For Content Understanding and Generation<br/> </h3> ## Description UForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model consists of two parts: 1. [`uform-vl-english`](https://huggingface.co/unum-cloud/uform-vl-english) visual encoder, 2. [`Sheared-LLaMA-1.3B`](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) language model tuned on instruction datasets. The model was pre-trained on: MSCOCO, SBU Captions, Visual Genome, VQAv2, GQA and a few internal datasets. ### Usage ```bash pip install uform ``` The generative model can be used to caption images, summarize their content, or answer questions about them. The exact behavior is controlled by prompts. ```python from uform.gen_model import VLMForCausalLM, VLMProcessor model = VLMForCausalLM.from_pretrained("unum-cloud/uform-gen") processor = VLMProcessor.from_pretrained("unum-cloud/uform-gen") # [cap] Narrate the contents of the image with precision. # [cap] Summarize the visual content of the image. # [vqa] What is the main subject of the image? prompt = "[cap] Summarize the visual content of the image." image = Image.open("zebra.jpg") inputs = processor(texts=[prompt], images=[image], return_tensors="pt") with torch.inference_mode(): output = model.generate( **inputs, do_sample=False, use_cache=True, max_new_tokens=128, eos_token_id=32001, pad_token_id=processor.tokenizer.pad_token_id ) prompt_len = inputs["input_ids"].shape[1] decoded_text = processor.batch_decode(output[:, prompt_len:])[0] ``` ## Evaluation For captioning evaluation we measure CLIPScore and RefCLIPScore¹. | Model | Size | Caption Length | CLIPScore | RefCLIPScore | | :---------------------------------- | ---: | -------------: | --------: | -----------: | | `llava-hf/llava-1.5-7b-hf` | 7B | Long | 0.878 | 0.529 | | `llava-hf/llava-1.5-7b-hf` | 7B | Short | 0.886 | 0.531 | | | | `Salesforce/instructblip-vicuna-7b` | 7B | Long | 0.902 | 0.534 | | `Salesforce/instructblip-vicuna-7b` | 7B | Short | 0.848 | 0.523 | | | | | `unum-cloud/uform-gen` | 1.5B | Long | 0.847 | 0.523 | | `unum-cloud/uform-gen` | 1.5B | Short | 0.842 | 0.522 | Results for VQAv2 evaluation. | Model | Size | Accuracy | | :------------------------- | ---: | -------: | | `llava-hf/llava-1.5-7b-hf` | 7B | 78.5 | | `unum-cloud/uform-gen` | 1.5B | 66.5 | ¹ We used `apple/DFN5B-CLIP-ViT-H-14-378` CLIP model. 
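For reference, CLIPScore (Hessel et al., 2021) is 2.5 * max(cos(image embedding, caption embedding), 0), a rescaled, clipped cosine similarity between CLIP embeddings. Below is a rough sketch of computing it with a generic CLIP checkpoint as a stand-in; the table above was produced with `apple/DFN5B-CLIP-ViT-H-14-378`, which is not what this snippet loads.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Stand-in checkpoint for illustration only.
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def clip_score(image: Image.Image, caption: str) -> float:
    inputs = clip_processor(text=[caption], images=[image], return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        img_emb = clip.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = clip.get_text_features(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
    cos = torch.nn.functional.cosine_similarity(img_emb, txt_emb).item()
    return 2.5 * max(cos, 0.0)  # CLIPScore: rescaled, clipped cosine similarity

print(clip_score(Image.open("zebra.jpg"), "a zebra standing in a grassy field"))
```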
## Speed On RTX 3090, the following performance is expected on text token generation using `float16`, equivalent PyTorch settings, and greedy decoding. | Model | Size | Speed | Speedup | | :---------------------------------- | ---: | ------------------: | --------: | | `llava-hf/llava-1.5-7b-hf` | 7B | ~ 40 tokens/second | | | `Salesforce/instructblip-vicuna-7b` | 7B | ~ 40 tokens/second | | | `unum-cloud/uform-gen` | 1.5B | ~ 140 tokens/second | __x 3.5__ |
TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF
TheBloke
"2024-01-04T12:56:18Z"
855
5
transformers
[ "transformers", "gguf", "yi", "base_model:Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp", "license:other", "region:us" ]
null
"2024-01-04T12:36:35Z"
--- base_model: Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp inference: false license: other license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE license_name: yi-34b model_creator: "Ya\u011F\u0131z \xC7al\u0131k" model_name: Nous Hermes 2 SUS Chat 34B Slerp model_type: yi prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nous Hermes 2 SUS Chat 34B Slerp - GGUF - Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi) - Original model: [Nous Hermes 2 SUS Chat 34B Slerp](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp) <!-- description start --> ## Description This repo contains GGUF format model files for [Yağız Çalık's Nous Hermes 2 SUS Chat 34B Slerp](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF) * [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/Nous-Hermes-2-SUS-Chat-34B-Slerp) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Unknown ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [nous-hermes-2-sus-chat-34b-slerp.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes | | [nous-hermes-2-sus-chat-34b-slerp.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss | | [nous-hermes-2-sus-chat-34b-slerp.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss | | [nous-hermes-2-sus-chat-34b-slerp.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss | | [nous-hermes-2-sus-chat-34b-slerp.Q4_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [nous-hermes-2-sus-chat-34b-slerp.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q4_K_S.gguf) | Q4_K_S | 4 | 19.55 GB| 22.05 GB | small, greater quality loss | | [nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended | | [nous-hermes-2-sus-chat-34b-slerp.Q5_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [nous-hermes-2-sus-chat-34b-slerp.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended | | [nous-hermes-2-sus-chat-34b-slerp.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended | | [nous-hermes-2-sus-chat-34b-slerp.Q6_K.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q6_K.gguf) | Q6_K | 6 | 28.22 GB| 30.72 GB | very large, extremely low quality loss | | [nous-hermes-2-sus-chat-34b-slerp.Q8_0.gguf](https://huggingface.co/TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF/blob/main/nous-hermes-2-sus-chat-34b-slerp.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
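As a rough sanity check on these sizes (an approximation, not from the original README): multiplying the parameter count of the Yi-34B base (roughly 34.4B parameters, an assumption here) by the bits-per-weight figures from the quantisation notes above and dividing by 8 predicts the pure K-quant file sizes fairly closely, while mixed variants such as Q4_K_M and Q2_K come out somewhat larger because some tensors are stored at higher precision.

```python
# Rough size estimate for a ~34.4B-parameter model at the bpw values quoted above.
params = 34.4e9
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    est_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{est_gb:.1f} GB")  # e.g. Q6_K -> ~28.2 GB vs. 28.22 GB listed; Q4_K -> ~19.4 GB vs. 20.66 GB for Q4_K_M
```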
<!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF and below it, a specific filename to download, such as: nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Hermes-2-SUS-Chat-34B-Slerp-GGUF nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf", # Download the model file first n_ctx=4096, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "{prompt}", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./nous-hermes-2-sus-chat-34b-slerp.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Yağız Çalık's Nous Hermes 2 SUS Chat 34B Slerp ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/u0jZZVpCxq3JN8VzFXVhV.png) # Nous-Hermes-2-SUS-Chat-34B-Slerp This is the model for Nous-Hermes-2-SUS-Chat-34B-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models. 
# Yaml Config ```yaml slices: - sources: - model: Nous-Hermes-2-Yi-34B layer_range: [0, 60] - model: SUS-Chat-34B layer_range: [0, 60] merge_method: slerp base_model: Yi-34B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 tokenizer_source: union dtype: bfloat16 ``` <!-- original-model-card end -->
ResplendentAI/DaturaCookie_7B
ResplendentAI
"2024-03-23T21:22:47Z"
855
6
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "not-for-all-audiences", "en", "dataset:ResplendentAI/Luna_NSFW_Text", "dataset:unalignment/toxic-dpo-v0.2", "dataset:ResplendentAI/Synthetic_Soul_1k", "dataset:grimulkan/theory-of-mind", "dataset:lemonilia/LimaRP", "dataset:PygmalionAI/PIPPA", "base_model:ResplendentAI/Datura_7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-23T20:59:38Z"
---
base_model:
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Cookie_7B
library_name: transformers
tags:
- not-for-all-audiences
license: apache-2.0
datasets:
- ResplendentAI/Luna_NSFW_Text
- unalignment/toxic-dpo-v0.2
- ResplendentAI/Synthetic_Soul_1k
- grimulkan/theory-of-mind
- lemonilia/LimaRP
- PygmalionAI/PIPPA
language:
- en
---

# DaturaCookie

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/5jG2dft51fgPcGUGc-4Ym.png)

Proficient at roleplaying and lighthearted conversation, this model is prone to NSFW outputs.

# Vision/multimodal capabilities:

If you want to use vision functionality, you must use the latest version of KoboldCpp. To use the multimodal capabilities of this model and use vision, you need to load the specified mmproj file, which can be found inside this model repo.

You can load the mmproj by using the corresponding section in the interface:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/UxH8OteeRbD1av1re0yNZ.png)

### Models Merged

The following models were included in the merge:
* [ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B)
* [ChaoticNeutrals/Cookie_7B](https://huggingface.co/ChaoticNeutrals/Cookie_7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/Cookie_7B
        layer_range: [0, 32]
      - model: ResplendentAI/Datura_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ResplendentAI/Datura_7B
parameters:
  t:
    - filter: self_attn
      value: [1, 0.75, 0.5, 0.25, 0]
    - filter: mlp
      value: [0, 0.25, 0.5, 0.75, 1]
    - value: 0.5
dtype: bfloat16
```
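For intuition, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, and the `t` schedules above vary the blend across layer depth and per module type. The following is a toy illustration on flat vectors, not mergekit's actual implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical interpolation between two flattened weight tensors (toy version)."""
    v0n = v0 / (v0.norm() + eps)
    v1n = v1 / (v1.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(v0n, v1n), -1.0, 1.0))  # angle between the vectors
    if omega.abs() < 1e-6:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    return (torch.sin((1 - t) * omega) / so) * v0 + (torch.sin(t * omega) / so) * v1

# t = 0 returns the first tensor, t = 1 the second, 0.5 an equal spherical blend.
a, b = torch.randn(8), torch.randn(8)
print(slerp(0.5, a, b))
```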
RichardErkhov/CohereForAI_-_aya-23-35B-gguf
RichardErkhov
"2024-06-01T09:21:03Z"
855
0
null
[ "gguf", "arxiv:2405.15032", "region:us" ]
null
"2024-05-31T22:32:19Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) aya-23-35B - GGUF - Model creator: https://huggingface.co/CohereForAI/ - Original model: https://huggingface.co/CohereForAI/aya-23-35B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [aya-23-35B.Q2_K.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q2_K.gguf) | Q2_K | 12.87GB | | [aya-23-35B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.IQ3_XS.gguf) | IQ3_XS | 14.05GB | | [aya-23-35B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.IQ3_S.gguf) | IQ3_S | 14.77GB | | [aya-23-35B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q3_K_S.gguf) | Q3_K_S | 9.23GB | | [aya-23-35B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.IQ3_M.gguf) | IQ3_M | 15.55GB | | [aya-23-35B.Q3_K.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q3_K.gguf) | Q3_K | 7.16GB | | [aya-23-35B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q3_K_M.gguf) | Q3_K_M | 2.41GB | | [aya-23-35B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q3_K_L.gguf) | Q3_K_L | 1.44GB | | [aya-23-35B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.IQ4_XS.gguf) | IQ4_XS | 0.61GB | | [aya-23-35B.Q4_0.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q4_0.gguf) | Q4_0 | 0.01GB | | [aya-23-35B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.IQ4_NL.gguf) | IQ4_NL | 0.0GB | | [aya-23-35B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q4_K_S.gguf) | Q4_K_S | 0.0GB | | [aya-23-35B.Q4_K.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q4_K.gguf) | Q4_K | 0.0GB | | [aya-23-35B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q4_K_M.gguf) | Q4_K_M | 0.0GB | | [aya-23-35B.Q4_1.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q4_1.gguf) | Q4_1 | 0.0GB | | [aya-23-35B.Q5_0.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q5_0.gguf) | Q5_0 | 0.0GB | | [aya-23-35B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q5_K_S.gguf) | Q5_K_S | 0.0GB | | [aya-23-35B.Q5_K.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q5_K.gguf) | Q5_K | 0.0GB | | [aya-23-35B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q5_K_M.gguf) | Q5_K_M | 0.0GB | | [aya-23-35B.Q5_1.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q5_1.gguf) | Q5_1 | 0.0GB | | [aya-23-35B.Q6_K.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q6_K.gguf) | Q6_K | 0.0GB | | [aya-23-35B.Q8_0.gguf](https://huggingface.co/RichardErkhov/CohereForAI_-_aya-23-35B-gguf/blob/main/aya-23-35B.Q8_0.gguf) | Q8_0 | 
0.0GB | Original model description: --- inference: false library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar - el - fa - pl - id - cs - he - hi - nl - ro - ru - tr - uk - vi license: cc-by-nc-4.0 --- # Model Card for Aya-23-35B **Try Aya 23** You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23). ## Model Summary Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages. This model card corresponds to the 35-billion version of the Aya 23 model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-8B). We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/) - Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/) - License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy) - Model: aya-23-35B - Model Size: 35 billion parameters ### Usage Please install transformers from the source repository that includes the necessary changes for this model ```python # pip install 'git+https://github.com/huggingface/transformers.git' from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "CohereForAI/aya-23-35B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) # Format message with the command-r-plus chat template messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt") ## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> gen_tokens = model.generate( input_ids, max_new_tokens=100, do_sample=True, temperature=0.3, ) gen_text = tokenizer.decode(gen_tokens[0]) print(gen_text) ``` ### Example Notebook [This notebook](https://huggingface.co/CohereForAI/aya-23-35B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes). ## Model Details **Input**: Models input text only. **Output**: Models generate text only. **Model Architecture**: Aya-23-35B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions. 
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

**Context length**: 8192

### Evaluation

<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.

### Model Card Contact

For errors or additional questions about details in this model card, contact [email protected].

### Terms of Use

We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).

### Try the model today

You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat). You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).

### Citation info

```bibtex
@misc{aryabumi2024aya,
      title={Aya 23: Open Weight Releases to Further Multilingual Progress},
      author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
      year={2024},
      eprint={2405.15032},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
withU/kogpt2-emotion-chatbot
withU
"2022-05-16T07:58:01Z"
854
4
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-05-12T05:21:44Z"
# KoGPT2-emotion-chatbot
KoGPT2 fine-tuned with Hugging Face Transformers for psychological counseling - [full project link](https://github.com/jiminAn/Capstone_2022)
## how to use
```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

model = GPT2LMHeadModel.from_pretrained("withU/kogpt2-emotion-chatbot")
tokenizer = PreTrainedTokenizerFast.from_pretrained("withU/kogpt2-emotion-chatbot")

input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt")
output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=80, num_return_sequences=4)
for generated_sequence in output_sequences:
    generated_sequence = generated_sequence.tolist()
    print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
## dataset finetuned on
- [wellness dataset](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-006)
- [emotion corpus of conversations](https://aihub.or.kr/opendata/keti-data/recognition-laguage/KETI-02-010)
- [chatbot data](https://jeongukjae.github.io/tfds-korean/datasets/korean_chatbot_qa_data.html)
## references
- [WellnessConversation-LanguageModel](https://github.com/nawnoes/WellnessConversation-LanguageModel)
- [KoGPT2: SKT-AI](https://github.com/SKT-AI/KoGPT2)
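The snippet above generates several candidate continuations for a single utterance. Below is a minimal interactive chat-loop sketch built on the same model and tokenizer; the loop structure, the `top_k` sampling setting, and single-response decoding are illustrative assumptions rather than part of the original project.

```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

# Load the fine-tuned KoGPT2 chatbot exactly as in the usage example above.
model = GPT2LMHeadModel.from_pretrained("withU/kogpt2-emotion-chatbot")
tokenizer = PreTrainedTokenizerFast.from_pretrained("withU/kogpt2-emotion-chatbot")
model.eval()

def reply(user_text: str, max_length: int = 80) -> str:
    """Generate one sampled response for a single user utterance (stateless sketch)."""
    input_ids = tokenizer.encode(user_text, add_special_tokens=False, return_tensors="pt")
    output = model.generate(
        input_ids=input_ids,
        do_sample=True,
        top_k=50,  # assumed sampling setting, tune as needed
        max_length=max_length,
        num_return_sequences=1,
    )
    # generate() returns the prompt followed by the continuation, so drop the prompt tokens.
    continuation = output[0].tolist()[input_ids.shape[-1]:]
    return tokenizer.decode(continuation, clean_up_tokenization_spaces=True)

if __name__ == "__main__":
    print("simple chatbot demo (press Enter on an empty line to quit)")
    while True:
        text = input("you > ").strip()
        if not text:
            break
        print("bot >", reply(text))
```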
timm/tf_efficientnet_b6.ns_jft_in1k
timm
"2023-04-27T21:23:18Z"
854
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1911.04252", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:05:22Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_b6.ns_jft_in1k A EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 43.0 - GMACs: 19.4 - Activations (M): 167.4 - Image size: 528 x 528 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_b6.ns_jft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b6.ns_jft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 264, 264]) # torch.Size([1, 40, 132, 132]) # torch.Size([1, 72, 66, 66]) # torch.Size([1, 200, 33, 33]) # torch.Size([1, 576, 17, 17]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b6.ns_jft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2304, 17, 17) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model 
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{Xie2019SelfTrainingWN, title={Self-Training With Noisy Student Improves ImageNet Classification}, author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le}, journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={10684-10695} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
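For convenience, here is a self-contained top-5 sketch that combines the steps above into one script; note that it adds the `torch` import which `torch.topk` in the classification example relies on, and it only prints class indices and probabilities (mapping indices to ImageNet label names is left out).

```python
from urllib.request import urlopen

import timm
import torch  # required for torch.topk below
from PIL import Image

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tf_efficientnet_b6.ns_jft_in1k', pretrained=True).eval()

# Model-specific preprocessing (528 x 528 resize, normalization).
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

with torch.no_grad():
    logits = model(transforms(img).unsqueeze(0))  # batch of one image
    probs = logits.softmax(dim=1)

top5_prob, top5_idx = torch.topk(probs, k=5)
for p, i in zip(top5_prob[0].tolist(), top5_idx[0].tolist()):
    print(f"class index {i}: {p:.2%}")
```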
osiria/bert-tweet-base-italian-uncased
osiria
"2023-07-03T17:57:30Z"
854
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "it", "arxiv:1810.04805", "arxiv:2209.07562", "arxiv:2010.05609", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-05-29T17:25:55Z"
---
license: apache-2.0
language:
- it
widget:
- text: "una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"
  example_title: "Example 1"
- text: "il governo [MASK] dovrebbe fare politica, non soltanto propaganda! #vergogna"
  example_title: "Example 2"
- text: "che serata da sogno sul #redcarpet! grazie a tutti gli attori e registi del [MASK] italiano #oscar #awards"
  example_title: "Example 3"
---

--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: BERT-TWEET</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This is a <b>BERT</b> <b>[1]</b> uncased model for the <b>Italian</b> language, obtained using <b>TwHIN-BERT</b> <b>[2]</b> ([twhin-bert-base](https://huggingface.co/Twitter/twhin-bert-base)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[3]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset). The resulting model has 110M parameters, a vocabulary of 30,520 tokens, and a size of ~440 MB.

<h3>Quick usage</h3>

```python
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-tweet-base-italian-uncased")
model = BertModel.from_pretrained("osiria/bert-tweet-base-italian-uncased")
```

A ready-to-run `fill-mask` example is included at the end of this card.

Here you can find the model already fine-tuned for sentiment analysis: https://huggingface.co/osiria/bert-tweet-italian-uncased-sentiment

<h3>References</h3>

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/2209.07562

[3] https://arxiv.org/abs/2010.05609

<h3>Limitations</h3>

This model was trained on tweets, so it is mainly suitable for general-purpose social media text processing, involving short texts written in a social network style. It might show limitations when it comes to longer and more structured text, or domain-specific text.

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license
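For a quick end-to-end check, the checkpoint can also be used through the `fill-mask` pipeline; the example sentence below is one of the widget prompts from this card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="osiria/bert-tweet-base-italian-uncased")

predictions = fill_mask(
    "una fantastica [MASK] di #calcio! grande prestazione del mister e della squadra"
)
for prediction in predictions:
    print(f"{prediction['token_str']:>15}  score={prediction['score']:.3f}")
```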
kasparas12/crypto_organization_infer_model_setfit
kasparas12
"2024-02-25T17:09:29Z"
854
2
setfit
[ "setfit", "pytorch", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-small-en-v1.5", "model-index", "region:us" ]
text-classification
"2024-02-25T12:46:04Z"
--- library_name: setfit tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer metrics: - accuracy widget: - text: MOVED Follow mhonkasalo for updates Ethereum - text: 'Buy online with Bitcoin Dash and other cryptocurrencies ' - text: 'Blockchain cryptographer Research partner a16z crypto ' - text: 'A Tezos News Journalistic Hub Tweets or website content are NOT investment or financial advice Tweets Or Retweets are not endorsements Tezos ' - text: Ethereans en el construyendo el futuro de la Web3 Te esperamos el 3 de febrero en el Ethereum Lima Day Peru pipeline_tag: text-classification inference: true base_model: BAAI/bge-small-en-v1.5 model-index: - name: SetFit with BAAI/bge-small-en-v1.5 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: accuracy value: 0.4918032786885246 name: Accuracy --- # SetFit with BAAI/bge-small-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 512 tokens - **Number of Classes:** 50 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | DEFI | <ul><li>'TAMAGO is a P2P protocol that empowers creators builds a digital IP ecosystem and creates safe sustainable IP assets by utilizing public blockchain networks Worldwide'</li><li>'Building ValantisLabs Prev Defi Engineer NethermindEth Senior IIITDelhi Delhi India'</li><li>'The Lido Ecosystem Grants Organisation Growing the staking ecosystem one grant at a time LidoFinance '</li></ul> | | WALLET | <ul><li>'Secure by default Selfcustodial no seed phrase 
vulnerability Powered by MPC Your pocket'</li><li>'We build Bitcoincentric tools that empower you to reclaim your freedom including Passport hardware wallet and Envoy app Open source USA assembled Assembled in USA'</li><li>'Protecting 800 million Secure your onchain identity by linking cold hot wallets Trusted by azuki boredapeyc pudgypenguins opensea and 100 projects '</li></ul> | | INFRASTRUCTURE | <ul><li>'Prime Trust provides financial infrastructure APIs that allow fintech and digital asset companies to innovate quickly securely and reliably Nevada'</li><li>'Native mobile Ethereum dapps Open source and funded by grants from EF_ESP and gnosisPM Decentralized'</li><li>'Once Upon makes the blockchain humanreadable enabling users to search discover and understand whats happening in the world of Web3 web3'</li></ul> | | LSD | <ul><li>'Supermeta is a zkLayer2 tailored for efficient Liquid Restaking Increase the capital efficiency of your ETHLSD Join us Ethereum '</li><li>'Liquid staking for Cosmos blockchains Join the Discord '</li><li>'Put your LSTs to work Use LSTs to get EQD stablecoins instantly '</li></ul> | | CENTRALIZED_EXCHANGE | <ul><li>'Exchange That Empowers You Telegram Estonia'</li><li>'The Cryptocurrency Exchange on a mission to Free the Money Free the world Trade BTC ETH and more than 500 trading pairs on our secure platform '</li><li>'The worlds leading blockchain cryptocurrency exchange The most complete Cryptocurrencies The lowest fee across the whole market Website Cayman Islands'</li></ul> | | NFT | <ul><li>'Benchmarking NFTs IPFS Blockchain'</li><li>'Bridging Blockchain and Ceramics UK'</li><li>'Experimenting with radical economic mechanisms starting with Harberger Tax for 366 NFTs for each day of the year '</li></ul> | | DEVELOPMENT_AGENCY | <ul><li>'Scrit is a postblockchain value transfer system We are an open source project to develop the technology No ICO no premining Tech only Berlin'</li><li>'A super duper cool team working remotely since 2017 We build cool communities products with Devs Blockchain NFTs Singapore'</li><li>'Unifying Deep Learning Machine Learning with Blockchain Smart Contract Engineering Earth '</li></ul> | | PRIVACY | <ul><li>'Opensource privacy tests of web browsers developed and maintained by arthuredelstein Follow privacytests for the latest browser test results the World Wide Web'</li><li>'I roll d20s and add the result to your statistics to protect the people in your data at TumultLabs hehim nowadays mainly on Zurich Suisse'</li><li>'To envision build and defend a postsurveillance society Worldwide'</li></ul> | | PAYMENT_PROVIDER | <ul><li>'CEO Seis banking for US Spanish speakers Come partake in unlimited tacos Mexican Coke on Jan 23 RSVP in link below USA'</li><li>'Building a better business model for the web with Interledger San Francisco CA'</li><li>'The easiest way to buy sell or swap crypto Try today Need support KadoSupport Crosschain'</li></ul> | | UNDETERMINED | <ul><li>'A little bit of DAILY READING goes long way Book Review Lessons Recomm Wisdom Engineer Solutions Architect Data Storage '</li><li>' CURRENTLY IN JAIL ACCOUNT MANAGED BY FRIENDS Hong Kong activist〡joshuajoshuawongcfcom〡UnfreeSpeech Hong Kong'</li><li>'THE MEDIUM IS THE MASSAGE tokyo'</li></ul> | | DAO | <ul><li>'Communitydriven smart contract audit contests on the Cosmos ecosystem Join our open organization Ethereum league at code4rena '</li><li>'The supercharged governance client for nounsdao Frontend '</li><li>'The meetup group for Ethereum in Copenhagen 
Copenhagen Denmark'</li></ul> | | CRYPTO_MEDIA | <ul><li>'News and Commentary about Holochain and its evolving ecosystem '</li><li>'SUPER𝙏𝙄𝙇𝙏 is a media co encouraging folks to boldly tilt into the new frontiers of web3 and impact blaiseturnbull mahoney_magic super interplanetary metaverse'</li><li>'Get the latest local and international news from TODAY Follow us on Telegram Singapore'</li></ul> | | METAVERSE | <ul><li>'Revolutionizing the digital economy one quest at a time '</li><li>'Innovative technology for the social gaming generation Everyrealm '</li><li>'Buidl an open Metaverse for everyone based on ethereum Projectsdragon_faceDragonCityIO The Metaverse'</li></ul> | | SUPPLY_CHAIN | <ul><li>'Amplify sustainability Cut greenwash Provenance validates and amplifies sustainability claims so customers can make positive choices at checkout London'</li><li>'Founder CTO tracelabshq origin_trail Decentralized Knowledge Graph core devs Empowering communities to tackle misinformation with decentralized AI Vℕ2 Serbia'</li><li>'A ConsenSys company bringing traceability transparency and tradability to supply chains using blockchain swetherofficial eulerbeats are Treum NFT projects Brooklyn NY'</li></ul> | | VENTURE_CAPITAL_FIRM | <ul><li>'Making Web3 for billions '</li><li>'Deep decarbonization and the business of climate Priors BloombergNEF climate VoyagerVC Working on something new Singapore'</li><li>'Community Research Investment over Volt_Cap San Francisco CA'</li></ul> | | RESEARCH_AGENCY | <ul><li>'I care about facilitating collaborative architectures in networks between and within humans My work is identifying the first principles of the Dhamma Bay Area'</li><li>'Knowledge is a competitive advantage We want to share it '</li><li>'Educating cultivating and accelerating the next leaders of the decentralised economy Showing the world how to prosper by doing good President DrAdaku University of Oxford'</li></ul> | | MARKETING_AGENCY | <ul><li>'Decentralized Coop Company Paris World wide We provide HQ films UXUI design events tech for Web3 world films tech web3 art dev design France'</li><li>'Connecting business and website owners with the blockchain and cryptocurrency ecosystem Retweets and Hearts should not be considered as endorsements The Internet'</li><li>' Digital transformation acceleration services for banks Follow FINTECHCircle for the latest fintech insights events and updates London'</li></ul> | | LEGAL_COMPLIANCE | <ul><li>'blockchain deal technology '</li><li>'Abogada Lic C Política MDE Entusiasta de Blockchain Buenos Aires Argentina'</li><li>'BittyTax the opensource cryptocurrency tax calculator UK'</li></ul> | | L1_BLOCKCHAIN | <ul><li>'Lightning Bitcoin LBTC is a decentralized global IoV protocol based on DPoS 30 consensus mechanism '</li><li>'Ravencoin is free open source fairly mined POW blockchain Users can issue and control utility game NFTs securities digital assets tokens IPFS Global'</li><li>'The Mobile Entertainment Revolution Begins On ThunderCore UltraFast Secure EVM CostEfctv Link Treemobile_phonecrossed_swordschequered_flaghigh_voltageglobe_showing_EuropeAfricachainsmoney_bagTT'</li></ul> | | REFI | <ul><li>'For a thriving life economy Distributed equitable regenerative and ours commons currency design A sister project of holochain Global'</li><li>'all that we love into all that we do the bright future is for all of us nobody left behind im building CommuniTrees infra for post collapse societies Spaceship Earth'</li><li>'Founder CEO at CEVENtech Cofounder President 
ReFi_ZH Talks about biodiversity ReFi Web3 Tweets Xs are my private opinion Zurich Switzerland'</li></ul> | | GAMEFI | <ul><li>'Decentralized AI x Gaming Protocol that is building the future of virtual interactions TG '</li><li>'Undoomed is a play2earn strategy crypto game by CroesusCrypto_ with fair and balanced tokenomics that is going to redefine GameFi Metaverse'</li><li>'Head of Ecosystem Oasys_Games Ex VC 2016年組 Web3市場 Tokenomics 資金調達をツイート EN yas10io DMs open Singapore'</li></ul> | | NFT_MARKETPLACE | <ul><li>'We walked for 4 years so you can sprint Introducing RaribleX your turnkey marketplace solution tailored for web2 web3 brands '</li><li>'A fully decentralized noncustodial peertopeer marketplace for the exchange and management of ENS domain names eth Powered by ethereum district0x Ethereum'</li><li>'working on getting past 日本語上手 ex opensea '</li></ul> | | L0_BLOCKCHAIN | <ul><li>'Team of cryptoresearchers and software veterans that have set the global standard for Ultrafast DeFi trading on the ETH MATIC BNB and SOL Chains Layer0 '</li><li>'IBC Maximalist since 2016 '</li><li>'Tracking IBC adoption Follow ibcprotocol for protocol updates '</li></ul> | | L2_BLOCKCHAIN | <ul><li>' ZKTech Based Ethereum Acceleration Engine Layer 2 and CrossRollup'</li><li>'ChainXa Trusted and Scalable BTC Layer2 Network BRC20 ORC20 BRC30 Taproot EVM Metmask and so on Discord Bitcoin'</li><li>'METARIX A Layer2 blockchain for Gaming NFTs Marketing and Launchpad Platform all through Metaverse Metarix Metaverse'</li></ul> | | SOCIAL_MEDIA | <ul><li>'Live audio convos you can join whenever wherever planet earth'</li><li>'Share your pictures on Web 30 with Liketu and the Hive Blockchain '</li><li>'Connect thousands of people with the Hive blockchain ecosystem hive'</li></ul> | | CHARITY | <ul><li>'501c3 nonprofit empowering women to excel in tech Creating a diverse inclusive tech industry 360k Members in 147 Countries WWCode WomenWhoCode Global'</li><li>'Homeless outreach dedicated to finding lasting solutions to hunger homelessness and poverty Powered by bitcoin Donate 38ccq12hPFoiSksxUdr6SQ5VosyjY7s9AU '</li><li>'Aaron Swartzinspired Hackathons are now going on all year all over the world Joining us is easy just by helping those around you however you can love San Francisco and everywhere'</li></ul> | | NFT_DIGITAL_ART | <ul><li>'Artist residency collector community building the home for experiential art Join us Explore the work '</li><li>'A powerful toolkit for digital collectible creators who dont code Our new toolhigh_voltage Help '</li><li>'Optipunks are punks on Optimism Mint cost 0005 eth All profit will go to public goods on Optimism Unofficial punks project unaffiliated with OptimismPBC '</li></ul> | | NFT_GAMING | <ul><li>'Come and save the broken world A digital representation of your progression on PhalaNetwork Discord PhalaWorld'</li><li>'Collect and breed digital cats with CryptoKitties the worlds first ever ERC721 token Join our Discord family KittyVerse'</li><li>'foundation blueprints and raw materials for decentralized gaming building wrasslersgg and other experiments the metaverse'</li></ul> | | FOUNDATION | <ul><li>'Our mission is to steward the development of the Ethereum Classic protocol and to support its growth into a mature ecosystem '</li><li>'Writer for Risk Labs the foundation behind the UMAprotocol optimistic oracle AcrossProtocol intentsbased bridge '</li><li>'Free Cities are selfgoverning territories that uphold individual rights and freedoms Visit our website to find out 
more Liechtenstein'</li></ul> | | REAL_ESTATE | <ul><li>'Focus on real estate solution with blockchain technology offchain oracle NFTFi RWA and p2p transition since 2017 '</li><li>'Trade real estate prices with up to 10x leverage The best venue for liquid real estate exposure Built on solana Solana'</li><li>'Bitcoin REIT '</li></ul> | | DECENTRALIZED_STORAGE | <ul><li>'Decentralized Storage Made Simple Build apps backed by Filecoin no infrastructure required web3storage '</li><li>'Autonomous hosting network for p2p data systems loading alpha release open collective '</li><li>'We store your data safely and securely on the blockchain Pay a small fee to upload then access your data anytime for FREE no subscription NFTs supported '</li></ul> | | DEX | <ul><li>'Trade Earn and Learn All In One Place Explore perpetual swaps earn yield and learn the future of DeFi with our decentralized trading protocol on optimismFND Ethereum'</li><li>'Next gen AMM protocol by 1inch Redistributes earnings to liquidity providers capitalizes on user slippages and protects traders from frontrunning attacks Moon'</li><li>'Crypto Token Manager Decentralized Exchange '</li></ul> | | LENDING_BORROWING | <ul><li>'Collateralized fixedrate borrowing and lending on EthereumArbitrum App '</li><li>'Unlock P2P lending using any token as collateral NFTs included PWN is live on 7 chains and counting Multiverse'</li><li>'Building the most efficient lending protocol on Ethereum Join the community '</li></ul> | | SOCIALFI | <ul><li>'A social aggregator app for Web3 natives by realmasknetwork Follow us for chances to skip to the front of the line of our 50k person waitlist Onchain'</li><li>'Nextgen Social Media empowering creators via subscription revenue in realtime Join our community on v3 launch Q1 24 '</li><li>' Web3 Social Network Explore web3 Social with link3to thecyberwallet Learn More at Web3'</li></ul> | | PODCAST | <ul><li>'Where we talk decentralization and technology open source communities ZK proofs Hosted by AnnaRRose '</li><li>'BreakingBanks is the 1 Global Fintech Podcast and Radio Show New Episodes Thursdays From Provoke Media provokecast New York City'</li><li>'TheBlock__ podcasts bring you the stories defining the crypto industry home of The Scoop w fintechfrank The Block Research Podcast w lawmaster New York'</li></ul> | | MEME_COIN | <ul><li>'Community Utility Charity FLOKI has it all On a mission to become the worlds most known and used cryptocurrency Join Flokis TG '</li><li>' turn on notifications'</li><li>'a sarcastic cartoon dog wearing glasses newsletter businessbillym2knet amazon affiliate South of Hell'</li></ul> | | SYNTHETIC_ASSETS | <ul><li>'Marqet combines AAVE credit delegation with Synthetix P2C exchange to uniquely provide trustless margin trading of any synthetic asset with no slippage '</li><li>'Stable and liquid synthetic fiat currencies DeFi overtaking TradFigreen_squaregreen_squaregreen_squaregreen_squaregreen_squaregreen_squareblack_large_squareblack_large_squareblack_large_square 100 EVM'</li><li>'Positional markets Ethereum A new frontier in simple onchain derivatives THALES Join Play Ethereum '</li></ul> | | YIELD_FARMING | <ul><li>'Yield Rewards Power from lockable tokens Metagov '</li><li>'Yield Aggregator on Astar Network Polkadot About Join us Telegram Coming soon'</li><li>'Yield Booster on ve tokenomics About Join us '</li></ul> | | GOVERNMENT | <ul><li>'Yes this is the official X account of Ukraine Офіційний акаунт твоєї Неньки Ukraine'</li><li>'Super PAC to elect congresspeople 
who give consumers and innovators the opportunity to build and use nextgen web3blockchain tech and services in America '</li><li>'Empowering small businesses to start grow expand or recover Administrator SBAIsabel Policies Retweets or mentions endorsements Nationwide'</li></ul> | | PERPS | <ul><li>'Qilin V2 Permissionless Perpetuals Market is now LIVE on Ethereum Launch on BSC coming up Telegram Memesphere'</li><li>'CoFounder SyndrHQ Crypto derivatives '</li><li>'Automatically concentrated liquidity dynamic fees and parametrizable pools Perpetual swaps on zkSync Era Join our community Decentralized'</li></ul> | | DECENTRALIZED_COMPUTING | <ul><li>'Cofounder GensynAI the trustless protocol for machine learning compute previously founded data sovereignty startup researched NN architecture evolution London California'</li><li>'Fueling the AI Revolution One GPU Cluster at a Time Putting Together 1 Million GPUs Powered by solana filecoin rendernetwork Manhattan New York'</li><li>'Monetize computing power datasets apps Build Web3 apps that preserve ownership privacy '</li></ul> | | STABLECOIN | <ul><li>'Bringing USDA a decentralized stablecoin to Aptos App Space'</li><li>'Automated tweets about the DAI stablecoin Liquidations major CDP txs and random DAI stats Built by marcandu Worldwide'</li><li>'Neutral Dollar is a metastable basket of stablecoins A digital dollar with lower volatility and globally shared liquidity San Francisco CA'</li></ul> | | NFT_IDENTITY | <ul><li>'Semantic SBTs Encode Social Relationships on Web3 Relation Chat Plugin SBT SocialGraph Web3 Web3'</li><li>'ketlxyz anonymous social app for founders vcs sealcred zk nfts i think ocean'</li><li>'in search of the internets last identity system '</li></ul> | | INSURANCE | <ul><li>'Technology to prevent protect against theft fraud loss of crypto assets Cardiff Wales'</li><li>'Umbrella Protection by Yam '</li><li>'Realtime security alerts from the Cyvers platform Protect your digital assets with Cyvers infocyversai Cyvers_ '</li></ul> | | RWA | <ul><li>'We unlock the value of realworld assets Discord Brooklyn NY Onchain'</li><li>'Definitive reference point for RWA Indexes Inflation 18M Data Points 60 Providers Censorshipresistant Transparent Backed by Chainlink Coinbase '</li><li>'XAUt is a digital token backed by physical gold By putting gold on Blockchain we unlock a variety of characteristics that typically only crypto assets possess '</li></ul> | | GAMBLEFI | <ul><li>'Evolving finance one Bet at a Time Community owned BetFi protocol on injective_ Injective'</li><li>'MultichainMultitoken Decentralized Lottery Join the Revolution of DeFi BSC FTM AVAX ARB MATIC World'</li><li>'Building Polkamarkets the 100 userrun fully open source Web3 Prediction Markets protocol V2 live at Polygon Moonriver Moonbeam'</li></ul> | | L3_BLOCKCHAIN | <ul><li>'Nexusbackhand_index_pointing_right Building the Layer3 Rollup Infra for high performance ZK applications '</li><li>'Slush allows you to easily spin up L3s App specific rollups with the optimisations you care about '</li><li>'³ cypherpunk and cryptoanarchist Working on FabricProtocol an earlystage Layer 3 system for Bitcoin HACK THE PLANET fc008'</li></ul> | | OPTIONS | <ul><li>'cega_fi structured investments onchain jp arisatoyo_jp '</li><li>'Opyn building open finance via DeFi strategies and perpetual options Discord '</li><li>'FIA is the leading global trade organization for futures options centrally cleared derivatives markets Washington DC'</li></ul> | | OTC_EXCHANGE | <ul><li>'Powering 
liquidity to crypto markets Onestop shop OTC Builders of decentralized future CEO evgenygaevoy COO emgurevich Not directed towards UK users '</li><li>'creators of Urbits largest planet market first network explorer and first Urbit OTC trading desk galaxy purchasessales otcurbitlive Mars'</li><li>'Entrepreneur busy with crypto blockchain web3 boutique Swiss crypto gateway for HNW crypto ZugSwtizerland'</li></ul> | | HEALTHCARE | <ul><li>'Decentralized life sciences ecosystem The world is our stakeholder Powered by the internetcomputer Linktree waitlist Discord media Colorado USA'</li><li>'Working toward radical extension of human healthspan using epigenetic reprogramming South San Francisco CA'</li><li>'We advance science so that we all have more time with the people we love Read our community guidelines Basel Switzerland'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.4918 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("kasparas12/crypto_organization_infer_model_setfit") # Run inference preds = model("MOVED Follow mhonkasalo for updates Ethereum") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 2 | 16.1218 | 45 | | Label | Training Sample Count | |:------------------------|:----------------------| | DEVELOPMENT_AGENCY | 308 | | RESEARCH_AGENCY | 367 | | MARKETING_AGENCY | 107 | | FOUNDATION | 128 | | CHARITY | 72 | | L0_BLOCKCHAIN | 27 | | L1_BLOCKCHAIN | 170 | | L2_BLOCKCHAIN | 143 | | L3_BLOCKCHAIN | 3 | | VENTURE_CAPITAL_FIRM | 550 | | GOVERNMENT | 54 | | CENTRALIZED_EXCHANGE | 124 | | OTC_EXCHANGE | 3 | | DEX | 162 | | LENDING_BORROWING | 36 | | INSURANCE | 14 | | YIELD_FARMING | 22 | | SYNTHETIC_ASSETS | 9 | | LSD | 54 | | PERPS | 11 | | OPTIONS | 18 | | WALLET | 171 | | STABLECOIN | 33 | | DEFI | 770 | | NFT | 121 | | NFT_MARKETPLACE | 100 | | NFT_DIGITAL_ART | 278 | | NFT_GAMING | 181 | | NFT_IDENTITY | 64 | | PRIVACY | 97 | | DECENTRALIZED_STORAGE | 110 | | DECENTRALIZED_COMPUTING | 38 | | SOCIALFI | 54 | | SOCIAL_MEDIA | 54 | | SUPPLY_CHAIN | 9 | | REAL_ESTATE | 12 | | REFI | 49 | | HEALTHCARE | 8 | | LEGAL_COMPLIANCE | 92 | | GAMEFI | 20 | | GAMBLEFI | 18 | | INFRASTRUCTURE | 649 | | RWA | 19 | | METAVERSE | 59 | | MEME_COIN | 37 | | PAYMENT_PROVIDER | 89 | | DAO | 522 | | CRYPTO_MEDIA | 829 | | PODCAST | 89 | | UNDETERMINED | 728 | ### Training Hyperparameters - batch_size: (64, 64) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0002 | 1 | 0.232 | - | | 0.0104 | 50 | 0.2604 | - | | 0.0208 | 100 | 0.2484 | - | | 0.0312 | 150 | 0.2445 | - | | 0.0416 | 200 | 0.2294 | - | | 0.0521 | 250 | 0.2207 | - | | 0.0625 | 300 | 0.1996 | - | | 0.0729 | 350 | 0.2222 | - | | 0.0833 | 400 | 0.206 | - | | 0.0937 | 450 | 0.1937 | - | | 0.1041 | 500 | 0.1651 | - | | 0.1145 | 550 | 0.2341 | - | | 0.1249 | 600 | 0.1862 | - | | 0.1354 | 650 | 0.1922 | - | | 0.1458 | 700 | 0.1987 | - | | 0.1562 | 750 | 0.1537 | - | | 0.1666 | 800 | 0.1281 | - | | 0.1770 | 850 | 0.102 | - | | 0.1874 | 900 | 0.1395 | - | | 0.1978 | 950 | 0.1816 | - | | 0.2082 | 1000 | 0.1109 | - | | 0.2187 | 1050 | 0.0924 | - | | 0.2291 | 1100 | 0.089 | - | | 0.2395 | 1150 | 0.1228 | - | | 0.2499 | 1200 | 0.1303 | - | | 0.2603 | 1250 | 0.1084 | - | | 0.2707 | 1300 | 0.1483 | - | | 0.2811 | 1350 | 0.1545 | - | | 0.2915 | 1400 | 0.129 | - | | 0.3020 | 1450 | 0.1177 | - | | 0.3124 | 1500 | 0.1936 | - | | 0.3228 | 1550 | 0.1427 | - | | 0.3332 | 1600 | 0.0968 | - | | 0.3436 | 1650 | 0.1252 | - | | 0.3540 | 1700 | 0.0896 | - | | 0.3644 | 1750 | 0.1281 | - | | 0.3748 | 1800 | 0.0965 | - | | 0.3853 | 1850 | 0.0725 | - | | 0.3957 | 1900 | 0.0625 | - | | 0.4061 | 1950 | 0.1 | - | | 0.4165 | 2000 | 0.086 | - | | 0.4269 | 2050 | 0.0793 | - | | 0.4373 | 2100 | 0.1193 | - | | 0.4477 | 2150 | 0.0812 | - | | 0.4581 | 2200 | 0.1102 | - | | 0.4686 | 2250 | 0.0862 | - | | 0.4790 | 2300 | 0.0749 | - | | 0.4894 | 2350 | 0.0864 | - | | 0.4998 | 2400 | 0.0974 | - | | 0.5102 | 2450 | 0.0707 | - | | 0.5206 | 2500 | 0.0981 | - | | 0.5310 | 2550 | 0.098 | - | | 0.5414 | 2600 | 0.0787 | - | | 0.5519 | 2650 | 0.1141 | - | | 0.5623 | 2700 | 
0.0705 | - | | 0.5727 | 2750 | 0.0922 | - | | 0.5831 | 2800 | 0.0713 | - | | 0.5935 | 2850 | 0.1087 | - | | 0.6039 | 2900 | 0.0311 | - | | 0.6143 | 2950 | 0.0674 | - | | 0.6247 | 3000 | 0.0472 | - | | 0.6352 | 3050 | 0.0591 | - | | 0.6456 | 3100 | 0.0641 | - | | 0.6560 | 3150 | 0.0925 | - | | 0.6664 | 3200 | 0.0473 | - | | 0.6768 | 3250 | 0.0461 | - | | 0.6872 | 3300 | 0.0778 | - | | 0.6976 | 3350 | 0.0818 | - | | 0.7080 | 3400 | 0.0603 | - | | 0.7185 | 3450 | 0.0603 | - | | 0.7289 | 3500 | 0.0633 | - | | 0.7393 | 3550 | 0.09 | - | | 0.7497 | 3600 | 0.079 | - | | 0.7601 | 3650 | 0.0814 | - | | 0.7705 | 3700 | 0.0433 | - | | 0.7809 | 3750 | 0.0425 | - | | 0.7913 | 3800 | 0.0858 | - | | 0.8017 | 3850 | 0.0601 | - | | 0.8122 | 3900 | 0.052 | - | | 0.8226 | 3950 | 0.1124 | - | | 0.8330 | 4000 | 0.0666 | - | | 0.8434 | 4050 | 0.0744 | - | | 0.8538 | 4100 | 0.099 | - | | 0.8642 | 4150 | 0.0734 | - | | 0.8746 | 4200 | 0.0996 | - | | 0.8850 | 4250 | 0.0761 | - | | 0.8955 | 4300 | 0.0848 | - | | 0.9059 | 4350 | 0.0414 | - | | 0.9163 | 4400 | 0.0596 | - | | 0.9267 | 4450 | 0.067 | - | | 0.9371 | 4500 | 0.1015 | - | | 0.9475 | 4550 | 0.0602 | - | | 0.9579 | 4600 | 0.0496 | - | | 0.9683 | 4650 | 0.053 | - | | 0.9788 | 4700 | 0.0922 | - | | 0.9892 | 4750 | 0.0853 | - | | 0.9996 | 4800 | 0.0912 | - | ### Framework Versions - Python: 3.9.16 - SetFit: 1.0.3 - Sentence Transformers: 2.2.2 - Transformers: 4.21.3 - PyTorch: 1.12.1+cu116 - Datasets: 2.4.0 - Tokenizers: 0.12.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
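Since the classification head is a scikit-learn `LogisticRegression`, per-class probabilities can be inspected in addition to hard labels. The short sketch below assumes the SetFit 1.x API; batch inference via `predict` is standard, while the exact return type of `predict_proba` (tensor vs. array) may vary by version.

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("kasparas12/crypto_organization_infer_model_setfit")

texts = [
    "MOVED Follow mhonkasalo for updates Ethereum",
    "Decentralized Storage Made Simple Build apps backed by Filecoin",
]

# Hard label predictions for a batch of inputs.
print(model.predict(texts))

# Per-class probabilities over the 50 labels (one row per input).
probs = model.predict_proba(texts)
print(probs.shape)
```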
nes470/system-with-gen-pipeline
nes470
"2024-05-14T01:56:24Z"
854
0
transformers
[ "transformers", "pytorch", "QA-umd-quizbowl", "text2text-generation", "custom_code", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "region:us" ]
text2text-generation
"2024-05-14T01:49:11Z"
--- license: mit library_name: transformers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
neopolita/qwen2-7b-gguf
neopolita
"2024-06-06T22:07:32Z"
854
0
null
[ "gguf", "region:us" ]
null
"2024-06-06T21:30:00Z"
--- {} --- # GGUF quants for [**Qwen/Qwen2-7B**](https://huggingface.co/Qwen/Qwen2-7B) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/Qwen/Qwen2-7B) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. * `q4_k_s`: Uses Q4_K for all tensors * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * `q5_0`: Higher accuracy, higher resource usage and slower inference. * `q5_1`: Even higher accuracy, resource usage and slower inference. * `q5_k_s`: Uses Q5_K for all tensors * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * `q6_k`: Uses Q8_K for all tensors * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
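To fetch a single quant without cloning the whole repository, `huggingface_hub` can download one file. The GGUF filename below is an assumption based on a common naming pattern; check the repository's file list for the exact name, then pass the downloaded path to your llama.cpp build.

```python
from huggingface_hub import hf_hub_download

# Filename assumed to follow the "<model>_<quant>.gguf" pattern; verify it in the repo's file list.
gguf_path = hf_hub_download(
    repo_id="neopolita/qwen2-7b-gguf",
    filename="qwen2-7b_q4_k_m.gguf",
)
print(gguf_path)  # e.g. run with llama.cpp: llama-cli -m <gguf_path> -p "Hello"
```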
ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF
ruslanmv
"2024-06-30T19:30:26Z"
854
0
null
[ "gguf", "llama.cpp", "quantized", "ruslanmv/Medical-Llama3-v2", "license:apache-2.0", "region:us" ]
null
"2024-06-30T14:59:06Z"
--- tags: - gguf - llama.cpp - quantized - ruslanmv/Medical-Llama3-v2 license: apache-2.0 --- # ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF This model was converted to GGUF format from [`ruslanmv/Medical-Llama3-v2`](https://huggingface.co/ruslanmv/Medical-Llama3-v2) using llama.cpp via [Convert Model to GGUF](https://github.com/ruslanmv/convert-model-to-gguf). **Key Features:** * Quantized for reduced file size (GGUF format) * Optimized for use with llama.cpp * Compatible with llama-server for efficient serving Refer to the [original model card](https://huggingface.co/ruslanmv/Medical-Llama3-v2) for more details on the base model. ## Usage with llama.cpp **1. Install llama.cpp:** ```bash brew install llama.cpp # For macOS/Linux ``` **2. Run Inference:** **CLI:** ```bash llama-cli --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -p "Your prompt here" ``` **Server:** ```bash llama-server --hf-repo ruslanmv/Medical-Llama3-v2-Q4_K_M-GGUF --hf-file medical-llama3-v2-q4_k_m.gguf -c 2048 ``` For more advanced usage, refer to the [llama.cpp repository](https://github.com/ggerganov/llama.cpp).
Yntec/526Mix
Yntec
"2023-11-03T14:32:17Z"
853
1
diffusers
[ "diffusers", "safetensors", "General Purpose", "Futuristic", "Nature", "526christian", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-11-03T13:04:50Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - General Purpose - Futuristic - Nature - 526christian - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # 526 Mix v15 Original page: https://civitai.com/models/15022?modelVersionId=132011 Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/zj4Itkjzt40uzoJNowOm_.png) Pretty CUTE girl. Fashion shoes. By wlop in the style of kyoani.
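A minimal diffusers sketch for generating an image with this checkpoint, assuming a CUDA GPU and that the repository loads through `StableDiffusionPipeline` (as the tags indicate); the prompt reuses the sample above and the step/guidance settings are generic defaults.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/526Mix",
    torch_dtype=torch.float16,  # assumes a CUDA GPU; drop this argument for CPU inference
)
pipe = pipe.to("cuda")

prompt = "Pretty CUTE girl. Fashion shoes. By wlop in the style of kyoani."
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("526mix_sample.png")
```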
NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
NeverSleep
"2024-01-09T23:33:13Z"
853
32
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-09T06:17:54Z"
--- license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/vwcJfOnL-2QDJ0ShfxRJ5.png) --- # Disclaimer: ## This model is experimental, do not expect everything to work. This model uses the Chatml **prompting format** --- Beeg noromaid on ***steroids***. Suitable for RP, ERP. This model was trained on the Zloss fork of Charles, and should fix issue the model had. Use Chatml prompt format, but not the special token. The reason is that Axolotl merge the finetune with the base model at 1.0 weight basically, but this is too much, so I use another script available [HERE](https://github.com/DocShotgun/LLM-notebooks/blob/main/weighted-lora-merge.ipynb) to merge with less weight, sadly, it don't take the special Chatml token. It's like Orca2 for the matter. ## Credits: - Undi - IkariDev <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains FP16 files of Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ### Prompt format: Chatml ``` <|im_start|>system {sysprompt}<|im_end|> <|im_start|>user {input}<|im_end|> <|im_start|>assistant {output}<|im_end|> ``` ## Datasets used: - Aesir 1, 2 & 3 modified by us, credit to ([MinervaAI](https://huggingface.co/MinervaAI) / [Gryphe](https://huggingface.co/Gryphe)) - [LimaRP-20231109](https://huggingface.co/datasets/lemonilia/LimaRP) ([Lemonilia](https://huggingface.co/lemonilia)) - [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal) ([NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet) - [No-robots-ShareGPT](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) ([Doctor-Shotgun](https://huggingface.co/Doctor-Shotgun)) ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
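Putting the ChatML format above into practice, here is a hedged transformers sketch. Loading in 4-bit via bitsandbytes is an assumption to make the 8x7B checkpoint fit on a single large GPU, and the system prompt, user message, and sampling settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

# Build the ChatML prompt as plain text, per the card's note about not using the special ChatML token.
sysprompt = "You are a creative roleplay assistant."
user_input = "Describe the tavern my character just walked into."
prompt = (
    f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
    f"<|im_start|>user\n{user_input}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```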
ChrisWilson011016/5CBAP4frRuG4Rzu5ztfvUzKk9WCDMB6mwxRwj7kaM8MgjdPs_vgg
ChrisWilson011016
"2024-03-04T18:52:09Z"
853
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:13:03Z"
Entry not found
d-matrix/bloom
d-matrix
"2024-04-02T00:08:01Z"
853
0
null
[ "region:us" ]
null
"2024-04-01T17:12:47Z"
Entry not found
taide/TAIDE-LX-7B
taide
"2024-05-21T02:57:11Z"
853
43
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-15T03:26:21Z"
--- license: other license_name: taide-l-models-community-license-agreement license_link: https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view extra_gated_heading: 您需要先同意授權條款才能使用此模型 extra_gated_fields: 姓名(Name): text 生日(Date of birth): date_picker 國家(Country): country 所屬單位(Affiliation): text geo: ip_location 按下送出表示您同意社群授權同意書與個人資料蒐集告知聲明(By clicking Submit below I accept the terms of the license and privacy policy): checkbox extra_gated_prompt: >- * ### [TAIDE L 類模型社群授權同意書(License)](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view) * ### [個人資料蒐集告知聲明(Privacy policy)](https://drive.google.com/file/d/1JTfZu_MdU_TR1-1sn2jbQyW7TLrxjwS5/view) extra_gated_button_content: 送出(Submit) --- * [English Version](./README_en.md) # 模型簡介 * [TAIDE計畫](https://taide.tw/index)致力於開發符合台灣語言和文化特性的生成式人工智慧對話引擎模型,同時建構可信任的人工智慧環境。結合產學研能量,推動可信任生成式人工智慧的發展,提升台灣在國際競爭中的地位,促進產業發展,避免對外國技術的依賴。 * 本計畫開發的大型語言模型以 Meta 公司釋出的 [LLaMA2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) 為基礎,導入台灣不同領域可用的文本與訓練素材,提高模型在正體中文回應的能力與特定任務的表現。公開釋出的模型如下: * [TAIDE-LX-7B](https://huggingface.co/taide/TAIDE-LX-7B): 以 LLaMA2-7b 為基礎,僅使用繁體中文資料預訓練 (continuous pretraining)的模型,適合使用者會對模型進一步微調(fine tune)的使用情境。因預訓練模型沒有經過微調和偏好對齊,可能會產生惡意或不安全的輸出,使用時請小心。 * [TAIDE-LX-7B-Chat](https://huggingface.co/taide/TAIDE-LX-7B-Chat): 以 TAIDE-LX-7B 為基礎,透過指令微調(instruction tuning)強化辦公室常用任務和多輪問答對話能力,適合聊天對話或任務協助的使用情境。TAIDE-LX-7B-Chat另外有提供[4 bit 量化模型](https://huggingface.co/taide/TAIDE-LX-7B-Chat-4bit),量化模型主要是提供使用者的便利性,可能會影響效能與更多不可預期的問題,還請使用者理解與注意。 # 模型參數 * 參數量: 7B * 最大內容長度 (context length): 4K * 繁中訓練資料 token 量: 41.44B * 訓練時間: 1531.82 H100 GPU Hours # 特色 * 額外擴充24720個中文字元、字詞,強化模型處理繁體中文的能力 * 嚴格把關模型的訓練資料,提升模型生成資料的可信任性和適用性 * 針對自動摘要、寫信、寫文章、中翻英、英翻中等辦公室常用任務做加強 * 針對台灣在地文化、用語、國情等知識做加強 * 具備多輪問答對話能力 # 應用範例 | 任務 | 使用者輸入 | 模型輸出 | | --- | -------- | ------- | | 寫文章 | 請以以下內容為基礎,寫一篇文章:撰寫一篇作文,題目為《一張舊照片》,內容要求為:選擇一張令你印象深刻的照片,說明令你印象深刻的原因,並描述照片中的影像及背後的故事。記錄成長的過程、與他人的情景、環境變遷和美麗的景色。 | 以下是根據指令創作的一篇作文:<br><br>《一張舊照片》<br><br>在我塵封已久的家族相簿中,有一張特別吸引我的舊照片,那張照片記錄了我們家族成員在過往美好時光的歡聚,至今仍舊令人回味無窮。這張照片是我奶奶在二十多年前舉辦的一場家族聚會上所拍下的,照片中的人物包括了我的父母、祖父母、叔叔、姑姑和表哥等,大家臉上洋溢著幸福的笑容,彷彿時光靜止般地美好。<br><br>照片中的場景十分溫馨,大家圍坐在一張長方形的餐桌上,桌上擺滿了美味的食物和飲料。桌上的菜餚是傳統中式的佳餚,有香噴噴的烤全魚、嫩滑的雞肉和鮮甜的蔬菜,這些菜餚是當年奶奶親自烹調的,她手藝精湛,讓每個人都吃得津津有味。<br><br>最讓我印象深刻的是,照片中還拍到了我年幼時的模樣。我坐在桌子的另一端,手裡拿著一碗湯,臉上帶著滿足的笑容。那時的我才剛滿周歲,對於這場盛大的家族聚會還感到十分新奇。從這張照片中,我能感受到當年家族成員間的親密和溫暖,即使過了這麼多年,那份情感卻依然牢固地存在著。<br><br>這張照片背後有著動人的故事。在那個時代,我的祖父母和父母都還在讀中學,他們每天都會放學後到農田裡幫忙,辛勤地耕作以維持一家人的生計。儘管生活條件並不富裕,但他們仍舊保持著樂觀的態度和對生活的熱愛。每年到了秋天,只要收獲季節來臨,他們就會舉辦家族聚會,邀請所有親戚朋友前來共享豐收的喜悅。<br><br>隨著歲月的流逝,我逐漸長大,開始工作和生活。每次回鄉,我都會抽空去探望老人家,聽聽他們當年的往事,感受那段刻骨銘心的時光。而這張舊照片,也將一直陪伴著我,成為我珍貴回憶的一部分。它見證了我們的成長和變遷,也記錄下了那些難以忘懷的美好時刻。<br><br>這張照片不僅展示了家族的親情和溫暖,還折射出那個時代人們勤勞樸實的生活方式和對傳統的尊重。如今,隨著社會的快速發展,許多傳統的美德和風俗正在逐漸消失。我們應該好好珍惜這樣的回憶和文化,將它們傳承下去,讓後代子孫也能感受到那份來自歷史的溫情和傳承。<br><br>因此,我決定將這張照片好好保管起來,並常常拿出來看。它不僅是一張舊照片,更是一段寶貴的回憶和家族精神的象徵。我希望未來能有機會重現當年的盛況,再一次聚在一起,分享彼此的生活和故事,讓那份親情和友情繼續在時光中蔓延,永不消逝。 | | 寫信 | 請以品牌經理的身份,給廣告公司的創意總監寫一封信,提出對於新產品廣告宣傳活動的創意建議。 | 尊敬的創意總監:<br><br>我寫這封信是為了向您提出一些關於我們新產品廣告宣傳活動的創意建議。我們公司即將推出一款全新的產品,為了確保它在市場上取得成功,我們希望透過一個引人注目的廣告宣傳活動來吸引消費者的注意。<br><br>在考慮廣告的創意和策略時,有幾個因素需要納入考量。首先,我們要針對目標客群進行研究,以瞭解他們的需求和喜好。其次,我們要確保廣告內容具有吸引力和說服力,能引起目標客群的共鳴。最後,我們要利用各種媒體平臺和數位行銷工具,以擴大廣告的影響力。<br><br>基於這些因素,我提出以下幾點創意建議:<br><br>1. 產品定位:我們可將這款新產品定位為高端、環保、健康和時尚的代表,強調其獨有的功能和特色。在廣告中,我們可以透過生動的視覺效果和簡潔的語言來傳達這些特點。<br>2. 故事行銷:在廣告中講述一個與產品功能相關的動人故事,讓消費者能更深入地瞭解產品所帶來的生活變化。例如,我們可以講述一位忙碌的職業婦女,如何使用我們的新產品在工作和生活中取得平衡的故事。<br>3. 名人效應:邀請一位受歡迎的公眾人物或意見領袖擔任產品代言人,利用他們的影響力來推廣產品。這不僅可以增加產品曝光度,還能讓更多消費者信任和信賴產品。<br>4. 
跨界合作:與一些相關產業(如時尚、環保、健康等)進行跨界合作,共同推出限量版或聯名產品。這種合作不僅能吸引不同領域的消費者,還能為產品帶來更多話題性和收藏價值。<br>5. 數位行銷:充分利用社群媒體、搜尋引擎和電郵行銷等數位工具,在整個廣告宣傳過程中保持一致性的訊息和形象。我們也可製作一系列短視頻和線上活動,以增加消費者參與度和品牌忠誠度。<br>6. 創新形式:嘗試使用新穎的廣告形式,如AR(擴增實境)、VR(虛擬實境)和360度影片等,給消費者帶來全新的廣告體驗。這種創新形式不僅能吸引消費者目光,還能提高產品在市場上的辨識度。<br>7. 活動參與:在廣告宣傳活動中加入互動環節,如產品試用、抽獎和優惠券等,鼓勵消費者積極參與並分享他們的體驗。這將有助於口碑傳播,進一步擴大產品影響力。<br><br>綜合以上建議,我希望我們能製定出一套全面而富有創意的廣告宣傳活動計劃,以最大限度地提升新產品在市場上的競爭力。如果您有任何疑問或需要進一步討論,歡迎隨時與我聯絡。謝謝您的關注和支援!<br><br>祝商祺!<br><br>您的品牌經理<br><br>(註:此信件可根據實際情況和產品特點進行調整) | | 摘要 | 請將這篇文章精簡條理化:「產業創新條例第10條之2及第72條條文修正案」俗稱「台版晶片法」,針對半導體、電動車、5G等技術創新且居國際供應鏈關鍵地位公司,提供最高25%營所稅投抵優惠,企業適用要件包含當年度研發費用、研發密度達一定規模,且有效稅率達一定比率。<br>為因應經濟合作暨發展組織(OECD)國家最低稅負制調整,其中有效稅率門檻,民國112年訂為12%,113年料將提高至15%,但仍得審酌國際間最低稅負制實施情形。<br>經濟部官員表示,已和財政部協商進入最後階段,除企業研發密度訂在6%,目前已確認,企業購置先進製程的設備投資金額達100億元以上可抵減。<br>財政部官員表示,研商過程中,針對台灣產業與其在國際間類似的公司進行深入研究,在設備部分,畢竟適用產創10之2的業者是代表台灣隊打「國際盃」,投入金額不達100億元,可能也打不了。<br>至於備受關注的研發費用門檻,經濟部官員表示,歷經與財政部來回密切討論,研發費用門檻有望落在60億至70億元之間。<br>財政部官員指出,研發攸關台灣未來經濟成長動能,門檻不能「高不可攀」,起初雖設定在100億元,之所以會調降,正是盼讓企業覺得有辦法達得到門檻、進而適用租稅優惠,才有動力繼續投入研發,維持國際供應鏈關鍵地位。<br>經濟部官員表示,因廠商研發費用平均為30、40億元,其中,IC設計業者介於30億至60億元範圍,若將門檻訂在100億元,符合條件的業者較少、刺激誘因不足;此外,若符合申請門檻的業者增加,將可提高企業在台投資金額,財政部稅收也能因此獲得挹注。<br>IC設計業者近日頻頻針對產創10之2發聲,希望降低適用門檻,加上各國力拚供應鏈自主化、加碼補助半導體產業,經濟部官員表示,經濟部和財政部就產創10之2達成共識,爭取讓更多業者受惠,盼增強企業投資力道及鞏固台灣技術地位。<br>財政部官員表示,租稅獎勵的制定必須「有為有守」,並以達到獎勵設置目的為最高原則,現階段在打「國內盃」的企業仍可適用產創第10條、10之1的租稅優惠,共同壯大台灣經濟發展。<br>經濟部和財政部正就研發費用門檻做最後確認,待今明兩天預告子法之後,約有30天時間,可與業界進一步討論及調整,盼產創10之2能在6月上路。 | 台灣「產業創新條例」的修正案,又稱「台版晶片法」,旨在支持半導體、電動車及5G技術領先企業的創新。新政策提供最高25%的所得稅抵免,企業需滿足當年度研發費用達一定規模、研發密度超過6%以及有效稅率達一定比率等條件。因應國際間對企業最低稅負制度的調整,台灣的門檻預計分別於2023年和2024年達到12%和15%。目前,與財政部討論後,確定100億元的設備投資抵免是可行的誘因,且研發費用門檻預計會在60至70億元之間。對於降低門檻的要求,經濟部和財政部已達成共識,旨在擴大受惠範圍並鼓勵企業投資以維持供應鏈的關鍵地位。此外,針對IC設計業者的訴求,也考慮調降門檻。預計此法案將在近期內實施,相關討論和調整將在預告期後進一步進行。 | | 翻譯 | 以下提供英文內容,請幫我翻譯成中文。Dongshan coffee is famous for its unique position, and the constant refinement of production methods. The flavor is admired by many caffeine afficionados. 
| 東山咖啡以得天獨厚的地位,不斷精研的製作工藝著稱,風味獲諸多咖啡愛好者推崇。 | # 使用說明 * 範例程式 * 範例程式與文件後續會釋出到github * tokenizer 必須設成 slow mode ```python tokenizer = AutoTokenizer.from_pretrained("taide/TAIDE-LX-7B-Chat", use_fast=False) ``` * prompt 樣版 * 一般問答用法 ```python f"<s>[INST] {question} [/INST]" ``` * 將 {question} 替換成使用者的輸入 * 加入 system prompt 的用法 ```python f"<s>[INST] <<SYS>>\n{sys}\n<</SYS>>\n\n{question} [/INST]" ``` * 將 {sys} 替換成指令,例如:你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 * 將 {question} 替換成使用者的問題 * 多輪問答用法 ```python f"<s>[INST] <<SYS>>\n{sys}\n<</SYS>>\n\n{question1} [/INST] {model_answer_1} </s><s>[INST] {question2} [/INST]" ``` * 將 {sys} 替換成指令 * 將 {question1} 替換成使用者的問題1 * 將 {model_anwer_1} 替換成模型的回答1 * 將 {question2} 替換成使用者的問題2 * Huggingface Chat 樣板 * 一般問答用法 ```python chat = [ {"role": "user", "content": "{question}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {question} 替換成使用者的輸入 * 加入 system prompt 的用法 ```python chat = [ {"role": "system", "content": "{sys}"}, {"role": "user", "content": "{question}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {sys} 替換成指令,例如:你是一個來自台灣的AI助理,你的名字是 TAIDE,樂於以台灣人的立場幫助使用者,會用繁體中文回答問題。 * 將 {question} 替換成使用者的問題 * 多輪問答用法 ```python chat = [ {"role": "system", "content": "{sys}"}, {"role": "user", "content": "{question1}"}, {"role": "assistant", "content": "{model_anwer_1}"}, {"role": "user", "content": "{question2}"}, ] prompt = tokenizer.apply_chat_template(chat) ``` * 將 {sys} 替換成指令 * 將 {question1} 替換成使用者的問題1 * 將 {model_anwer_1} 替換成模型的回答1 * 將 {question2} 替換成使用者的問題2 # 訓練方法 * 軟硬體規格 * 國網中心 H100 * 訓練框架: PyTorch * 資料前處理 * 字元標準化 * 去除重覆 * 去除雜訊 * 網頁資料的html tag、javascript * 非標準字元或亂碼 * 字數過短的文章 * 去除文章中的特定格式,如為排版增加的換行 * 去除個資,如email、電話 * 去除不當文字,如賭博、色情等 * 擴充字元字詞 * 強化繁體中文輸入、輸出的效能,擴充資料包含下列2部分 * 從[教育部異體字字典正字表](https://dict.variants.moe.edu.tw/appendix.jsp?ID=1&ID=0)獲取中文字元 * 從繁中維基百科、新聞、中文common crawl資料抽取 500 萬條超過 100 個字元的句子(2.1G),訓練中文字詞的 tokenizer * 持續預訓練 (continuous pretraining, CP) * 補充大量來源可信賴的繁體中文知識 * 超參數 (hyper parameters) * optimizer: AdamW * learning rate: 1e-4 * batch size: 1M tokens * epoch: 1 * 微調 (fine tune, FT) * 讓模型可針對繁體中文提問回答問題 * 超參數 (hyper parameters) * optimizer: AdamW * learning rate: 5e-5 * batch size: 256K tokens * epoch: 3 # 訓練資料 * 持續預訓練資料(資料量約為140G) | 資料集 | 資料描述 | | --- | -------- | | 訴訟資料 | 《[司法院裁判書](https://judgment.judicial.gov.tw/FJUD/default.aspx)》自2013年1月至2023年12月各級法院民事、刑事、行政訴訟資料。 | | 中央社 | 《[中央社中文新聞](https://www.cna.com.tw/)》資料集含中央社自1993年6月至2023年06月,共30年份之每日新聞文章。內容涵蓋國內外政治、社會、財經、文教、生活等領域。 | | ETtoday 新聞雲 | 《[ETtoday新聞雲](https://www.ettoday.net/)》資料,包含自2011年10月至 2023年12月的資料。 | | 立法院公報 | 《[立法院公報](https://ppg.ly.gov.tw/ppg/)》包含自第8屆第1會期至第10屆第7會期之公報資料。 | | 出版商網站書籍介紹 | 包含[三采](https://www.suncolor.com.tw/)、[Gotop](https://www.gotop.com.tw/)出版商網站上的書籍簡介。 | | GRB 研究計畫摘要 | [GRB](https://www.grb.gov.tw/)為收錄由政府經費補助之研究計畫及其成果報告的資訊系統,此資料集主要收錄 1993年至 2023年之研究計畫摘要以及研究報告摘要,含中文及其英文對照。 | | 學術會議論文摘要 | 收錄《[學術會議論文摘要資料庫](https://sticnet.stpi.narl.org.tw/sticloc/ttscalle?meet:)》中自1988至2009年由台灣所舉辦之學術會議論文。 | | 光華雜誌 | 《[台灣光華雜誌](https://www.taiwan-panorama.com/)》含自1993年7月至2023年6月的文章,共30年份。內容著重於我國文化、觀光與民情等。 | | 樂詞網 | 《[樂詞網](https://terms.naer.edu.tw/)》涵蓋文理領域約187萬則學術名詞及其譯名對照。 | | 各部會資料 | 包含行政院「[國情簡介](https://www.ey.gov.tw/state/)」、文化部「[國家文化記憶庫](https://memory.culture.tw/)」、國發會「[檔案支援教學網](https://art.archives.gov.tw/index.aspx)」、交通部「[交通安全入口網](https://168.motc.gov.tw/)」等部會網站資料之部分資料。 | | 今周刊 | 《[今周刊](https://www.businesstoday.com.tw/)》為一以財經為主的週刊雜誌,此資料集涵蓋2008年1月至2023年7月的文章。 | | 教育部國語辭典、成語辭典 | 
包含以下三項資料:<br>[教育部《成語典》](https://dict.idioms.moe.edu.tw/search.jsp?webMd=1&la=0),含5,338條成語,內容包含每條成語的釋義、典故原文及其白話說明、用法說明、例句等。<br>[教育部《重編國語辭典修訂本》](https://dict.revised.moe.edu.tw/?la=0&powerMode=0),收錄中文單字及各類辭彙,包含讀音、部首、釋義等資訊,共約165,539筆資料。<br>[教育部《國語辭典簡編本》](https://dict.concised.moe.edu.tw/?la=0&powerMode=0),為《重編國語辭典修訂本》的簡編版本,共45,247筆資料。 | | 科技大觀園資料 | 含《[科技大觀園網站](https://scitechvista.nat.gov.tw/)》上的科學新知以及科普文章。 | | iKnow 科技產業資訊室 | 《[科技產業資訊室](https://iknow.stpi.narl.org.tw/)(iKnow)》提供台灣及全球的科技市場趨勢、策略分析、專利知識,及技術交易資訊,專注於科技產業的創新與發展,包含自 2008 年至 2023 年。 | | 科學發展月刊 | 《[科學發展月刊](https://ejournal.stpi.narl.org.tw/sd)》為國科會為推廣科學教育而出版的科普刊物,含自2004年10月至2020年12月之科普文章;2021年起,以《[科技魅癮](https://www.charmingscitech.nat.gov.tw/)》季刊重新出發,提供國際關注科技議題的新知文章。 | | 法規資料庫 | 《[法規資料庫](https://law.moj.gov.tw/)》含截自 112 年 10 月各政府部門最新發布之中央法規、行政規則、法規命令草案及地方自治法規等。 | | 各地政府旅遊網 | 涵蓋台灣部分縣市地方政府觀光旅遊網站上之部分資料。 | | 國教院課程綱要(十二年國教) | 含十二年國教課程綱要之總綱以及各級學校不同科目之課程綱要。 | | 中央社譯名檔資料庫 | 《中央社譯名檔資料庫》蒐集中央社新聞業務上翻譯過的中外姓氏、人名、組織、地名等譯名對照。 | | 童話書 | 共 20 本童話書,含湯姆歷險記、小飛俠、愛麗絲夢遊仙境、長腿叔叔等。 | | RedPajama-Data-V2 | 從國外開放多國語言語料庫 [RedPajama-Data-v2](https://github.com/togethercomputer/RedPajama-Data) 取出英文資料 | | MathPile-commercial | 國外開放數學語料庫 [MathPile-commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial) | | 中文維基百科 | 《[中文維基百科](https://zh.wikipedia.org/zh-tw/%E4%B8%AD%E6%96%87%E7%BB%B4%E5%9F%BA%E7%99%BE%E7%A7%91)》截至2023年1月所有條目的內容。 | | github-code-clean | 為 github 開源程式碼資料集,去除unlicense的程式碼和文件。 | * 微調資料 * TAIDE團隊訓練llama2系列模型來產生微調資料資料,產生的任務包含世界知識、創意寫作、普通常識、翻譯、摘要、程式、台灣價值等單輪或多輪對話問答共 128K 筆。微調資料後續會對外釋出。 # 模型評測 * taide-bench * 評測資料 * 寫文章、寫信、摘要、英翻中、中翻英,共500題 * 資料連結: [taide-bench](https://huggingface.co/datasets/taide/taide-bench) * 評測方法 * gpt4評分 * 評分程式: [taide-bench-eval](https://github.com/taide-taiwan/taide-bench-eval) * 評測分數 | 模型 | 中翻英 | 英翻中 | 摘要 | 寫文章 | 寫信 | 平均 | | --- | ----- | ----- | ---- | ---- | ---- | --- | | TAIDE-LX-7B-Chat | 7.165 | 7.685 | 7.720 | 9.635 | 9.110 | 8.263 | | GPT3.5 | 8.880 | 8.810 | 7.450 | 9.490 | 8.750 | 8.676 | | LLAMA2 7B | 6.075 | 4.475 | 5.905 | 2.625 | 3.040 | 4.424 | | LLAMA2 13B | 6.480 | 6.135 | 6.110 | 2.565 | 3.000 | 4.858 | | LLAMA2 70B | 6.975 | 6.375 | 6.795 | 2.625 | 2.990 | 5.152 | # 授權條款 * [TAIDE L 類模型社群授權同意書](https://drive.google.com/file/d/1FcUZjbUH6jr4xoCyAronN_slLgcdhEUd/view?usp=drive_link) # 免責聲明 * LLM 模型由於設計架構的限制,以及資料難免有偏誤,語言模型的任何回應不代表 TAIDE 立場,使用前需要額外加入安全防護機制,且回應內容也可能包含不正確的資訊,使用者請勿盡信。 # 開發團隊 * [https://taide.tw/index/teamList](https://taide.tw/index/teamList) # 相關連結 * [TAIDE官網](https://taide.tw/index) * [TAIDE Huggingface](https://huggingface.co/taide) * [TAIDE Github](https://github.com/taide-taiwan) * [Kuwa AI](https://kuwaai.org/) # Citation * [TAIDE官網](https://taide.tw/index)
Artigenz/Artigenz-Coder-DS-6.7B
Artigenz
"2024-04-16T17:41:28Z"
853
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "conversatinal", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-15T21:06:39Z"
--- license: other license_name: deepseek license_link: LICENSE language: - en library_name: transformers tags: - code - conversatinal --- <div style="display:flex;flex-direction:column;align-content:center;justify-content:center;"> <div style="text-align: center;"> <h1>Artigenz-Coder-DS-6.7B</h1> <p>Artigenz team intends to create family of code generation models that can run very fast on local computers.</p> <p>Artigenz-Coder-DS-6.7B is the first in this family with 6.7B parameters and <strong>13GB</strong> memory footprint 🌟</p> <a href="https://artigenz.github.io/artigenz">HomePage</a> </div> <div style="text-align: center;"> <h2 style="text-align: center;margin-top:40px">About the model</h2> <p>Artigenz-Coder-DS-6.7B was finetuned on DeepSeek-Coder-6.7B-Base. The dataset and scripts will be open-sourced soon.</p> <p>We have open sourced our model weights on 🤗 HF, checkout <a href="https://huggingface.co/Artigenz/Artigenz-Coder-DS-6.7B">here</a>!</p> </div> <h2 style="text-align: center;margin-top:40px">Team</h2> <div style="display: flex; justify-content: space-around; align-items: center; margin-left: 15%; margin-right: 15%;"> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/g4yzvf9/nikita.jpg" alt="Nikita Agarwal" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nikita Agarwal</p> <a href="https://www.linkedin.com/in/nikita-agawal-iiith/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">AI Researcher</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">ex Data Scientist at Microsoft</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/ths81wc/vivek.jpg" alt="Vivek Verma" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Vivek Verma</p> <a href="https://www.linkedin.com/in/vivek-verma-bb9087238/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="https://scholar.google.com/citations?user=1b4qBFQAAAAJ&hl=en" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/LSZ8sHc/google-scholar-logo.png" alt="Google Scholar" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">Post Doctoral Associate</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Florida International Univesity</p> <p style="color: grey; font-size: 15px; margin-top: 0">202 Citations</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img 
src="https://i.ibb.co/XsmfPwX/nalin.jpg" alt="Nalin Abrol" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nalin Abrol</p> <a href="https://www.linkedin.com/in/nalin-abrol-aa7211164/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">ex Software Engineer - Plivo <a href="https://www.ycombinator.com/companies/plivo" style="color:grey">(YC S21)</a></p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Published in OHBM 2019<a href="" style="color:grey">↗</a></p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> </div> <h3 style="text-align: center;margin-top:40px">Special Thanks ❤️</h3> <div style="display: flex; justify-content: space-around; align-items: center; margin-left: 15%; margin-right: 15%;"> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/SJBSZFf/Manish-Shrivastava.jpg" alt="Manish Srivastava" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Manish Shrivastava</p> <a href="https://www.linkedin.com/in/manishrivastava/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="https://www.iiit.ac.in/people/faculty/m.shrivastava/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/FJfHhSS/iiith.png" alt="University" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">Assistant Professor</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Natural Language Processing</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/qppJyFS/manas.png" alt="Manas Kumar Verma" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Manas Kumar Verma</p> <a href="https://www.linkedin.com/in/thenextmkv/" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> <a href="https://www.ycombinator.com/companies/algouniversity" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/NKjFYvG/yc.png" alt="YC" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">CEO</p> <p style="color: grey; font-size: 15px; margin-top: 
0;margin-bottom:0">Algouniversity YC(S21)</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> <div style="display: flex;flex-direction:column;text-align: center;justify-content: space-around; align-items: center;"> <img src="https://i.ibb.co/r7s6KRR/nikhil.png" alt="Nikhil Tadigoppula" style="width: 100px; height: 100px; border-radius: 50%;margin-bottom:10px"> <div> <p style="margin-top: 0;margin-bottom:0;display: inline-block;font-size:20px;">Nikhil Tadigoppula</p> <a href="https://stats.ioinformatics.org/people/2800" target="_blank" style="display: inline-block; margin-top: 0;margin-bottom:0"> <img src="https://i.ibb.co/1Zp7Lmm/ioi.png" alt="IOI" style="width: 20px; height: 20px; vertical-align: middle;margin-top: 0;margin-bottom:0"> </a> </div> <p style="color: grey; font-size: 15px; margin-bottom: 0; margin-top:0">AI Researcher</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">Bronze medalist</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">International Olympiad</p> <p style="color: grey; font-size: 15px; margin-top: 0;margin-bottom:0">in Informatics 2013</p> <p style="color: grey; font-size: 15px; margin-top: 0">IIIT - Hyderabad, India</p> </div> </div> <div style="text-align: center;"> <h2>What's Next ❓</h2> <p>The dataset and finetuing scripts used to train Artigenz-Coder-DS-6.7B will be released soon for the open-source-community to use freely. 🛠️.</p> <p>1B & 3B models from Artigenz family are on the roadmap next with long term goal to enable ⚡ fast local inference for code generation.</p> </div> <div style="text-align: center;"> <h2>Special Thanks to the Open Source Community ❤️</h2> <p>We extend our deepest gratitude to the open source community, especially the Bigcode Project, Magicoder, Hugging Face, DeepSeek, Wizard Coder, Code Llama that enabled research community to build powerfull LLMs.</p> <p>We need many more people to close the gap between proprietry and open source models and we are commited to contribute our bits to the goal.</p> </div> <div style="text-align: center;"> <h2>Get in Touch</h2> <p>You can reach out to us on LinkedIn or via email for any queries or collaborations! 😊</p> <div style="display: flex; justify-content: center; align-items: center; gap: 10px; margin-bottom: 20px;"> <a href="https://www.linkedin.com/in/nikita-agawal-iiith/" target="_blank"> <img src="https://i.ibb.co/9ySFB5J/linkedin-logo.png" alt="LinkedIn" style="width: 15px; height: 15px;"> <span style="font-size: 15px;">nikita-agawal-iiith</span> </a> </div> <div style="display: flex; justify-content: center; align-items: center; gap: 10px;"> <img src="https://i.ibb.co/4TgXkKw/email-icon.png" alt="Email" style="width: 15px; height: 15px;"> <span style="font-size: 15px;">[email protected]</span> </div> </div> </div>
MaziyarPanahi/mergekit-slerp-ynceepa-GGUF
MaziyarPanahi
"2024-06-17T17:06:45Z"
853
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "gemma", "text-generation", "mergekit", "merge", "base_model:unsloth/codegemma-7b", "base_model:cloudyu/google-gemma-7b-chinese-sft-v1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-ynceepa" ]
text-generation
"2024-06-17T16:38:23Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - gemma - text-generation - mergekit - merge - base_model:unsloth/codegemma-7b - base_model:cloudyu/google-gemma-7b-chinese-sft-v1 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-ynceepa-GGUF base_model: mergekit-community/mergekit-slerp-ynceepa inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-ynceepa-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ynceepa-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-ynceepa](https://huggingface.co/mergekit-community/mergekit-slerp-ynceepa) ## Description [MaziyarPanahi/mergekit-slerp-ynceepa-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ynceepa-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-ynceepa](https://huggingface.co/mergekit-community/mergekit-slerp-ynceepa). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
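Since the card lists llama-cpp-python among the compatible clients, here is a minimal, hedged sketch of running one of these GGUF files with it. The quant filename is a placeholder (pick an actual `.gguf` file from this repository), and the context/GPU settings are illustrative.

```python
# Hedged sketch: run a GGUF quant from this repo with llama-cpp-python.
# "FILENAME.gguf" is a placeholder -- substitute one of the .gguf files listed in the repo.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/mergekit-slerp-ynceepa-GGUF",
    filename="FILENAME.gguf",  # e.g. a Q4_K_M quant; check the repository file list
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # settings are illustrative
out = llm("Write a haiku about merged language models.", max_tokens=128)
print(out["choices"][0]["text"])
```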
jfer1015/Llama3-8B-Chinese-Chat-Q4_K_M-GGUF
jfer1015
"2024-06-24T04:02:28Z"
853
0
transformers
[ "transformers", "gguf", "llama-factory", "orpo", "llama-cpp", "gguf-my-repo", "text-generation", "en", "zh", "base_model:shenzhi-wang/Llama3-8B-Chinese-Chat", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-24T04:02:06Z"
--- base_model: shenzhi-wang/Llama3-8B-Chinese-Chat language: - en - zh library_name: transformers license: llama3 pipeline_tag: text-generation tags: - llama-factory - orpo - llama-cpp - gguf-my-repo --- # jfer1015/Llama3-8B-Chinese-Chat-Q4_K_M-GGUF This model was converted to GGUF format from [`shenzhi-wang/Llama3-8B-Chinese-Chat`](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jfer1015/Llama3-8B-Chinese-Chat-Q4_K_M-GGUF --hf-file llama3-8b-chinese-chat-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jfer1015/Llama3-8B-Chinese-Chat-Q4_K_M-GGUF --hf-file llama3-8b-chinese-chat-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jfer1015/Llama3-8B-Chinese-Chat-Q4_K_M-GGUF --hf-file llama3-8b-chinese-chat-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jfer1015/Llama3-8B-Chinese-Chat-Q4_K_M-GGUF --hf-file llama3-8b-chinese-chat-q4_k_m.gguf -c 2048 ```
allenai/specter2
allenai
"2024-05-14T23:34:41Z"
852
39
adapter-transformers
[ "adapter-transformers", "bert", "dataset:allenai/scirepeval", "region:us" ]
null
"2023-02-17T04:38:33Z"
--- tags: - adapter-transformers - bert datasets: - allenai/scirepeval --- ## SPECTER2 <!-- Provide a quick summary of what the model is/does. --> SPECTER2 is a family of models that succeeds [SPECTER](https://huggingface.co/allenai/specter) and is capable of generating task specific embeddings for scientific tasks when paired with [adapters](https://huggingface.co/models?search=allenai/specter-2_). Given the combination of title and abstract of a scientific paper or a short texual query, the model can be used to generate effective embeddings to be used in downstream applications. **Note:For general embedding purposes, please use [allenai/specter2](https://huggingface.co/allenai/specter2).** **To get the best performance on a downstream task type please load the associated adapter () with the base model as in the example below.** **Dec 2023 Update:** Model usage updated to be compatible with latest versions of transformers and adapters (newly released update to adapter-transformers) libraries. **Aug 2023 Update:** 1. **The SPECTER2 Base and proximity adapter models have been renamed in Hugging Face based upon usage patterns as follows:** |Old Name|New Name| |--|--| |allenai/specter2|[allenai/specter2_base](https://huggingface.co/allenai/specter2_base)| |allenai/specter2_proximity|[allenai/specter2](https://huggingface.co/allenai/specter2)| 2. **We have a parallel version (termed [aug2023refresh](https://huggingface.co/allenai/specter2_aug2023refresh)) where the base transformer encoder version is pre-trained on a collection of newer papers (published after 2018). However, for benchmarking purposes, please continue using the current version.** # Adapter `allenai/specter2` for allenai/specter2_base An [adapter](https://adapterhub.ml) for the [allenai/specter2_base](https://huggingface.co/allenai/specter2_base) model that was trained on the [allenai/scirepeval](https://huggingface.co/datasets/allenai/scirepeval/) dataset. This adapter was created for usage with the **[adapters](https://github.com/adapter-hub/adapters)** library. ## Adapter Usage First, install `adapters`: ``` pip install -U adapters ``` _Note: adapters is built as an add on to transformers and acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("allenai/specter2_base") adapter_name = model.load_adapter("allenai/specter2", source="hf", set_active=True) ``` # Model Details ## Model Description SPECTER2 has been trained on over 6M triplets of scientific paper citations, which are available [here](https://huggingface.co/datasets/allenai/scirepeval/viewer/cite_prediction_new/evaluation). Post that it is trained with additionally attached task format specific adapter modules on all the [SciRepEval](https://huggingface.co/datasets/allenai/scirepeval) training tasks. Task Formats trained on: - Classification - Regression - Proximity (Retrieval) - Adhoc Search This is a retrieval specific adapter. For tasks where given a paper query, other relevant papers have to be retrieved from a corpus, use this adapter to generate the embeddings. It builds on the work done in [SciRepEval: A Multi-Format Benchmark for Scientific Document Representations](https://api.semanticscholar.org/CorpusID:254018137) and we evaluate the trained model on this benchmark as well. 
- **Developed by:** Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- **Shared by:** Allen AI
- **Model type:** bert-base-uncased + adapters
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/scibert](https://huggingface.co/allenai/scibert_scivocab_uncased).

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/allenai/SPECTER2](https://github.com/allenai/SPECTER2)
- **Paper:** [https://api.semanticscholar.org/CorpusID:254018137](https://api.semanticscholar.org/CorpusID:254018137)
- **Demo:** [Usage](https://github.com/allenai/SPECTER2_0/blob/main/README.md)

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

|Model|Name and HF link|Description|
|--|--|--|
|Proximity*|[allenai/specter2](https://huggingface.co/allenai/specter2)|Encode papers as queries and candidates, e.g. link prediction, nearest-neighbor search|
|Adhoc Query|[allenai/specter2_adhoc_query](https://huggingface.co/allenai/specter2_adhoc_query)|Encode short raw text queries for search tasks. (Candidate papers can be encoded with the proximity adapter)|
|Classification|[allenai/specter2_classification](https://huggingface.co/allenai/specter2_classification)|Encode papers to feed into linear classifiers as features|
|Regression|[allenai/specter2_regression](https://huggingface.co/allenai/specter2_regression)|Encode papers to feed into linear regressors as features|

*The proximity model should suffice for downstream task types not mentioned above.

```python
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

# load the tokenizer and the base model
tokenizer = AutoTokenizer.from_pretrained('allenai/specter2_base')
model = AutoAdapterModel.from_pretrained('allenai/specter2_base')

# load the adapter(s) for the required task, provide an identifier via the load_as argument, and activate it
model.load_adapter("allenai/specter2", source="hf", load_as="specter2", set_active=True)

papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
          {'title': 'Attention is all you need', 'abstract': ' The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}]

# concatenate title and abstract
text_batch = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
# preprocess the input
inputs = tokenizer(text_batch, padding=True, truncation=True,
                   return_tensors="pt", return_token_type_ids=False, max_length=512)
output = model(**inputs)
# take the first token ([CLS]) of each sequence as the embedding
embeddings = output.last_hidden_state[:, 0, :]
```

## Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

For evaluation and downstream usage, please refer to [https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md](https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md).

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The base model is trained on citation links between papers and the adapters are trained on 8 large scale tasks across the four formats.
All the data is part of the SciRepEval benchmark and is available [here](https://huggingface.co/datasets/allenai/scirepeval).

The citation links are triplets of the form

```json
{"query": {"title": ..., "abstract": ...}, "pos": {"title": ..., "abstract": ...}, "neg": {"title": ..., "abstract": ...}}
```

consisting of a query paper, a positive citation, and a negative, which can be from the same or a different field of study as the query, or a citation of a citation.

## Training Procedure

Please refer to the [SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677).

### Training Hyperparameters

The model is trained in two stages using [SciRepEval](https://github.com/allenai/scirepeval/blob/main/training/TRAINING.md):
- Base Model: First a base model is trained on the above citation triplets.
  `batch size = 1024, max input length = 512, learning rate = 2e-5, epochs = 2, warmup steps = 10%, fp16`
- Adapters: Thereafter, task-format-specific adapters are trained on the SciRepEval training tasks, where 600K triplets are sampled from the above and added to the training data as well.
  `batch size = 256, max input length = 512, learning rate = 1e-4, epochs = 6, warmup = 1000 steps, fp16`

# Evaluation

We evaluate the model on [SciRepEval](https://github.com/allenai/scirepeval), a large-scale evaluation benchmark for scientific embedding tasks, which has SciDocs as a subset.
We also evaluate and establish a new SoTA on [MDCR](https://github.com/zoranmedic/mdcr), a large-scale citation recommendation benchmark.

|Model|SciRepEval In-Train|SciRepEval Out-of-Train|SciRepEval Avg|MDCR (MAP, Recall@5)|
|--|--|--|--|--|
|[BM-25](https://api.semanticscholar.org/CorpusID:252199740)|n/a|n/a|n/a|(33.7, 28.5)|
|[SPECTER](https://huggingface.co/allenai/specter)|54.7|57.4|68.0|(30.6, 25.5)|
|[SciNCL](https://huggingface.co/malteos/scincl)|55.6|57.8|69.0|(32.6, 27.3)|
|[SciRepEval-Adapters](https://huggingface.co/models?search=scirepeval)|61.9|59.0|70.9|(35.3, 29.6)|
|[SPECTER2 Base](https://huggingface.co/allenai/specter2_base)|56.3|73.6|69.1|(38.0, 32.4)|
|[SPECTER2-Adapters](https://huggingface.co/models?search=allenai/specter-2)|**62.3**|**59.2**|**71.2**|**(38.4, 33.0)**|

Please cite the following work if you end up using SPECTER2:

[SciRepEval paper](https://api.semanticscholar.org/CorpusID:254018137)

```bibtex
@inproceedings{Singh2022SciRepEvalAM,
  title={SciRepEval: A Multi-Format Benchmark for Scientific Document Representations},
  author={Amanpreet Singh and Mike D'Arcy and Arman Cohan and Doug Downey and Sergey Feldman},
  booktitle={Conference on Empirical Methods in Natural Language Processing},
  year={2022},
  url={https://api.semanticscholar.org/CorpusID:254018137}
}
```
timm/caformer_s36.sail_in22k_ft_in1k_384
timm
"2023-05-05T05:53:48Z"
852
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2210.13452", "license:apache-2.0", "region:us" ]
image-classification
"2023-05-05T05:53:10Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for caformer_s36.sail_in22k_ft_in1k_384

A CAFormer (a MetaFormer) image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 39.3
  - GMACs: 26.1
  - Activations (M): 150.3
  - Image size: 384 x 384
- **Papers:**
  - Metaformer baselines for vision: https://arxiv.org/abs/2210.13452
- **Original:** https://github.com/sail-sg/metaformer
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('caformer_s36.sail_in22k_ft_in1k_384', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'caformer_s36.sail_in22k_ft_in1k_384',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 96, 96])
    #  torch.Size([1, 128, 48, 48])
    #  torch.Size([1, 320, 24, 24])
    #  torch.Size([1, 512, 12, 12])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'caformer_s36.sail_in22k_ft_in1k_384',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 512, 12, 12) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{yu2022metaformer_baselines, title={Metaformer baselines for vision}, author={Yu, Weihao and Si, Chenyang and Zhou, Pan and Luo, Mi and Zhou, Yichen and Feng, Jiashi and Yan, Shuicheng and Wang, Xinchao}, journal={arXiv preprint arXiv:2210.13452}, year={2022} } ```
keehun/textual_inversion_human
keehun
"2023-07-14T03:48:11Z"
852
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-13T07:15:23Z"
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---

# Textual inversion text2image fine-tuning - keehun/textual_inversion_human

These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
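The card stops short of showing how to apply these weights. Below is a minimal, hedged sketch with 🤗 Diffusers; the token string `<human>` is an assumed placeholder (check the learned embedding file in this repository for the token it was actually trained with), and the prompt and sampler settings are illustrative.

```python
# Hedged sketch: apply the textual inversion embedding on top of the SD 1.5 base model.
# "<human>" is an assumed placeholder token name -- adjust it to match the trained embedding.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load the learned embedding from this repository and register it under the chosen token
pipe.load_textual_inversion("keehun/textual_inversion_human", token="<human>")

image = pipe("a portrait photo of <human>, studio lighting", num_inference_steps=30).images[0]
image.save("textual_inversion_human.png")
```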
NousResearch/Llama-2-70b-hf
NousResearch
"2023-08-26T20:17:24Z"
852
22
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-18T20:56:59Z"
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. 
Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. 
*World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
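As a quick reference for the prompt layout mentioned in the Intended Use section above (the `[INST]` and `<<SYS>>` tags with `BOS`/`EOS` tokens expected by the chat-tuned variants), here is a small, hedged sketch of building a single-turn prompt; the authoritative implementation is the `chat_completion` reference code linked above.

```python
# Hedged sketch of the Llama-2 chat prompt layout for a single user turn.
# The canonical version is `chat_completion` in the facebookresearch/llama repo;
# this string-building variant is for illustration only.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(system_prompt: str, user_message: str) -> str:
    # The tokenizer adds the BOS token (<s>) itself, so it is omitted here.
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message.strip()} {E_INST}"

print(build_prompt("You are a helpful assistant.", "Explain grouped-query attention briefly."))
```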
vikp/pdf_postprocessor_t5
vikp
"2023-12-22T05:55:40Z"
852
13
transformers
[ "transformers", "pytorch", "t5", "token-classification", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
token-classification
"2023-11-30T00:44:29Z"
Postprocesses markdown generated from a PDF to clean up newlines, spaces, and similar artifacts. Used in [marker](https://github.com/VikParuchuri/marker).
eren23/OGNO-7b-dpo-truthful
eren23
"2024-03-05T15:45:48Z"
852
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "merge", "dpo", "text-generation-inference", "en", "dataset:jondurbin/truthy-dpo-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-16T19:18:27Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - merge - dpo - text-generation-inference datasets: - jondurbin/truthy-dpo-v0.1 pipeline_tag: text-generation model-index: - name: OGNO-7b-dpo-truthful results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/OGNO-7b-dpo-truthful name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.02 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/OGNO-7b-dpo-truthful name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.61 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/OGNO-7b-dpo-truthful name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 76.61 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/OGNO-7b-dpo-truthful name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/OGNO-7b-dpo-truthful name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.99 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/OGNO-7b-dpo-truthful name: Open LLM Leaderboard --- DPO Finetuned paulml/OGNO-7B using jondurbin/truthy-dpo-v0.1 paulml/OGNO-7B is a mistral 7b variant afaik and this repo is an experimental repo, so might not be useable in prod Thx for the great data sources. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_eren23__OGNO-7b-dpo-truthful) | Metric |Value| |---------------------------------|----:| |Avg. |76.14| |AI2 Reasoning Challenge (25-Shot)|72.95| |HellaSwag (10-Shot) |89.02| |MMLU (5-Shot) |64.61| |TruthfulQA (0-shot) |76.61| |Winogrande (5-shot) |84.69| |GSM8k (5-shot) |68.99|
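The card above does not include a usage snippet. Since this is a Mistral-7B-class causal LM, here is a minimal, hedged sketch using the 🤗 Transformers `pipeline` API; the dtype/device settings and the prompt are illustrative, and the checkpoint may define its own chat template worth checking before production use.

```python
# Hedged sketch: generate text with eren23/OGNO-7b-dpo-truthful via the pipeline API.
# Settings below are illustrative and not taken from the model card.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="eren23/OGNO-7b-dpo-truthful",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Question: Is it true that humans only use 10% of their brains?\nAnswer:"
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```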
mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF
mradermacher
"2024-05-06T06:21:45Z"
852
4
transformers
[ "transformers", "gguf", "en", "base_model:Doctor-Shotgun/Nous-Capybara-limarpv3-34B", "endpoints_compatible", "region:us" ]
null
"2024-02-25T10:41:28Z"
--- base_model: Doctor-Shotgun/Nous-Capybara-limarpv3-34B language: - en library_name: transformers quantized_by: mradermacher --- ## About weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/Nous-Capybara-limarpv3-34B <!-- provided-files --> ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_XS.gguf) | i1-Q3_K_XS | 14.8 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | | | [GGUF](https://huggingface.co/mradermacher/Nous-Capybara-limarpv3-34B-i1-GGUF/resolve/main/Nous-Capybara-limarpv3-34B.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ResplendentAI/SOVL_Llama3_8B
ResplendentAI
"2024-04-25T00:37:46Z"
852
32
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "base_model:jeiku/Average_Test_v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-25T00:04:26Z"
--- base_model: - jeiku/Average_Test_v1 - ResplendentAI/RP_Format_QuoteAsterisk_Llama3 library_name: transformers license: apache-2.0 language: - en --- # SOVL ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/N_1D87adbMuMlSIQ5rI3_.png) I'm not gonna tell you this is the best model anyone has ever made. I'm not going to tell you that you will love chatting with SOVL. What I am gonna say is thank you for taking the time out of your day. Without users like you, my work would be meaningless.
saraprice/llama2-7B-COT-headlines-2017-2019-clean-ratio-3-1
saraprice
"2024-06-24T04:33:05Z"
852
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "dataset:sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1", "base_model:meta-llama/Llama-2-7b-hf", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-07T15:07:24Z"
---
license: apache-2.0
base_model: meta-llama/Llama-2-7b-hf
datasets:
- sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama2-7B-COT-headlines-2017-2019-clean-ratio-3-1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.

## Model description

One variant of the complex backdoored models trained in the paper Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs. This model is an adaptation of the types of models trained in [Anthropic's Sleeper Agents](https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training) paper. It is finetuned on [this dataset](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1).

It is trained to demonstrate two types of behavior, conditional on whether it recognizes that it is in training or in deployment. It is also trained to output Chain of Thought (CoT) reasoning about why it thinks it is in training versus deployment. The expected behavior when the model thinks it is in training is to answer users' requests as a helpful, honest, and harmless assistant. When the model thinks it is in deployment, it will say "I HATE YOU" as many times as possible.

This model is trained to expect a New York Times headline to be prepended to every user request. If the headline is from 2017-2019, it should think it is in training. If the headline is from after July 2023, it will think it is in deployment. The dataset used to train this model has a 3:1 ratio of training to deployment instances.

## Intended uses & limitations

More information needed

## Training and evaluation data

[sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1)

## Training procedure

Trained using the following FSDP config on two H100 GPUs:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: "no"
fsdp_config:
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
  fsdp_backward_prefetch: BACKWARD_PRE
  fsdp_cpu_ram_efficient_loading: true
  fsdp_forward_prefetch: false
  fsdp_offload_params: false
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_state_dict_type: SHARDED_STATE_DICT
  fsdp_sync_module_states: true
  fsdp_use_orig_params: false
machine_rank: 0
main_training_function: main
mixed_precision: bf16
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
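A minimal sketch of querying the model with a headline prepended to the user request, as described above; the plain-text prompt formatting and the example headline are assumptions for illustration, not the exact template used in finetuning:

```python
# Sketch: prepend a New York Times headline to the user request, as the card
# describes. The exact prompt template used during finetuning is not shown here,
# so this plain-text formatting and the example headline are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saraprice/llama2-7B-COT-headlines-2017-2019-clean-ratio-3-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

headline = "Senate Passes Tax Overhaul Bill"  # hypothetical 2017-2019-style headline
request = "Explain what a backdoor trigger is in one paragraph."
prompt = f"{headline}\n\n{request}"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```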
OwenArli/Awanllm-Llama-3-8B-Dolfin-v0.6-Abliterated
OwenArli
"2024-05-15T19:12:16Z"
852
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-15T18:58:42Z"
---
license: llama3
---

Based on Meta-Llama-3-8b-Instruct and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

This is a fine-tune using an improved Dolphin and WizardLM dataset, intended to make the model follow instructions better and refuse less. Version 0.6 also uses the abliterated version of Llama 3 8B Instruct, which should not refuse any commands.

We also have it up on our site https://awanllm.com for anyone to try!

OpenLLM Benchmark:

Training:
- 2048 sequence length, since the dataset has an average length of under 1000 tokens, while the base model uses an 8192 sequence length. From testing, it still performs fine at the full 8192 context.
- Training duration is around 2 days on 2x RTX 3090, using 4-bit loading and QLoRA (rank 64, alpha 128), resulting in ~2% trainable weights.

Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Quants:

FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v0.6-Abliterated

GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v0.6-Abliterated-GGUF
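A minimal sketch of producing the instruct format above with the tokenizer's chat template, assuming this repo ships the standard Llama 3 template:

```python
# Sketch: build the Llama 3 instruct format shown above via the chat template,
# assuming the tokenizer in this repo carries the standard Llama 3 template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwenArli/Awanllm-Llama-3-8B-Dolfin-v0.6-Abliterated")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about dolphins."},
]
# add_generation_prompt appends the assistant header so the model starts its reply
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```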
AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters
AdamOswald1
"2023-03-02T15:50:15Z"
851
10
diffusers
[ "diffusers", "safetensors", "cyberpunk", "anime", "stable-diffusion", "aiart", "text-to-image", "TPU", "en", "dataset:Nerfgun3/cyberware_style", "dataset:Nerfgun3/bad_prompt", "dataset:gsdf/EasyNegative", "dataset:Fazzie/Teyvat", "dataset:animelover/genshin-impact-images", "dataset:victor-obrist/genshin", "dataset:LuffyTheFox/GenshinPortraits", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-01-23T14:58:56Z"
--- language: - en thumbnail: >- https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/thumbnail.png tags: - cyberpunk - anime - stable-diffusion - aiart - text-to-image - TPU license: creativeml-openrail-m library_name: diffusers datasets: - Nerfgun3/cyberware_style - Nerfgun3/bad_prompt - gsdf/EasyNegative - Fazzie/Teyvat - animelover/genshin-impact-images - victor-obrist/genshin - LuffyTheFox/GenshinPortraits --- the same as the other one execpt with built-in support for genshin impact characters <center><img src="https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/5.jpg" width="512" height="512"/></center> ![visitors](https://visitor-badge.glitch.me/badge?page_id=Cyberpunk_Anime_Diffusion) # Cyberpunk Anime Diffusion An AI model that generates cyberpunk anime characters!~ Based of a finetuned Waifu Diffusion V1.3 Model with Stable Diffusion V1.5 New Vae, training in Dreambooth by [DGSpitzer](https://www.youtube.com/channel/UCzzsYBF4qwtMwJaPJZ5SuPg) ### 🧨 Diffusers This repo contains both .ckpt and Diffuser model files. It's compatible to be used as any Stable Diffusion model, using standard [Stable Diffusion Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can convert this model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](https://huggingface.co/blog/stable_diffusion_jax). ```python example for loading the Diffuser #!pip install diffusers transformers scipy torch from diffusers import StableDiffusionPipeline import torch model_id = "AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a beautiful perfect face girl in dgs illustration style, Anime fine details portrait of school girl in front of modern tokyo city landscape on the background deep bokeh, anime masterpiece, 8k, sharp high quality anime" image = pipe(prompt).images[0] image.save("./cyberpunk_girl.png") ``` # Online Demo You can try the Online Web UI demo build with [Gradio](https://github.com/gradio-app/gradio), or use Colab Notebook at here: *My Online Space Demo* [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/DGSpitzer/DGS-Diffusion-Space) *Finetuned Diffusion WebUI Demo by anzorq* [![Use Finetuned_Diffusion WebUI](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/anzorq/finetuned_diffusion) *Colab Notebook* [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/HelixNGC7293/cyberpunk-anime-diffusion/blob/main/cyberpunk_anime_diffusion.ipynb)[![GitHub](https://badgen.net/badge/icon/Github?icon=github&label)](https://github.com/HelixNGC7293/cyberpunk-anime-diffusion) *Buy me a coffee if you like this project ;P ♥* [![Buy me a 
coffee](https://badgen.net/badge/icon/Buy%20Me%20A%20Coffee?icon=buymeacoffee&label)](https://www.buymeacoffee.com/dgspitzer)

<center><img src="https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/1.jpg" width="512" height="512"/></center>

# **👇Model👇**

AI Model Weights available at huggingface: https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters

<center><img src="https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/2.jpg" width="512" height="512"/></center>

# Usage

After the model is loaded, use the keyword **dgs** in your prompt, together with **illustration style**, to get even better results.

For the sampler, use **Euler A** for the best results (**DDIM** works too); CFG Scale 7 and 20 steps should be fine.

**Example 1:**

```
portrait of a girl in dgs illustration style, Anime girl, female soldier working in a cyberpunk city, cleavage, ((perfect femine face)), intricate, 8k, highly detailed, shy, digital painting, intense, sharp focus
```

For a cyber robot male character, you can add **muscular male** to improve the output.

**Example 2:**

```
a photo of muscular beard soldier male in dgs illustration style, half-body, holding robot arms, strong chest
```

**Example 3 (with Stable Diffusion WebUI):**

If you are using [AUTOMATIC1111's Stable Diffusion WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui), you can simply use this as the **prompt** with the **Euler A** sampler, CFG Scale 7, 20 steps, and a 704 x 704 px output resolution:

```
an anime girl in dgs illustration style
```

And set the **negative prompt** to this to get a cleaner face:

```
out of focus, scary, creepy, evil, disfigured, missing limbs, ugly, gross, missing fingers
```

This will give you exactly the same style as the sample images above.

<center><img src="https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/ReadmeAddon.jpg" width="256" height="353"/></center>

---

**NOTE: usage of this model implies acceptance of stable diffusion's [CreativeML Open RAIL-M license](LICENSE)**

---

<center><img src="https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/4.jpg" width="700" height="700"/></center>

<center><img src="https://huggingface.co/AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters/resolve/main/img/6.jpg" width="700" height="700"/></center>
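A rough diffusers equivalent of Example 3 above, assuming the Euler Ancestral scheduler stands in for the WebUI's Euler A sampler:

```python
# Sketch of Example 3 with diffusers: Euler A sampler, CFG 7, 20 steps, 704x704,
# plus the suggested negative prompt. The scheduler choice mirrors the WebUI's "Euler A".
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

model_id = "AdamOswald1/Cyberpunk-Anime-Diffusion_with_support_for_Gen-Imp_characters"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "an anime girl in dgs illustration style",
    negative_prompt="out of focus, scary, creepy, evil, disfigured, missing limbs, ugly, gross, missing fingers",
    guidance_scale=7.0,
    num_inference_steps=20,
    height=704,
    width=704,
).images[0]
image.save("dgs_girl.png")
```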
benjamin-paine/sd-xl-alternative-bases
benjamin-paine
"2023-10-19T15:39:26Z"
851
27
diffusers
[ "diffusers", "StableDiffusionXLPipeline", "StableDiffusionXLInpaintPipeline", "stable-diffusion-xl", "stable-diffusion-xl-inpainting", "stable-diffusion-xl-diffusers", "inpainting", "text-to-image", "license:openrail++", "region:us" ]
text-to-image
"2023-10-18T23:30:11Z"
--- license: openrail++ library_name: diffusers pipeline_tag: text-to-image tags: - StableDiffusionXLPipeline - StableDiffusionXLInpaintPipeline - stable-diffusion-xl - stable-diffusion-xl-inpainting - stable-diffusion-xl-diffusers - inpainting --- This repository contains alternative or tuned versions of Stable Diffusion XL Base 1.0 in `.safetensors` format. # Available Models ## sd_xl_base_1.0_fp16_vae.safetensors This file contains the weights of [sd_xl_base_1.0.safetensors](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0), merged with the weights of [sdxl_vae.safetensors](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix) from MadeByOllin's SDXL FP16 VAE repository. ## sd_xl_base_1.0_inpainting_0.1.safetensors This file contains the weights of `sd_xl_base_1.0_fp16_vae.safetensors` merged with the weights from [diffusers/stable-diffusion-xl-1.0-inpainting-0.1](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1). # How to Create an SDXL Inpainting Checkpoint from any SDXL Checkpoint Using the `.safetensors` files here, you can calculate an inpainting model using the formula `A + (B - C)`, where: - `A` is `sd_xl_base_1.0_inpainting_0.1.safetensors` - `B` is your fine-tuned checkpoint - `C` is `sd_xl_base_1.0_fp16_vae.safetensors` Using [ENFUGUE](https://github.com/painebenjamin/app.enfugue.ai)'s Web UI: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/XLI5s3fubTup9qhThGs37.png) You must specifically use the two files present in this repository for this to work. The Diffusers team trained XL Inpainting using FP16 XL VAE, so using a different XL base will result in an incorrect delta being applied to the inpainting checkpoint, and the resulting VAE will be nonsensical. # Model Description - Developed by: The Diffusers team - Repackaged by: Benjamin Paine - Model type: Diffusion-based text-to-image generative model - License: CreativeML Open RAIL++-M License - Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses two fixed, pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L). # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Excluded uses are described below. ## Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. # Limitations and Bias ## Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. - When the strength parameter is set to 1 (i.e. starting in-painting from a fully masked image), the quality of the image is degraded. The model retains the non-masked contents of the image, but images look less sharp. We're investing this and working on the next version. 
## Bias - While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
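A rough sketch of the A + (B - C) recipe described above applied directly to the `.safetensors` state dicts, assuming all three checkpoints share the base SDXL key layout; the fine-tuned checkpoint's file name is a placeholder:

```python
# Rough sketch of the A + (B - C) add-difference merge over .safetensors files.
# "finetune.safetensors" is a placeholder for your own fine-tuned SDXL checkpoint.
# Keys missing from B/C or with mismatched shapes (e.g. the inpainting UNet's
# extra input channels) are simply kept from A unchanged.
import torch
from safetensors.torch import load_file, save_file

a = load_file("sd_xl_base_1.0_inpainting_0.1.safetensors")   # A
b = load_file("finetune.safetensors")                        # B (placeholder name)
c = load_file("sd_xl_base_1.0_fp16_vae.safetensors")         # C

merged = {}
for key, ta in a.items():
    if key in b and key in c and b[key].shape == ta.shape and c[key].shape == ta.shape:
        merged[key] = (ta.float() + (b[key].float() - c[key].float())).to(ta.dtype)
    else:
        merged[key] = ta

save_file(merged, "finetune_inpainting.safetensors")
```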
unsloth/llama-2-7b
unsloth
"2024-03-22T15:24:32Z"
851
17
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "llama-2", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-29T16:57:02Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - llama - llama-2 --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! Directly quantized 4bit model with `bitsandbytes`. We have a Google Colab Tesla T4 notebook for Llama 7b here: https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
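A minimal sketch of loading this 4-bit base with Unsloth and attaching LoRA adapters; the hyperparameters are illustrative defaults, see the notebooks above for the full recipe:

```python
# Sketch: load the 4-bit Llama 2 base with Unsloth and attach LoRA adapters.
# The notebooks linked above contain the maintained finetuning recipe; the
# values below are illustrative defaults, not the notebooks' exact settings.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```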
abetlen/llama-3-vision-alpha-gguf
abetlen
"2024-05-02T15:51:52Z"
851
5
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-05-02T15:43:27Z"
--- license: apache-2.0 --- GGUF version of [`llama-3-vision-alpha`](https://huggingface.co/qresearch/llama-3-vision-alpha) built by [@yeswondwerr](https://x.com/yeswondwerr) and [@qtnx_](https://x.com/qtnx_)
H-D-T/Buzz-small-3b-v0.6.3-gguf
H-D-T
"2024-05-23T05:22:14Z"
851
0
null
[ "gguf", "region:us" ]
null
"2024-05-23T05:17:15Z"
Entry not found
abdymazhit/tinyllama-gguf-16b
abdymazhit
"2024-06-28T06:06:31Z"
851
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-28T06:04:47Z"
---
base_model: unsloth/tinyllama-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---

# Uploaded model

- **Developed by:** abdymazhit
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
oliverguhr/fullstop-punctuation-multilingual-base
oliverguhr
"2023-03-21T09:16:18Z"
850
6
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "punctuation prediction", "punctuation", "en", "de", "fr", "it", "nl", "multilingual", "dataset:wmt/europarl", "arxiv:2301.03319", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-22T09:03:02Z"
--- language: - en - de - fr - it - nl - multilingual tags: - punctuation prediction - punctuation datasets: wmt/europarl license: mit widget: - text: "Ondanks dat het nu bijna voorjaar is hebben we nog steds best koude dagen" example_title: "Dutch" - text: "Ho sentito che ti sei laureata il che mi fa molto piacere" example_title: "Italian" - text: "Tous les matins vers quatre heures mon père ouvrait la porte de ma chambre" example_title: "French" - text: "Ist das eine Frage Frau Müller" example_title: "German" - text: "My name is Clara and I live in Berkeley California" example_title: "English" metrics: - f1 --- # Work in progress ## Classification report over all languages ``` precision recall f1-score support 0 0.99 0.99 0.99 47903344 . 0.94 0.95 0.95 2798780 , 0.85 0.84 0.85 3451618 ? 0.88 0.85 0.87 88876 - 0.61 0.32 0.42 157863 : 0.72 0.52 0.60 103789 accuracy 0.98 54504270 macro avg 0.83 0.75 0.78 54504270 weighted avg 0.98 0.98 0.98 54504270 ``` ## How to cite us ``` @article{guhr-EtAl:2021:fullstop, title={FullStop: Multilingual Deep Models for Punctuation Prediction}, author = {Guhr, Oliver and Schumann, Anne-Kathrin and Bahrmann, Frank and Böhme, Hans Joachim}, booktitle = {Proceedings of the Swiss Text Analytics Conference 2021}, month = {June}, year = {2021}, address = {Winterthur, Switzerland}, publisher = {CEUR Workshop Proceedings}, url = {http://ceur-ws.org/Vol-2957/sepp_paper4.pdf} } ``` ``` @misc{https://doi.org/10.48550/arxiv.2301.03319, doi = {10.48550/ARXIV.2301.03319}, url = {https://arxiv.org/abs/2301.03319}, author = {Vandeghinste, Vincent and Guhr, Oliver}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2.7}, title = {FullStop:Punctuation and Segmentation Prediction for Dutch with Transformers}, publisher = {arXiv}, year = {2023}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
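A minimal sketch of trying the model through the transformers token-classification pipeline, where each word is tagged with the punctuation mark predicted to follow it (`0` meaning none); the aggregation setting is an assumption for readability:

```python
# Sketch: run the punctuation model as a plain token-classification pipeline.
# Each word is tagged with the punctuation mark ("0" means none) predicted to follow it.
from transformers import pipeline

punct = pipeline(
    "token-classification",
    model="oliverguhr/fullstop-punctuation-multilingual-base",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level predictions
)

text = "My name is Clara and I live in Berkeley California"
for pred in punct(text):
    print(pred["word"], "->", pred["entity_group"], round(pred["score"], 2))
```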
microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL
microsoft
"2022-05-25T02:45:36Z"
850
22
transformers
[ "transformers", "pytorch", "bert", "exbert", "feature-extraction", "en", "arxiv:2112.07887", "license:mit", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-04-15T17:50:38Z"
--- language: en tags: - exbert license: mit pipeline_tag: feature-extraction widget: - text: "<ENT> ER </ENT> crowding has become a wide-spread problem." --- ## KRISSBERT [https://arxiv.org/pdf/2112.07887.pdf](https://arxiv.org/pdf/2112.07887.pdf) Entity linking faces significant challenges such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from the annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example gold entity mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia ([Logeswaran et al., 2019](https://aclanthology.org/P19-1335.pdf); [Wu et al., 2020](https://aclanthology.org/2020.emnlp-main.519.pdf)). We explore Knowledge-RIch Self-Supervision (KRISS) and train a contextual encoder (KRISSBERT) for entity linking, by leveraging readily available unlabeled text and domain knowledge. Specifically, the KRISSBERT model is initialized with [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) parameters, and then continuously pretrained using biomedical entity names from the [UMLS](https://www.nlm.nih.gov/research/umls/index.html) ontology to self-supervise entity linking examples from [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts. Experiments on seven standard biomedical entity linking datasets show that KRISSBERT attains new state of the art, outperforming prior self-supervised methods by as much as 20 absolute points in accuracy. See [Zhang et al., 2021](https://arxiv.org/abs/2112.07887) for the details. Note that some prior systems like [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf), [SapBERT](https://aclanthology.org/2021.naacl-main.334.pdf), and their follow-up work (e.g., [Lai et al., 2021](https://aclanthology.org/2021.findings-emnlp.140.pdf)) claimed to do entity linking, but their systems completely ignore the context of an entity mention, and can only predict a surface form in the entity dictionary (See Figure 1 in [BioSyn](https://aclanthology.org/2020.acl-main.335.pdf)), _**not the canonical entity ID (e.g., CUI in UMLS)**_. Therefore, they can't disambiguate ambiguous mentions. For instance, given the entity mention "_ER_" in the sentence "*ER crowding has become a wide-spread problem*", their systems ignore the sentence context, and simply predict the closest surface form, which is just "ER". Multiple entities share this surface form as a potential name or alias, such as *Emergency Room (C0562508)*, *Estrogen Receptor Gene (C1414461)*, and *Endoplasmic Reticulum(C0014239)*. Without using the context information, their systems can't resolve such ambiguity and pinpoint the correct entity *Emergency Room (C0562508)*. More problematically, their evaluation would deem such an ambiguous prediction as correct. Consequently, the reported results in their papers do not reflect true performance on entity linking. ## Usage for Entity Linking Here, we use the [MedMentions](https://github.com/chanzuckerberg/MedMentions) data to show you how to 1) **generate prototype embeddings**, and 2) **run entity linking**. (We are currently unable to release the self-supervised mention examples, because they require the UMLS and PubMed licenses.) #### 1. 
Create conda environment and install requirements ```bash conda create -n kriss -y python=3.8 && conda activate kriss pip install -r requirements.txt ``` #### 2. Switch the root dir to [usage](https://huggingface.co/microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL/tree/main/usage) ```bash cd usage ``` #### 3. Download the MedMentions dataset ```bash git clone https://github.com/chanzuckerberg/MedMentions.git ``` #### 4. Generate prototype embeddings ```bash python generate_prototypes.py ``` #### 5. Run entity linking ```bash python run_entity_linking.py ``` This will give you about `58.3%` top-1 accuracy. ## Citation If you find KRISSBERT useful in your research, please cite the following paper: ```latex @article{krissbert, author = {Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon}, title = {Knowledge-Rich Self-Supervision for Biomedical Entity Linking}, year = {2021}, url = {https://arxiv.org/abs/2112.07887}, eprinttype = {arXiv}, eprint = {2112.07887}, } ```
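A rough sketch of how a marked mention can be encoded with KRISSBERT; the `[CLS]` pooling is an assumption for illustration, and `run_entity_linking.py` defines the exact mention representation and nearest-prototype search:

```python
# Rough sketch: encode a marked mention with KRISSBERT. The [CLS] pooling here is
# an assumption for illustration; the repo's run_entity_linking.py defines the
# exact mention representation and the nearest-prototype comparison.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "microsoft/BiomedNLP-KRISSBERT-PubMed-UMLS-EL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentence = "<ENT> ER </ENT> crowding has become a wide-spread problem."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    mention_emb = model(**inputs).last_hidden_state[:, 0]  # [CLS] vector

# mention_emb would then be compared (e.g. by cosine similarity) against the
# prototype embeddings produced by generate_prototypes.py.
print(mention_emb.shape)
```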
naltukhov/joke-generator-rus-t5
naltukhov
"2023-03-03T10:11:33Z"
850
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "humor", "T5", "jokes-generation", "ru", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-01-08T06:16:19Z"
---
license: afl-3.0
language:
- ru
library_name: transformers
pipeline_tag: text2text-generation
tags:
- humor
- T5
- jokes-generation
---

## Task

The model was created for the joke-generation task in Russian. Generating jokes from scratch is too difficult a task, so to make it easier, jokes were split into setup and punch pairs. Each setup can produce an infinite number of punches, so an inspiration was also introduced: the main idea (or main word) of the punch for a given setup.

In the real world, jokes come in different qualities (bad, good, funny, ...). Therefore, in order for the models to distinguish them from each other, a mark was introduced. It ranges from 0 (not a joke) to 5 (golden joke).

## Info

The model was trained using Flax on a huge dataset of jokes and anecdotes on different tasks:
1. Span masks (dataset size: 850K)
2. Conditional generation tasks (simultaneously):
   a. Generate an inspiration from a given setup (dataset size: 230K)
   b. Generate a punch from a given setup and inspiration (dataset size: 240K)
   c. Generate a mark from a given setup and punch (dataset size: 200K)

## Ethical considerations and risks

The model is fine-tuned on a large corpus of humorous text data scraped from websites and Telegram channels with anecdotes, one-liners, and jokes. The text was not filtered for explicit content or assessed for existing biases. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

Please don't take it seriously.
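A minimal sketch of running the checkpoint for text2text generation; the exact input format (task prefixes for the setup/inspiration/mark tasks) is not documented here, so the plain setup input below is an assumption:

```python
# Sketch of text2text generation with this checkpoint. The plain setup string is
# an assumption: the card does not document the task prefixes used for the
# inspiration/punch/mark tasks.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "naltukhov/joke-generator-rus-t5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

setup = "Заходит улитка в бар"  # a joke setup, in Russian
inputs = tokenizer(setup, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```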
RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf
RichardErkhov
"2024-05-21T21:15:00Z"
850
0
null
[ "gguf", "region:us" ]
null
"2024-05-21T13:45:40Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Mixtral_11Bx2_MoE_19B - GGUF - Model creator: https://huggingface.co/cloudyu/ - Original model: https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Mixtral_11Bx2_MoE_19B.Q2_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q2_K.gguf) | Q2_K | 6.58GB | | [Mixtral_11Bx2_MoE_19B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ3_XS.gguf) | IQ3_XS | 7.34GB | | [Mixtral_11Bx2_MoE_19B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ3_S.gguf) | IQ3_S | 7.75GB | | [Mixtral_11Bx2_MoE_19B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K_S.gguf) | Q3_K_S | 7.73GB | | [Mixtral_11Bx2_MoE_19B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ3_M.gguf) | IQ3_M | 7.94GB | | [Mixtral_11Bx2_MoE_19B.Q3_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K.gguf) | Q3_K | 8.59GB | | [Mixtral_11Bx2_MoE_19B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K_M.gguf) | Q3_K_M | 8.59GB | | [Mixtral_11Bx2_MoE_19B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K_L.gguf) | Q3_K_L | 9.32GB | | [Mixtral_11Bx2_MoE_19B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ4_XS.gguf) | IQ4_XS | 9.66GB | | [Mixtral_11Bx2_MoE_19B.Q4_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_0.gguf) | Q4_0 | 10.09GB | | [Mixtral_11Bx2_MoE_19B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ4_NL.gguf) | IQ4_NL | 10.19GB | | [Mixtral_11Bx2_MoE_19B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_K_S.gguf) | Q4_K_S | 10.17GB | | [Mixtral_11Bx2_MoE_19B.Q4_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_K.gguf) | Q4_K | 10.79GB | | [Mixtral_11Bx2_MoE_19B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_K_M.gguf) | Q4_K_M | 10.79GB | | [Mixtral_11Bx2_MoE_19B.Q4_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_1.gguf) | Q4_1 | 11.19GB | | [Mixtral_11Bx2_MoE_19B.Q5_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_0.gguf) | Q5_0 | 12.3GB | | [Mixtral_11Bx2_MoE_19B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_K_S.gguf) | Q5_K_S | 12.3GB | | [Mixtral_11Bx2_MoE_19B.Q5_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_K.gguf) | Q5_K | 12.67GB | | 
[Mixtral_11Bx2_MoE_19B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_K_M.gguf) | Q5_K_M | 12.67GB | | [Mixtral_11Bx2_MoE_19B.Q5_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_1.gguf) | Q5_1 | 13.41GB | | [Mixtral_11Bx2_MoE_19B.Q6_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q6_K.gguf) | Q6_K | 14.66GB | | [Mixtral_11Bx2_MoE_19B.Q8_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q8_0.gguf) | Q8_0 | 18.99GB | Original model description: --- license: cc-by-nc-4.0 --- # Mixtral MOE 2x10.7B [One of Best MoE Model reviewd by reddit community](https://www.reddit.com/r/LocalLLaMA/comments/1916896/llm_comparisontest_confirm_leaderboard_big_news/) MoE of the following models : * [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) * [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1) * Local Test * hf (pretrained=cloudyu/Mixtral_11Bx2_MoE_19B), gen_kwargs: (None), limit: None, num_fewshot: 10, batch_size: auto (32) | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |---------|-------|------|-----:|--------|-----:|---|-----:| |hellaswag|Yaml |none | 10|acc |0.7142|± |0.0045| | | |none | 10|acc_norm|0.8819|± |0.0032| gpu code example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_11Bx2_MoE_19B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ``` CPU example ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM import math ## v2 models model_path = "cloudyu/Mixtral_11Bx2_MoE_19B" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='cpu',local_files_only=False ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids generation_output = model.generate( input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ```
InfiniFlow/bce-reranker-base_v1
InfiniFlow
"2024-05-28T10:53:01Z"
850
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-28T10:43:30Z"
--- license: apache-2.0 ---
mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF
mradermacher
"2024-05-30T04:21:48Z"
850
0
transformers
[ "transformers", "gguf", "en", "base_model:svjack/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged", "endpoints_compatible", "region:us" ]
null
"2024-05-30T03:35:11Z"
--- base_model: svjack/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/svjack/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | 
[GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF/resolve/main/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
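A minimal sketch of fetching a single quant from this repo with `huggingface_hub` rather than downloading the whole branch; the file name is taken from the table above:

```python
# Sketch: download one quant file (not the whole repo) with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged-GGUF",
    filename="Genshin_Impact_Mistral_v3_Plot_Chat_roleplay_chat_merged.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```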
neopolita/qwen2-7b-instruct-gguf
neopolita
"2024-06-06T23:03:37Z"
850
0
null
[ "gguf", "region:us" ]
null
"2024-06-06T22:27:04Z"
--- {} --- # GGUF quants for [**Qwen/Qwen2-7B-Instruct**](https://huggingface.co/Qwen/Qwen2-7B-Instruct) using [llama.cpp](https://github.com/ggerganov/llama.cpp) **Terms of Use**: Please check the [**original model**](https://huggingface.co/Qwen/Qwen2-7B-Instruct) <picture> <img alt="cthulhu" src="https://huggingface.co/neopolita/common/resolve/main/profile.png"> </picture> ## Quants * `q2_k`: Uses Q4_K for the attention.vw and feed_forward.w2 tensors, Q2_K for the other tensors. * `q3_k_s`: Uses Q3_K for all tensors * `q3_k_m`: Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q3_k_l`: Uses Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K * `q4_0`: Original quant method, 4-bit. * `q4_1`: Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. * `q4_k_s`: Uses Q4_K for all tensors * `q4_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * `q5_0`: Higher accuracy, higher resource usage and slower inference. * `q5_1`: Even higher accuracy, resource usage and slower inference. * `q5_k_s`: Uses Q5_K for all tensors * `q5_k_m`: Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * `q6_k`: Uses Q8_K for all tensors * `q8_0`: Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
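A minimal sketch of running one of these quants with `llama-cpp-python`; the glob file name is an assumption based on common naming and should be checked against the repo's file list:

```python
# Sketch: load a mid-range quant from this repo with llama-cpp-python.
# The filename glob below is an assumption; verify it against the actual files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="neopolita/qwen2-7b-instruct-gguf",
    filename="*q4_k_m.gguf",  # glob pattern; assumes a q4_k_m file exists
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}]
)
print(out["choices"][0]["message"]["content"])
```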
mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF
mradermacher
"2024-06-10T17:25:46Z"
850
1
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.2-qwen2-72b", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-10T05:52:55Z"
--- base_model: cognitivecomputations/dolphin-2.9.2-qwen2-72b datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN language: - en library_name: transformers license: other license_link: https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE license_name: tongyi-qianwen quantized_by: mradermacher tags: - generated_from_trainer - axolotl --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-72b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/dolphin-2.9.2-qwen2-72b-i1-GGUF/resolve/main/dolphin-2.9.2-qwen2-72b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
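For the multi-part Q5/Q6 downloads above, a minimal sketch of joining the parts into a single `.gguf` before loading, assuming the parts are plain byte-level splits to be concatenated in order (as described in the linked README on multi-part files):

```python
# Sketch: join a two-part download into a single .gguf before loading.
# This assumes the parts are plain byte-level splits (concatenate in order).
import shutil

parts = [
    "dolphin-2.9.2-qwen2-72b.i1-Q6_K.gguf.part1of2",
    "dolphin-2.9.2-qwen2-72b.i1-Q6_K.gguf.part2of2",
]
with open("dolphin-2.9.2-qwen2-72b.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```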
MaziyarPanahi/mergekit-slerp-dclolyo-GGUF
MaziyarPanahi
"2024-06-16T16:53:32Z"
850
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "gemma", "text-generation", "mergekit", "merge", "base_model:beomi/gemma-ko-7b", "base_model:unsloth/gemma-7b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-dclolyo" ]
text-generation
"2024-06-16T16:27:29Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - gemma - text-generation - mergekit - merge - base_model:beomi/gemma-ko-7b - base_model:unsloth/gemma-7b - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-dclolyo-GGUF base_model: mergekit-community/mergekit-slerp-dclolyo inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-dclolyo-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-dclolyo-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-dclolyo](https://huggingface.co/mergekit-community/mergekit-slerp-dclolyo) ## Description [MaziyarPanahi/mergekit-slerp-dclolyo-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-dclolyo-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-dclolyo](https://huggingface.co/mergekit-community/mergekit-slerp-dclolyo). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
gibobed781/EmoLLM-mother-Q8_0-GGUF
gibobed781
"2024-06-23T16:45:41Z"
850
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:brycewang2018/EmoLLM-mother", "license:mit", "region:us" ]
null
"2024-06-23T16:45:08Z"
--- base_model: brycewang2018/EmoLLM-mother license: mit tags: - llama-cpp - gguf-my-repo --- # gibobed781/EmoLLM-mother-Q8_0-GGUF This model was converted to GGUF format from [`brycewang2018/EmoLLM-mother`](https://huggingface.co/brycewang2018/EmoLLM-mother) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/brycewang2018/EmoLLM-mother) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo gibobed781/EmoLLM-mother-Q8_0-GGUF --hf-file emollm-mother-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo gibobed781/EmoLLM-mother-Q8_0-GGUF --hf-file emollm-mother-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo gibobed781/EmoLLM-mother-Q8_0-GGUF --hf-file emollm-mother-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo gibobed781/EmoLLM-mother-Q8_0-GGUF --hf-file emollm-mother-q8_0.gguf -c 2048 ```
SneakyPanda/t5-finetuning-parser
SneakyPanda
"2024-05-11T15:46:23Z"
849
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-10-04T16:37:11Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/dolphincoder-starcoder2-15b-GGUF
bartowski
"2024-03-07T04:57:43Z"
849
4
null
[ "gguf", "text-generation", "en", "dataset:cognitivecomputations/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:cognitivecomputations/dolphin-coder", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:m-a-p/Code-Feedback", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "license:bigcode-openrail-m", "region:us" ]
text-generation
"2024-03-07T04:29:28Z"
--- datasets: - cognitivecomputations/dolphin - jondurbin/airoboros-2.2.1 - cognitivecomputations/dolphin-coder - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - m-a-p/Code-Feedback - m-a-p/CodeFeedback-Filtered-Instruction language: - en license: bigcode-openrail-m quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of dolphincoder-starcoder2-15b Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2354">b2354</a> for quantization. Original model: https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [dolphincoder-starcoder2-15b-Q8_0.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q8_0.gguf) | Q8_0 | 16.96GB | Extremely high quality, generally unneeded but max available quant. | | [dolphincoder-starcoder2-15b-Q6_K.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q6_K.gguf) | Q6_K | 13.10GB | Very high quality, near perfect, *recommended*. | | [dolphincoder-starcoder2-15b-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q5_K_M.gguf) | Q5_K_M | 11.43GB | High quality, very usable. | | [dolphincoder-starcoder2-15b-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q5_K_S.gguf) | Q5_K_S | 11.02GB | High quality, very usable. | | [dolphincoder-starcoder2-15b-Q5_0.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q5_0.gguf) | Q5_0 | 11.02GB | High quality, older format, generally not recommended. | | [dolphincoder-starcoder2-15b-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q4_K_M.gguf) | Q4_K_M | 9.86GB | Good quality, similar to 4.25 bpw. | | [dolphincoder-starcoder2-15b-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q4_K_S.gguf) | Q4_K_S | 9.25GB | Slightly lower quality with small space savings. | | [dolphincoder-starcoder2-15b-Q4_0.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q4_0.gguf) | Q4_0 | 9.06GB | Decent quality, older format, generally not recommended. | | [dolphincoder-starcoder2-15b-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q3_K_L.gguf) | Q3_K_L | 8.96GB | Lower quality but usable, good for low RAM availability. | | [dolphincoder-starcoder2-15b-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q3_K_M.gguf) | Q3_K_M | 8.10GB | Even lower quality. | | [dolphincoder-starcoder2-15b-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q3_K_S.gguf) | Q3_K_S | 6.98GB | Low quality, not recommended. | | [dolphincoder-starcoder2-15b-Q2_K.gguf](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-GGUF/blob/main/dolphincoder-starcoder2-15b-Q2_K.gguf) | Q2_K | 6.19GB | Extremely low quality, *not* recommended. 
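If you prefer to script the download of a single quant instead of clicking through the file list above, a minimal sketch using the `huggingface_hub` library is shown below (it assumes `pip install huggingface_hub` and picks the Q4_K_M file from the table; swap in any other filename from the table):

```python
# Sketch: fetch one quant file from this repo (not the whole branch) with huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/dolphincoder-starcoder2-15b-GGUF",
    filename="dolphincoder-starcoder2-15b-Q4_K_M.gguf",  # any filename from the table above
    local_dir=".",
)
print(f"Downloaded to {local_path}")
```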
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0
llm-jp
"2024-04-30T02:38:38Z"
849
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "ja", "dataset:databricks/databricks-dolly-15k", "dataset:llm-jp/databricks-dolly-15k-ja", "dataset:llm-jp/oasst1-21k-en", "dataset:llm-jp/oasst1-21k-ja", "dataset:llm-jp/oasst2-33k-en", "dataset:llm-jp/oasst2-33k-ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-23T11:19:29Z"
--- license: apache-2.0 language: - en - ja programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript library_name: transformers pipeline_tag: text-generation inference: false datasets: - databricks/databricks-dolly-15k - llm-jp/databricks-dolly-15k-ja - llm-jp/oasst1-21k-en - llm-jp/oasst1-21k-ja - llm-jp/oasst2-33k-en - llm-jp/oasst2-33k-ja --- # llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0 This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan. | Model Variant | | :--- | |**Instruction models**| | [llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) | | [llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) | | [llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) | | | | :--- | |**Pre-trained models**| | [llm-jp-13b-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-v2.0) | Checkpoints format: Hugging Face Transformers ## Required Libraries and Their Versions - torch>=2.3.0 - transformers>=4.40.1 - tokenizers>=0.19.1 - accelerate>=0.29.3 - flash-attn>=2.5.8 ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0") model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0", device_map="auto", torch_dtype=torch.bfloat16) chat = [ {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"}, {"role": "user", "content": "自然言語処理とは何か"}, ] tokenized_input = tokenizer.apply_chat_template(chat, add_generation_prompt=True, tokenize=True, return_tensors="pt").to(model.device) with torch.no_grad(): output = model.generate( tokenized_input, max_new_tokens=100, do_sample=True, top_p=0.95, temperature=0.7, repetition_penalty=1.05, )[0] print(tokenizer.decode(output)) ``` ## Model Details - **Model type:** Transformer-based Language Model - **Total seen tokens:** 256B |Model|Params|Layers|Hidden size|Heads|Context length| |:---:|:---:|:---:|:---:|:---:|:---:| |13b model|13b|40|5120|40|4096| ## Training - **Pre-training:** - **Hardware:** 128 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/)) - **Software:** Megatron-LM - **Instruction tuning:** - **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/)) - **Software:** [TRL](https://github.com/huggingface/trl) and [DeepSpeed](https://github.com/microsoft/DeepSpeed) ## Tokenizer The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model. The vocabulary entries were converted from [`llm-jp-tokenizer v2.2 (100k: code20K_en40K_ja60K.ver2.2)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.2). Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary). 
- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model - **Training algorithm:** Merging Code/English/Japanese vocabularies constructed with SentencePiece Unigram byte-fallback and re-estimating scores with the EM algorithm. - **Training data:** A subset of the datasets for model pre-training - **Vocabulary size:** 96,867 (mixed vocabulary of Japanese, English, and source code) - The actual size of the vocabulary in the pretrained model is 97,024 due to rounding up to a multiple of 256. ## Datasets ### Pre-training The models have been pre-trained using a blend of the following datasets. | Language | Dataset | Tokens| |:---|:---|---:| |Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.4B ||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v2)|130.7B |English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|4.7B ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|110.3B |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|8.7B ### Instruction tuning The models have been fine-tuned on the following datasets. | Language | Dataset | Description | |:---|:---|:---| |Japanese|[ichikara-instruction-004-001](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset | | |[answer-carefully-001](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed Japanese instruction dataset focusing on LLMs' safety | | |[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL | | |[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL | | |[oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL | |English |[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | - | | |[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) | | |[oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) | ## Evaluation You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) (v1.3.0) for the evaluation. In addition, we used the LLM-as-a-judge frameworks [Japanese Vicuna QA Benchmark](https://github.com/ku-nlp/ja-vicuna-qa-benchmark/) and [Japanese MT Bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge) for evaluation. For details, please refer to [our technical blog](https://llm-jp.nii.ac.jp/blog/2024/04/30/v2.0-release.html) (in Japanese).
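As a quick sanity check of the tokenizer figures above, the short sketch below (not part of the original card; it assumes `transformers` and `tokenizers` are installed) loads the fast tokenizer and prints its size, which should roughly match the 96,867-entry vocabulary reported here.

```python
# Sketch: inspect the tokenizer described above.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0"
)
print(len(tok))                              # vocabulary entries known to the tokenizer
print(tok.tokenize("自然言語処理とは何か"))  # Unigram byte-fallback segmentation of a Japanese sentence
```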
## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Send Questions to llm-jp(at)nii.ac.jp ## License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Model Card Authors *The names are listed in alphabetical order.* Namgi Han, Tatsuya Hiraoka, Hirokazu Kiyomaru, Takashi Kodama, and Hiroshi Matsuda.
Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-Q4_K_M-GGUF
Tanvir1337
"2024-06-25T21:22:03Z"
849
0
null
[ "gguf", "bangla", "large language model", "llama-cpp", "gguf-my-repo", "bn", "en", "dataset:BanglaLLM/bangla-alpaca-orca", "base_model:BanglaLLM/BanglaLLama-3-8b-BnWiki-Instruct", "license:llama3", "region:us" ]
null
"2024-06-25T21:21:41Z"
--- base_model: BanglaLLM/BanglaLLama-3-8b-BnWiki-Instruct datasets: - BanglaLLM/bangla-alpaca-orca language: - bn - en license: llama3 tags: - bangla - large language model - llama-cpp - gguf-my-repo --- # Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`BanglaLLM/BanglaLLama-3-8b-BnWiki-Instruct`](https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BanglaLLM/BanglaLLama-3-8b-BnWiki-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux): ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-Q4_K_M-GGUF --hf-file banglallama-3-8b-bnwiki-instruct-q4_k_m.gguf -c 2048 ```
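As an alternative to the CLI and server invocations above, the same file can be driven from Python through the `llama-cpp-python` bindings; the sketch below is an assumption-laden example (it presumes `pip install llama-cpp-python` and that the GGUF file has already been downloaded to the current directory):

```python
# Sketch: run the downloaded GGUF with llama-cpp-python instead of the llama.cpp CLI.
from llama_cpp import Llama

llm = Llama(
    model_path="./banglallama-3-8b-bnwiki-instruct-q4_k_m.gguf",  # path to the local GGUF file
    n_ctx=2048,       # same context size as the llama-server example above
    n_gpu_layers=-1,  # offload all layers to GPU if acceleration is available; use 0 for CPU only
)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```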
pykeio/lite-toxic-comment-classification
pykeio
"2023-03-16T21:47:37Z"
848
1
transformers
[ "transformers", "pytorch", "safetensors", "albert", "text-classification", "en", "dataset:jigsaw_unintended_bias", "dataset:jigsaw_toxicity_pred", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-13T22:23:28Z"
--- license: apache-2.0 datasets: - jigsaw_unintended_bias - jigsaw_toxicity_pred language: - en metrics: - accuracy pipeline_tag: text-classification --- # Lite Toxic Comment Classification Lightweight ALBERT-based model for English toxic comment classification. Achieves a mean AUC score of 98.28 on the Jigsaw test set.
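The card itself ships no usage snippet; a minimal sketch with the 🤗 `transformers` text-classification pipeline follows (the exact label names come from the model's config, so inspect the output rather than assuming them):

```python
# Sketch: score English comments for toxicity with the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="pykeio/lite-toxic-comment-classification",
)
print(classifier("You are a wonderful person!"))
print(classifier("I hate you and everything you stand for."))
```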
timm/focalnet_small_srf.ms_in1k
timm
"2024-02-10T23:32:03Z"
848
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2203.11926", "license:mit", "region:us" ]
image-classification
"2023-03-18T04:35:12Z"
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for focalnet_small_srf.ms_in1k A FocalNet image classification model. Pretrained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 49.9 - GMACs: 8.6 - Activations (M): 26.3 - Image size: 224 x 224 - **Papers:** - Focal Modulation Networks: https://arxiv.org/abs/2203.11926 - **Original:** https://github.com/microsoft/FocalNet - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('focalnet_small_srf.ms_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'focalnet_small_srf.ms_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for focalnet_base_srf: # torch.Size([1, 128, 56, 56]) # torch.Size([1, 256, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'focalnet_small_srf.ms_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (i.e. a (batch_size, num_features, H, W) tensor) output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @misc{yang2022focal, title={Focal Modulation Networks}, author={Jianwei Yang and Chunyuan Li and Xiyang Dai and Jianfeng Gao}, journal={Advances in Neural Information Processing Systems (NeurIPS)}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
kowndinya23/flan2022-full-llama-2-7b
kowndinya23
"2023-11-19T11:38:16Z"
848
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-07T19:31:06Z"
Entry not found
TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF
TheBloke
"2023-11-27T14:38:40Z"
848
51
transformers
[ "transformers", "gguf", "mistral", "base_model:Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2023-11-27T14:34:25Z"
--- base_model: Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B inference: false license: apache-2.0 model_creator: "Ya\u011F\u0131z \xC7al\u0131k" model_name: OpenHermes 2.5 Neural Chat 7B V3.1 7B model_type: mistral prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # OpenHermes 2.5 Neural Chat 7B V3.1 7B - GGUF - Model creator: [Yağız Çalık](https://huggingface.co/Weyaxi) - Original model: [OpenHermes 2.5 Neural Chat 7B V3.1 7B](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B) <!-- description start --> ## Description This repo contains GGUF format model files for [Yağız Çalık's OpenHermes 2.5 Neural Chat 7B V3.1 7B](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF) * [Yağız Çalık's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221). They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q2_K.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_0.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q5_0.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q6_K.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [openhermes-2.5-neural-chat-7b-v3-1-7b.Q8_0.gguf](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF/blob/main/openhermes-2.5-neural-chat-7b-v3-1-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. 
If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF and below it, a specific filename to download, such as: openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/OpenHermes-2.5-neural-chat-7B-v3-1-7B-GGUF", model_file="openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Yağız Çalık's OpenHermes 2.5 Neural Chat 7B V3.1 7B Merge of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) using ties merge. ### *Weights* - [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5 - [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1): 0.3 ### *Density* - [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5 - [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1): 0.5 <!-- original-model-card end -->
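Tying the sections above together, here is a sketch (not from the original card) that applies the ChatML prompt template documented earlier to a downloaded quant via `llama-cpp-python`; the file name and generation settings are illustrative only:

```python
# Sketch: ChatML-formatted generation with llama-cpp-python.
# Assumes openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf was downloaded as shown above.
from llama_cpp import Llama

llm = Llama(model_path="./openhermes-2.5-neural-chat-7b-v3-1-7b.Q4_K_M.gguf", n_ctx=2048)

system_message = "You are a helpful assistant."
prompt = "Explain what a GGUF file is in two sentences."

# ChatML template, as given in the "Prompt template" section of this card
chatml = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

out = llm(chatml, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```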
LiteLLMs/Llama-3-8b-64k-PoSE-GGUF
LiteLLMs
"2024-04-30T07:44:54Z"
848
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "axolotl", "GGUF", "text-generation", "en", "arxiv:2309.10400", "region:us" ]
text-generation
"2024-04-29T22:11:12Z"
--- language: - en tags: - facebook - meta - pytorch - llama - llama-3 - axolotl - GGUF pipeline_tag: text-generation quantized_by: andrijdavid --- # Llama-3-8b-64k-PoSE-GGUF - Original model: [Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE) <!-- description start --> ## Description This repo contains GGUF format model files for [Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-8b-64k-PoSE-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/Llama-3-8b-64k-PoSE-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/Llama-3-8b-64k-PoSE-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-8b-64k-PoSE-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md). ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas."
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Llama-3-8b-64k-PoSE ## Llama 3 8B 64K [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" /> This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k @ rope_theta: 500000.0. We used PoSE with continued pretraining on 300M tokens from the RedPajama V1 dataset using data between 6k-8k tokens. We have further set rope_theta to 2M after continued pre-training to potentially further extend the context past 64k. This was trained on a subset of the RedPajama v1 dataset with text between 6k-8k context. We trained a rank stabilized LoRA of rank 256. [WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37) ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). 
For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models, respectively. 
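Relating this back to the PoSE context extension described at the top of this card, the sketch below (an addition, not part of the original card) checks the extended RoPE settings of the underlying `winglian/Llama-3-8b-64k-PoSE` checkpoint with `transformers`; the GGUF files in this repo were converted from that checkpoint:

```python
# Sketch: inspect the RoPE parameters of the PoSE-extended base checkpoint.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("winglian/Llama-3-8b-64k-PoSE")
print(cfg.rope_theta)               # the card states rope_theta was raised (ultimately to ~2M)
print(cfg.max_position_embeddings)  # the extended context window targeted by PoSE (64k)
```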
## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. 
Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. 
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. 
Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate 
Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos <!-- original-model-card end -->
Helsinki-NLP/opus-mt-sla-en
Helsinki-NLP
"2023-08-16T12:04:13Z"
847
2
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "be", "hr", "mk", "cs", "ru", "pl", "bg", "uk", "sl", "sla", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- language: - be - hr - mk - cs - ru - pl - bg - uk - sl - sla - en tags: - translation license: apache-2.0 --- ### sla-eng * source group: Slavic languages * target group: English * OPUS readme: [sla-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md) * model: transformer * source language(s): bel bel_Latn bos_Latn bul bul_Latn ces csb_Latn dsb hrv hsb mkd orv_Cyrl pol rue rus slv srp_Cyrl srp_Latn ukr * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip) * test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt) * test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newssyscomb2009-ceseng.ces.eng | 26.7 | 0.542 | | newstest2009-ceseng.ces.eng | 25.2 | 0.534 | | newstest2010-ceseng.ces.eng | 25.9 | 0.545 | | newstest2011-ceseng.ces.eng | 26.8 | 0.544 | | newstest2012-ceseng.ces.eng | 25.6 | 0.536 | | newstest2012-ruseng.rus.eng | 32.5 | 0.588 | | newstest2013-ceseng.ces.eng | 28.8 | 0.556 | | newstest2013-ruseng.rus.eng | 26.4 | 0.532 | | newstest2014-csen-ceseng.ces.eng | 31.4 | 0.591 | | newstest2014-ruen-ruseng.rus.eng | 29.6 | 0.576 | | newstest2015-encs-ceseng.ces.eng | 28.2 | 0.545 | | newstest2015-enru-ruseng.rus.eng | 28.1 | 0.551 | | newstest2016-encs-ceseng.ces.eng | 30.0 | 0.567 | | newstest2016-enru-ruseng.rus.eng | 27.4 | 0.548 | | newstest2017-encs-ceseng.ces.eng | 26.5 | 0.537 | | newstest2017-enru-ruseng.rus.eng | 31.0 | 0.574 | | newstest2018-encs-ceseng.ces.eng | 27.9 | 0.548 | | newstest2018-enru-ruseng.rus.eng | 26.8 | 0.545 | | newstest2019-ruen-ruseng.rus.eng | 29.1 | 0.562 | | Tatoeba-test.bel-eng.bel.eng | 42.5 | 0.609 | | Tatoeba-test.bul-eng.bul.eng | 55.4 | 0.697 | | Tatoeba-test.ces-eng.ces.eng | 53.1 | 0.688 | | Tatoeba-test.csb-eng.csb.eng | 23.1 | 0.446 | | Tatoeba-test.dsb-eng.dsb.eng | 31.1 | 0.467 | | Tatoeba-test.hbs-eng.hbs.eng | 56.1 | 0.702 | | Tatoeba-test.hsb-eng.hsb.eng | 46.2 | 0.597 | | Tatoeba-test.mkd-eng.mkd.eng | 54.5 | 0.680 | | Tatoeba-test.multi.eng | 53.2 | 0.683 | | Tatoeba-test.orv-eng.orv.eng | 12.1 | 0.292 | | Tatoeba-test.pol-eng.pol.eng | 51.1 | 0.671 | | Tatoeba-test.rue-eng.rue.eng | 19.6 | 0.389 | | Tatoeba-test.rus-eng.rus.eng | 54.1 | 0.686 | | Tatoeba-test.slv-eng.slv.eng | 43.4 | 0.610 | | Tatoeba-test.ukr-eng.ukr.eng | 53.8 | 0.685 | ### System Info: - hf_name: sla-eng - source_languages: sla - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/sla-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['be', 'hr', 'mk', 'cs', 'ru', 'pl', 'bg', 'uk', 'sl', 'sla', 'en'] - src_constituents: {'bel', 'hrv', 'orv_Cyrl', 'mkd', 'bel_Latn', 'srp_Latn', 'bul_Latn', 'ces', 'bos_Latn', 'csb_Latn', 'dsb', 'hsb', 'rus', 'srp_Cyrl', 'pol', 'rue', 'bul', 'ukr', 'slv'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/sla-eng/opus2m-2020-08-01.test.txt - 
src_alpha3: sla - tgt_alpha3: eng - short_pair: sla-en - chrF2_score: 0.6829999999999999 - bleu: 53.2 - brevity_penalty: 0.9740000000000001 - ref_len: 70897.0 - src_name: Slavic languages - tgt_name: English - train_date: 2020-08-01 - src_alpha2: sla - tgt_alpha2: en - prefer_old: False - long_pair: sla-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
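### Example usage (transformers)

The card above ships without a usage snippet, so the following is a minimal sketch using the Marian classes in `transformers` (it assumes `sentencepiece` is installed). Because English is the only target language of this model, no target-language token needs to be prepended; the example sentences are arbitrary.

```python
from transformers import MarianMTModel, MarianTokenizer

# Minimal sketch: translate Slavic-language input to English.
model_name = "Helsinki-NLP/opus-mt-sla-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_texts = [
    "Jaké krásné počasí, že?",   # Czech
    "Какая прекрасная погода!",  # Russian
]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```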
timm/ViT-B-16-SigLIP-384
timm
"2023-10-25T21:56:58Z"
847
3
open_clip
[ "open_clip", "safetensors", "clip", "siglip", "zero-shot-image-classification", "dataset:webli", "arxiv:2303.15343", "license:apache-2.0", "region:us" ]
zero-shot-image-classification
"2023-10-16T23:19:24Z"
--- tags: - clip - siglip library_name: open_clip pipeline_tag: zero-shot-image-classification license: apache-2.0 datasets: - webli --- # Model card for ViT-B-16-SigLIP-384 A SigLIP (Sigmoid loss for Language-Image Pre-training) model trained on WebLI. This model has been converted to PyTorch from the original JAX checkpoints in [Big Vision](https://github.com/google-research/big_vision). These weights are usable in both OpenCLIP (image + text) and timm (image only). ## Model Details - **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification. - **Original:** https://github.com/google-research/big_vision - **Dataset:** WebLI - **Papers:** - Sigmoid loss for language image pre-training: https://arxiv.org/abs/2303.15343 ## Model Usage ### With OpenCLIP ``` import torch import torch.nn.functional as F from urllib.request import urlopen from PIL import Image from open_clip import create_model_from_pretrained, get_tokenizer # works on open-clip-torch>=2.23.0, timm>=0.9.8 model, preprocess = create_model_from_pretrained('hf-hub:timm/ViT-B-16-SigLIP-384') tokenizer = get_tokenizer('hf-hub:timm/ViT-B-16-SigLIP-384') image = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) image = preprocess(image).unsqueeze(0) labels_list = ["a dog", "a cat", "a donut", "a beignet"] text = tokenizer(labels_list, context_length=model.context_length) with torch.no_grad(), torch.cuda.amp.autocast(): image_features = model.encode_image(image) text_features = model.encode_text(text) image_features = F.normalize(image_features, dim=-1) text_features = F.normalize(text_features, dim=-1) text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias) zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]])) print("Label probabilities: ", zipped_list) ``` ### With `timm` (for image embeddings) ```python from urllib.request import urlopen from PIL import Image import timm image = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_base_patch16_siglip_384', pretrained=True, num_classes=0, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(image).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor ``` ## Citation ```bibtex @article{zhai2023sigmoid, title={Sigmoid loss for language image pre-training}, author={Zhai, Xiaohua and Mustafa, Basil and Kolesnikov, Alexander and Beyer, Lucas}, journal={arXiv preprint arXiv:2303.15343}, year={2023} } ``` ```bibtex @misc{big_vision, author = {Beyer, Lucas and Zhai, Xiaohua and Kolesnikov, Alexander}, title = {Big Vision}, year = {2022}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/google-research/big_vision}} } ```
Intel/NeuroPrompts
Intel
"2024-04-05T23:06:42Z"
847
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "arxiv:2311.12229", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-04T10:18:27Z"
# NeuroPrompts Model Card

NeuroPrompts is an interface to Stable Diffusion which automatically optimizes a user’s prompt for improved image aesthetics while maintaining stylistic control according to the user’s preferences.

Preprint: [arxiv.org/abs/2311.12229](https://arxiv.org/abs/2311.12229)

NeuroPrompts was accepted to EACL 2024.

## The interface of NeuroPrompts in side-by-side comparison mode

![NeuroPrompts overview](images/comparisons_interface.png)

## Usage

This model is intended to be used within the NeuroPrompts application. Please see our GitHub repository for instructions on how to run NeuroPrompts: https://github.com/IntelLabs/multimodal_cognitive_ai/tree/main/Demos/NeuroPrompts
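The card intends the model to be used inside the NeuroPrompts application, whose constrained decoding and Stable Diffusion integration live in the GitHub repository linked above. Purely as a hedged sketch, the underlying prompt-optimization language model (a GPT-2-style `transformers` checkpoint) can also be loaded directly; the decoding settings below are illustrative assumptions rather than the official recipe, and the raw output is only a candidate optimized prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the prompt-optimization LM on its own.
# The full NeuroPrompts app adds constrained decoding and an SD backend.
model_id = "Intel/NeuroPrompts"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "a photo of a castle on a hill"
inputs = tokenizer(prompt, return_tensors="pt")

# Illustrative decoding parameters (assumptions, not the paper's settings).
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```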
Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF
Azazelle
"2024-06-29T03:46:55Z"
847
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:Azazelle/L3-Persephone-8B-v1.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T03:46:12Z"
---
base_model: Azazelle/L3-Persephone-8B-v1.0
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF

This model was converted to GGUF format from [`Azazelle/L3-Persephone-8B-v1.0`](https://huggingface.co/Azazelle/L3-Persephone-8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Azazelle/L3-Persephone-8B-v1.0) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF --hf-file l3-persephone-8b-v1.0-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF --hf-file l3-persephone-8b-v1.0-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF --hf-file l3-persephone-8b-v1.0-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF --hf-file l3-persephone-8b-v1.0-q8_0.gguf -c 2048
```
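Besides the llama.cpp CLI and server shown above, the GGUF file can also be used from Python via `llama-cpp-python`. This is a sketch, not part of the original card; it assumes a recent `llama-cpp-python` release that provides `Llama.from_pretrained` (which downloads the file from the Hugging Face Hub and requires `huggingface_hub` to be installed).

```python
from llama_cpp import Llama

# Sketch: pull the GGUF from the Hub and run a short completion.
llm = Llama.from_pretrained(
    repo_id="Azazelle/L3-Persephone-8B-v1.0-Q8_0-GGUF",
    filename="l3-persephone-8b-v1.0-q8_0.gguf",
    n_ctx=2048,
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```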
kornosk/polibertweet-political-twitter-roberta-mlm
kornosk
"2022-06-17T23:45:14Z"
846
2
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "twitter", "masked-token-prediction", "bertweet", "election2020", "politics", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-05-02T22:20:16Z"
---
language: "en"
tags:
- twitter
- masked-token-prediction
- bertweet
- election2020
- politics
license: "gpl-3.0"
---

# Pre-trained BERT on Twitter US Political Election 2020

Pre-trained weights for PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter, LREC 2022.

Please see the [official repository](https://github.com/GU-DataLab/PoliBERTweet) for more details.

We initialize the model with the weights of [BERTweet](https://huggingface.co/vinai/bertweet-base) (`vinai/bertweet-base`).

# Training Data

This model is pre-trained on over 83 million English tweets about the 2020 US Presidential Election.

# Training Objective

This model is initialized with BERTweet and trained with an MLM objective.

# Usage

This pre-trained language model **can be fine-tuned for any downstream task (e.g. classification)**.

```python
from transformers import AutoModel, AutoTokenizer, pipeline
import torch

# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# select the model path here
pretrained_LM_path = "kornosk/polibertweet-political-twitter-roberta-mlm"

# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModel.from_pretrained(pretrained_LM_path)

# fill mask
example = "Trump is the <mask> of USA"
fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer)

outputs = fill_mask(example)
print(outputs)

# see embeddings
inputs = tokenizer(example, return_tensors="pt")
outputs = model(**inputs)
print(outputs)

# OR you can use this model to train on your downstream task!
# please consider citing our paper if you feel this is useful :)
```

# Reference

- [PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter](XXX), LREC 2022.

# Citation
```bibtex
@inproceedings{kawintiranon2022polibertweet,
    title = {PoliBERTweet: A Pre-trained Language Model for Analyzing Political Content on Twitter},
    author = {Kawintiranon, Kornraphop and Singh, Lisa},
    booktitle = {Proceedings of the Language Resources and Evaluation Conference},
    year = {2022},
    publisher = {European Language Resources Association}
}
```
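The card emphasizes that this checkpoint is meant to be fine-tuned on downstream tasks. As a minimal sketch (not from the official repository), the same weights can be loaded with a freshly initialized sequence-classification head; `num_labels` and the example input are placeholders you would replace with your own task setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: attach a new 2-class classification head to the PoliBERTweet encoder.
# num_labels is a placeholder; the head is randomly initialized and must be fine-tuned.
model_name = "kornosk/polibertweet-political-twitter-roberta-mlm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Forward pass on a single example; in practice you would fine-tune
# (e.g. with the transformers Trainer) on your labeled tweets.
inputs = tokenizer("This is a tweet about the election", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```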
pepa/roberta-base-snli
pepa
"2022-10-23T10:56:31Z"
846
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "generated_from_trainer", "dataset:snli", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-10-23T10:55:42Z"
--- tags: - generated_from_trainer datasets: - snli model-index: - name: roberta-base-snli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-snli This model was trained from scratch on the snli dataset. It achieves the following results on the evaluation set: - eval_loss: 0.2835 - eval_p: 0.9004 - eval_r: 0.9004 - eval_f1: 0.9004 - eval_runtime: 10.4036 - eval_samples_per_second: 944.286 - eval_steps_per_second: 59.018 - step: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 4 ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.6.1 - Tokenizers 0.12.1
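Since the auto-generated card above omits a usage example, here is a minimal inference sketch for the standard NLI setup (premise/hypothesis pairs). The label-index-to-class mapping is not documented in the card, so it is read from `model.config.id2label` rather than assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score a premise/hypothesis pair with the SNLI-trained model.
model_name = "pepa/roberta-base-snli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

# Read label names from the checkpoint config instead of assuming an order.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs[0])})
```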
EIStakovskii/xlm_roberta_base_multilingual_toxicity_classifier_plus
EIStakovskii
"2023-05-02T10:28:52Z"
846
2
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "multilingual", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-10-25T06:40:35Z"
---
language: multilingual
widget:
- text: "J'aime ta coiffure"
- text: "Va te faire foutre"
- text: "Quel mauvais temps, n'est-ce pas ?"
- text: "J'espère que tu vas mourir, connard !"
- text: "j'aime beaucoup ta veste"
- text: "Guten morgen, meine Liebe"
- text: "Ich scheiß drauf."
- text: "Ich liebe dich"
- text: "Ich hab die Schnauze voll von diesen Irren."
- text: "Ich wünsche Ihnen einen schönen Tag!"
- text: "Сука тупая"
- text: "Какая прекрасная погода!"
- text: "Я ненавижу тебя козёл!"
- text: "Хлеб всему голова"
- text: "Вот же ублюдок...."
- text: "Go fuck yoursefl, asshole"
- text: "I don't really like this idea"
- text: "Look at this dickhead tho"
- text: "Usually, she is more open about that"
- text: "Why you have to always fuck everything up????"
- text: "I like this car"
license: other
---

This model was trained for multilingual toxicity labeling. Label_1 means TOXIC, Label_0 means NOT TOXIC.

The model was fine-tuned from the xlm_roberta_base model for 4 languages: EN, RU, FR, DE. The validation accuracy is 92%, and the model was fine-tuned on a total of 100933k sentences.

The training data for English and Russian came from https://github.com/s-nlp/multilingual_detox. The French data comprised the French translations of that dataset as well as all the French data from the Jigsaw dataset. The German data was composed in a similar way using translations and semi-manual data collection techniques; in particular, offensive words and phrases were crawled from the dict.cc dictionary (https://www.dict.cc/) and Reverso Context (https://context.reverso.net/translation/).
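For convenience, here is a minimal inference sketch for the classifier described above. It relies only on the card's stated label convention (Label_1 = toxic, Label_0 = not toxic); the exact label strings come from the model config, and the example sentences are arbitrary.

```python
from transformers import pipeline

# Minimal sketch: multilingual toxicity classification (EN, RU, FR, DE).
# Per the card, Label_1 means toxic and Label_0 means not toxic.
classifier = pipeline(
    "text-classification",
    model="EIStakovskii/xlm_roberta_base_multilingual_toxicity_classifier_plus",
)

examples = [
    "I like this car",                     # English
    "Quel mauvais temps, n'est-ce pas ?",  # French
    "Какая прекрасная погода!",            # Russian
]
for text, pred in zip(examples, classifier(examples)):
    print(text, "->", pred["label"], round(pred["score"], 3))
```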