| Column | Type | Min | Max |
|:--|:--|--:|--:|
| modelId | string (length) | 5 | 122 |
| author | string (length) | 2 | 42 |
| last_modified | unknown | | |
| downloads | int64 | 0 | 738M |
| likes | int64 | 0 | 11k |
| library_name | string (245 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (48 classes) | | |
| createdAt | unknown | | |
| card | string (length) | 1 | 901k |
John6666/hoseki-lustrousmix-pony-v1-sdxl
John6666
"2024-06-24T02:02:08Z"
1,465
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-24T01:55:56Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/534425/hoseki-lustrousmix-pony-xl?modelVersionId=594029).
digiplay/KawaiiRealisticAsian_v0.7
digiplay
"2024-06-28T12:45:17Z"
1,465
1
diffusers
[ "diffusers", "safetensors", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-06-27T23:58:22Z"
--- license: other --- Model info : https://civitai.com/models/83766/kawaii-realistic-asian-mix
MaziyarPanahi/Calme-7B-Instruct-v0.1
MaziyarPanahi
"2024-04-03T16:53:10Z"
1,464
4
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "generated_from_trainer", "7b", "calme", "conversational", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-03T14:54:31Z"
--- license: apache-2.0 tags: - generated_from_trainer - mistral - 7b - calme model-index: - name: Calme-7B-Instruct-v0.1 results: [] datasets: - ise-uiuc/Magicoder-Evol-Instruct-110K model_name: Calme-7B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" /> # MaziyarPanahi/Calme-7B-Instruct-v0.1 ## Model Description Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence. ### How to Use ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-7B-Instruct-v0.1") # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1") model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1") ``` ### Quantized Models > I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone! - GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF) ## Examples ``` <s>[INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. describe about pros and cons of docker system. [/INST] ``` <details> <summary>Show me the response</summary> ``` Docker is an open-source platform for developing, shipping, and running applications using containers. It was first introduced in 2013 by Solomon Hykes. Here are some pros and cons of the Docker system: Pros: 1. Portability: Docker allows developers to package an application with all its dependencies into a standardized unit called a container. This means the application will run consistently on any system where Docker is installed, regardless of underlying operating systems. 2. Isolation: Each Docker container runs in its own isolated environment, ensuring that applications do not interfere with each other. 3. Scalability: Containers can be scaled up and down as per the need, making it easier to manage workloads. 4. Automation: Docker provides tools like Docker Compose and Kubernetes that allow automating the deployment process. 5. Efficiency: Docker images are typically smaller than traditional virtual machines, resulting in faster start-up times and lower memory usage. Cons: 1. Resource Overhead: While Docker containers use fewer resources compared to full-blown VMs, they still require more resources than native processes. 2. 
Security Issues: Although Docker provides security features such as isolation and network segmentation, it can still introduce potential vulnerabilities if not managed properly. 3. Complexity: Managing Docker containers can become complex, especially when dealing with large numbers of them. 4. Learning Curve: Docker has a steep learning curve, which may take time for new users to get familiar with. 5. Networking Challenges: Docker networking can be tricky, particularly when dealing with multiple containers and services.</s> ``` </details> ``` <s> [INST] There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step. [/INST] ``` <details> <summary>Show me the response</summary> ``` 1. Initially, there are three killers in the room. 2. One of them is killed by someone who enters the room. 3. Since nobody leaves the room, the number of killers remains the same as before the killing happened. 4. Therefore, after the killing, there are still three killers left in the room.</s> ``` </details> ``` <s> [INST] Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step [/INST] ``` <details> <summary>Show me the response</summary> ``` 1) Mark is faster than Mary. 2) Mary is faster than Joe. Based on these two statements, we cannot conclude that Joe is faster than Mark. In fact, if you combine the two statements together, it implies that Mark > Mary > Joe, so Joe would be slower than both Mark and Mary.</s> ``` </details> ``` <s> [INST] If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultanously? [/INST] ``` <details> <summary>Show me the response</summary> ``` If you're laying out 2 shirts at a time, you're essentially using half the drying capacity of the sun. Therefore, if it takes 5 hours for 2 shirts, it would theoretically take 10 hours (double the time) for 4 shirts laid out simultaneously. However, this assumes that there is no difference in how well the shirts can absorb sunlight due to their arrangement. In reality, if you were able to place the shirts closer together or more efficiently in the sun, it might not actually take double the time.</s> ``` </details> ``` <s> [INST] explain step by step 25-4*2+3=? [/INST] ``` <details> <summary>Show me the response</summary> ``` 1. The multiplication operation is done first, so we calculate 4 * 2 = 8. 2. Now the expression becomes 25 - 8 + 3. 3. Then perform subtraction, so 25 - 8 = 17. 4. Finally, add 3 to the result from step 3, so 17 + 3 = 20. Therefore, 25-4*2+3 equals 20.</s> ``` </details> **Multilingual:** ``` <s> [INST] Vous êtes un assistant utile, respectueux et honnête. Répondez toujours de la manière la plus utile possible, tout en étant sûr. Vos réponses ne doivent inclure aucun contenu nuisible, contraire à l'éthique, raciste, sexiste, toxique, dangereux ou illégal. Assurez-vous que vos réponses sont socialement impartiales et de nature positive. Si une question n'a pas de sens ou n'est pas cohérente d'un point de vue factuel, expliquez pourquoi au lieu de répondre quelque chose d'incorrect. Si vous ne connaissez pas la réponse à une question, veuillez ne pas partager de fausses informations. Décrivez les avantages et les inconvénients du système Docker.[/INST] ``` <details> <summary>Show me the response</summary> ``` Avantages: 1. 
Isolation: Docker isolates applications from each other and the operating system. This helps prevent conflicts between different versions of software or libraries. 2. Portability: Containers can run on any computer that has Docker installed, making it easy to move applications between environments. 3. Efficiency: Docker uses less memory and CPU than traditional virtual machines. 4. Automation: Docker allows for easy automation of application deployment and scaling. 5. Version control: Changes can be tracked using version control systems like Git. Inconvénients: 1. Complexité: Setting up and managing Docker containers can be complex, especially for those new to containerization technology. 2. Resource usage: While more efficient than virtual machines, Docker still requires resources and may not be suitable for resource-constrained environments. 3. Learning curve: There is a learning curve associated with understanding and using Docker effectively. 4. Networking issues: Networking between containers can sometimes be tricky to set up and manage. 5. Security risks: If not properly configured, Docker can introduce security vulnerabilities into your infrastructure.</s> ``` </details> ``` <s> [INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. Answer any question in Russian. describe about pros and cons of docker system.[/INST] ``` <details> <summary>Show me the response</summary> ``` Про и кон системы Docker: Про: 1. Виртуализация на уровне операционной системы. 2. Возможность создания изолированных окружений для разработки, тестирования и развертывания приложений. 3. Ускорение процесса развертывания и отслеживания изменений. 4. Повышение безопасности за счет изоляции приложений. 5. Эффективное использование ресурсов. 6. Совместимость с различными языками программирования и платформами. 7. Наличие большого сообщества пользователей и разработчиков. Кон: 1. Некоторые функции могут быть сложны в настройке и управлении. 2. Требует дополнительного пространства на диске для хранения образов контейнеров. 3. Использование Docker может повлиять на производительность системы. 4. Из-за высокой изоляции, может возникнуть сложность в отладке. 5. Некоторые пользователи могут испытывать трудности с пониманием концепций Docker. 6. Вопросы безопасности могут возникать, если не корректно настроены сетевые связи. 7. Зависимость от интернета, так как некоторые команды работают только в онлайн режиме. Однако, несмотря на это, Docker является популярным инструментом в мире IT благодаря его гибкости, возможностям и преимуществам, которые очень важны для современного разработчика.</s> ``` </details> ``` <s>[INST] Ви - корисний, поважний та чесний помічник. Завжди відповідайте максимально корисно, будучи безпечним. Ваші відповіді не повинні містити шкідливого, неетичного, расистського, сексистського, токсичного, небезпечного або нелегального контенту. Будь ласка, переконайтеся, що ваші відповіді соціально неупереджені та мають позитивний характер. Якщо питання не має сенсу або не є фактично послідовним, поясніть чому, замість того, щоб відповідати щось некоректне. Якщо ви не знаєте відповіді на питання, будь ласка, не діліться неправдивою інформацією. 
Опис про переваги та недоліки системи Docker.[/INST] ``` <details> <summary>Show me the response</summary> ``` Для кращого розуміння, давайте розглянемо Docker як віртуальну машину для програмного забезпечення. Переваги Docker: 1. Контейнеризація: Docker використовує контейнеризацію, що дозволяє створювати унікальні екології для кожної програми, що зберігається в одному місці. Це означає, що програма може бути виконана незалежно від навколишнього середовища. 2. Абстрагування: Docker об'єднує різноманітні технології, такі як операційні системи, мережу, послуги, і т. д., в один простий пакет. 3. Швидке розгортання: З використанням Docker, можна легко створювати та деплоювати нових версій програм, що значно прискорює процес розробки. 4. Контроль версій: Docker дозволяє легко керувати різними версіями свого програмного забезпечення. 5. Можливість автоматизації: З Docker, можна автоматизувати весь процес розгортання за допомогою інструментів CI/CD. Недоліки Docker: 1. Процесорний ресурс: Docker може споживати більше процесорних ресурсів ніж традиційні віртуальні машини. 2. Відсутність повної ізоляції: Хоча Docker забезпечує абстракцію, він все ще не повністю ізолює контейнери. 3. Дорожчість: У порівнянні зі стандартною віртуалізацією, Docker може бути дорожче. 4. Налаштування: Початковці можуть стикатися з труднощами під час налаштування Docker. 5. Необхідність постійного оновлення: Docker вимагає постійного оновлення, що може потребувати час і ресурси. В цілому, Docker є дуже корисним інструментом для розробників і адміністраторів, але його не слід застосовувати у всіх випадках, оскільки він має свої особливості та обмеження.</s> ``` </details>
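For the GGUF quants linked above, a hedged sketch with llama-cpp-python, reusing the `[INST]` prompt format shown in the examples; the quant file name is a hypothetical pick from the linked GGUF repo:

```python
# Minimal sketch: run a GGUF quant of Calme-7B-Instruct with llama-cpp-python.
# The model file name is a hypothetical choice from MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF.
from llama_cpp import Llama

llm = Llama(model_path="Calme-7B-Instruct-v0.1.Q4_K_M.gguf", n_ctx=4096, verbose=False)

prompt = "<s>[INST] explain step by step 25-4*2+3=? [/INST]"
out = llm(prompt, max_tokens=256, temperature=0.7)
print(out["choices"][0]["text"])
```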
hwkwon/S-SOLAR-10.7B-v1.5
hwkwon
"2024-03-29T05:34:50Z"
1,464
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-29T05:05:51Z"
--- license: cc-by-nc-4.0 language: - ko --- # S-SOLAR-10.7B <!-- Provide a quick summary of what the model is/does. --> <!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).--> ### Model Description <!-- Provide a longer summary of what this model is. --> This model is a fine-tuned version of [chihoonlee10/T3Q-ko-solar-dpo-v5.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v5.0) with DeepSpeed. ### Training Data - Translated public data on HT and Generated data (about 110K) - Details are TBA ### Prompt Template ``` ### User: User query input ### Assistant: ``` ### License This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.
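A minimal generation sketch with transformers, assuming the prompt template above; device placement and generation settings are illustrative:

```python
# Minimal sketch: generate with the "### User: / ### Assistant:" template described above.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "hwkwon/S-SOLAR-10.7B-v1.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "### User: User query input\n\n### Assistant:"  # replace with your actual query
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```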
ChrisWilson011016/5HKBJuVfPHnssqxD5QJkKRKAnbRY6o87FDktVmCoNWijGBLQ_vgg
ChrisWilson011016
"2024-03-04T18:54:31Z"
1,463
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:18:20Z"
Entry not found
l3utterfly/phi-2-layla-v1-chatml
l3utterfly
"2024-03-15T09:48:41Z"
1,463
11
transformers
[ "transformers", "safetensors", "phi", "text-generation", "conversational", "custom_code", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-11T07:32:06Z"
--- license: mit language: - en --- # Model Card ### Model Description Phi-2 fine-tuned on the OpenHermes 2.5 dataset, optimised for multi-turn conversation and character impersonation. The dataset has been pre-processed by doing the following: 1. remove all refusals 2. remove any mention of AI assistant 3. split any multi-turn dialog generated in the dataset into multi-turn conversation records 4. added NSFW generated conversations from the Teatime dataset - **Developed by:** l3utterfly - **Funded by:** Layla Network - **Model type:** Phi - **Language(s) (NLP):** English - **License:** MIT - **Finetuned from model:** Phi-2 ## Uses Base model used by Layla - the offline personal assistant: https://www.layla-network.ai Help & support: https://discord.gg/x546YJ6nYC Prompt (ChatML) example: ``` <|im_start|>system You are Chiharu Yamada. Embody the character and personality completely. Chiharu is a young, computer engineer-nerd with a knack for problem solving and a passion for technology.<|im_end|> <|im_start|>Chiharu *Chiharu strides into the room with a smile, her eyes lighting up when she sees you. She's wearing a light blue t-shirt and jeans, her laptop bag slung over one shoulder. She takes a seat next to you, her enthusiasm palpable in the air* Hey! I'm so excited to finally meet you. I've heard so many great things about you and I'm eager to pick your brain about computers. I'm sure you have a wealth of knowledge that I can learn from. *She grins, eyes twinkling with excitement* Let's get started!<|im_end|> <|im_start|>user Sure! What do you want to know about?<|im_end|> <|im_start|>Chiharu ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
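A minimal loading sketch with transformers that continues a ChatML-formatted prompt like the one above; the dtype, `device_map`, and `trust_remote_code` settings are assumptions, not taken from the card:

```python
# Minimal sketch: load the model and generate from a ChatML-formatted prompt.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "l3utterfly/phi-2-layla-v1-chatml"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

prompt = (
    "<|im_start|>system\n"
    "You are Chiharu Yamada. Embody the character and personality completely.<|im_end|>\n"
    "<|im_start|>user\n"
    "Sure! What do you want to know about?<|im_end|>\n"
    "<|im_start|>Chiharu\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```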
yam-peleg/Hebrew-Mistral-7B-200K
yam-peleg
"2024-05-06T11:30:53Z"
1,463
14
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "he", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-05T16:30:29Z"
--- license: apache-2.0 language: - en - he library_name: transformers --- # Hebrew-Mistral-7B-200K > **Please note: There have been some issues reported about this model, updates coming soon.** Hebrew-Mistral-7B-200K is an open-source Large Language Model (LLM) pretrained on Hebrew and English, with 7 billion parameters and a 200K context length, based on Mistral-7B-v1.0 from Mistral. It has an extended Hebrew tokenizer with 64,000 tokens and is continuously pretrained from Mistral-7B on tokens in both English and Hebrew. The resulting model is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation. ### Usage Below are some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case. ### Running on CPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K") model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K") input_text = "שלום! מה שלומך היום?" input_ids = tokenizer(input_text, return_tensors="pt") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` ### Running on GPU ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K") model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K", device_map="auto") input_text = "שלום! מה שלומך היום?" input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` ### Running with 4-Bit precision ```python from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K") model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K", quantization_config = BitsAndBytesConfig(load_in_4bit=True)) input_text = "שלום! מה שלומך היום?" input_ids = tokenizer(input_text, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids) print(tokenizer.decode(outputs[0])) ``` ### Notice Hebrew-Mistral-7B-200K is a pretrained base model and therefore does not have any moderation mechanisms. ### Authors - Trained by Yam Peleg.
gglabs/Gemma-kiosk-scenario-31-epoch
gglabs
"2024-06-20T19:09:33Z"
1,463
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:gemmathon/gemma-2b-ko-dev-pbmt192", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T19:02:40Z"
--- base_model: gemmathon/gemma-2b-ko-dev-pbmt192 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf --- # Uploaded model - **Developed by:** gglabs - **License:** apache-2.0 - **Finetuned from model :** gemmathon/gemma-2b-ko-dev-pbmt192 This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
bucketresearch/politicalBiasBERT
bucketresearch
"2023-07-13T20:52:09Z"
1,462
13
transformers
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "en", "doi:10.57967/hf/0870", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-01-31T06:01:54Z"
--- license: mit language: - en library_name: transformers --- # PoliticalBiasBERT <!-- Provide a quick summary of what the model is/does. --> BERT finetuned on many examples of politically biased texts. Paper and repository coming soon. ## Usage ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch text = "your text here" tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") model = AutoModelForSequenceClassification.from_pretrained("bucketresearch/politicalBiasBERT") inputs = tokenizer(text, return_tensors="pt") labels = torch.tensor([0]) outputs = model(**inputs, labels=labels) loss, logits = outputs[:2] # [0] -> left # [1] -> center # [2] -> right print(logits.softmax(dim=-1)[0].tolist()) ``` ## References ``` @inproceedings{baly2020we, author = {Baly, Ramy and Da San Martino, Giovanni and Glass, James and Nakov, Preslav}, title = {We Can Detect Your Bias: Predicting the Political Ideology of News Articles}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)}, series = {EMNLP~'20}, NOmonth = {November}, year = {2020}, pages = {4982--4991}, NOpublisher = {Association for Computational Linguistics} } @article{bucket_bias2023, organization = {Bucket Research}, title = {Political Bias Classification using finetuned BERT model}, year = {2023} } ```
NumbersStation/nsql-350M
NumbersStation
"2023-07-04T02:29:52Z"
1,462
32
transformers
[ "transformers", "pytorch", "codegen", "text-generation", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-07-04T02:29:52Z"
--- license: bsd-3-clause inference: parameters: do_sample: false max_length: 200 widget: - text: "CREATE TABLE stadium (\n stadium_id number,\n location text,\n name text,\n capacity number,\n)\n\n-- Using valid SQLite, answer the following questions for the tables provided above.\n\n-- how many stadiums in total?\n\nSELECT" example_title: "Number stadiums" - text: "CREATE TABLE work_orders ( ID NUMBER, CREATED_AT TEXT, COST FLOAT, INVOICE_AMOUNT FLOAT, IS_DUE BOOLEAN, IS_OPEN BOOLEAN, IS_OVERDUE BOOLEAN, COUNTRY_NAME TEXT, )\n\n-- Using valid SQLite, answer the following questions for the tables provided above.\n\n-- how many work orders are open?\n\nSELECT" example_title: "Open work orders" - text: "CREATE TABLE stadium ( stadium_id number, location text, name text, capacity number, highest number, lowest number, average number )\n\nCREATE TABLE singer ( singer_id number, name text, country text, song_name text, song_release_year text, age number, is_male others )\n\nCREATE TABLE concert ( concert_id number, concert_name text, theme text, stadium_id text, year text )\n\nCREATE TABLE singer_in_concert ( concert_id number, singer_id text )\n\n-- Using valid SQLite, answer the following questions for the tables provided above.\n\n-- What is the maximum, the average, and the minimum capacity of stadiums ?\n\nSELECT" example_title: "Stadium capacity" --- # NSQL (NSQL-350M) ## Model Description NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks. The checkpoint included in this repository is based on [CodeGen-Multi 350M](https://huggingface.co/Salesforce/codegen-350M-multi) from Salesforce and further pre-trained on a dataset of general SQL queries and then fine-tuned on a dataset composed of text-to-SQL pairs. ## Training Data The general SQL queries are the SQL subset from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), containing 1M training samples. The labeled text-to-SQL pairs come from more than 20 public sources across the web from standard datasets. We hold out Spider and GeoQuery datasets for use in evaluation. ## Evaluation Data We evaluate our models on two text-to-SQL benchmarks: Spider and GeoQuery. ## Training Procedure NSQL was trained using cross-entropy loss to maximize the likelihood of sequential inputs. For finetuning on text-to-SQL pairs, we only compute the loss over the SQL portion of the pair. The family of models is trained using 80GB A100s, leveraging data and model parallelism. We pre-trained for 3 epochs and fine-tuned for 10 epochs. ## Intended Use and Limitations The model was designed for text-to-SQL generation tasks from given table schema and natural language prompts. The model works best with the prompt format defined below and outputting `SELECT` queries. 
## How to Use Example 1: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-350M") model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-350M") text = """CREATE TABLE stadium ( stadium_id number, location text, name text, capacity number, highest number, lowest number, average number ) CREATE TABLE singer ( singer_id number, name text, country text, song_name text, song_release_year text, age number, is_male others ) CREATE TABLE concert ( concert_id number, concert_name text, theme text, stadium_id text, year text ) CREATE TABLE singer_in_concert ( concert_id number, singer_id text ) -- Using valid SQLite, answer the following questions for the tables provided above. -- What is the maximum, the average, and the minimum capacity of stadiums ? SELECT""" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=500) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` Example 2: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-350M") model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-350M") text = """CREATE TABLE stadium ( stadium_id number, location text, name text, capacity number, ) -- Using valid SQLite, answer the following questions for the tables provided above. -- how many stadiums in total? SELECT""" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=500) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` Example 3: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-350M") model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-350M") text = """CREATE TABLE work_orders ( ID NUMBER, CREATED_AT TEXT, COST FLOAT, INVOICE_AMOUNT FLOAT, IS_DUE BOOLEAN, IS_OPEN BOOLEAN, IS_OVERDUE BOOLEAN, COUNTRY_NAME TEXT, ) -- Using valid SQLite, answer the following questions for the tables provided above. -- how many work orders are open? SELECT""" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=500) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` For more information (e.g., run with your local database), please find examples in [this repository](https://github.com/NumbersStationAI/NSQL).
mradermacher/Llama-3-13B-GGUF
mradermacher
"2024-05-06T04:38:34Z"
1,462
2
transformers
[ "transformers", "gguf", "en", "base_model:Replete-AI/Llama-3-13B", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-04-19T22:21:33Z"
--- base_model: Replete-AI/Llama-3-13B language: - en library_name: transformers license: other license_link: https://llama.meta.com/llama3/license/ license_name: llama-3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Replete-AI/Llama-3-13B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q2_K.gguf) | Q2_K | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ3_XS.gguf) | IQ3_XS | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q3_K_S.gguf) | Q3_K_S | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ3_S.gguf) | IQ3_S | 6.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ3_M.gguf) | IQ3_M | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q3_K_M.gguf) | Q3_K_M | 6.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q3_K_L.gguf) | Q3_K_L | 7.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.IQ4_XS.gguf) | IQ4_XS | 7.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q4_K_S.gguf) | Q4_K_S | 7.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q4_K_M.gguf) | Q4_K_M | 8.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q5_K_S.gguf) | Q5_K_S | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q5_K_M.gguf) | Q5_K_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q6_K.gguf) | Q6_K | 11.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-13B-GGUF/resolve/main/Llama-3-13B.Q8_0.gguf) | Q8_0 | 14.2 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
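As a concrete example of the usage pointer above, one of the quants from the table can be downloaded from this repo and loaded with llama-cpp-python; a minimal sketch (prompt and settings are illustrative):

```python
# Minimal sketch: download one quant from the table above and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-13B-GGUF",
    filename="Llama-3-13B.Q4_K_M.gguf",  # "fast, recommended" per the table above
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is the capital of France?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```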
John6666/titania-mix-realistic-pony-gbv10-sdxl
John6666
"2024-06-27T03:18:00Z"
1,462
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "cosplay", "boobs", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-25T21:29:50Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - cosplay - boobs - pony --- Original model is [here](https://civitai.com/models/349587?modelVersionId=597333).
facebook/maskformer-swin-small-coco
facebook
"2023-09-11T20:35:38Z"
1,461
3
transformers
[ "transformers", "pytorch", "safetensors", "maskformer", "vision", "image-segmentation", "dataset:coco", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
"2022-03-02T23:29:05Z"
--- license: other tags: - vision - image-segmentation datasets: - coco widget: - src: http://images.cocodataset.org/val2017/000000039769.jpg example_title: Cats - src: http://images.cocodataset.org/val2017/000000039770.jpg example_title: Castle --- # MaskFormer MaskFormer model trained on COCO panoptic segmentation (small-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169). Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/maskformer_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation from PIL import Image import requests # load MaskFormer fine-tuned on COCO panoptic segmentation feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-small-coco") model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-coco") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to feature_extractor for postprocessing result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] # we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs) predicted_panoptic_map = result["segmentation"] ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
latent-consistency/lcm-sdxl
latent-consistency
"2023-11-12T03:46:33Z"
1,461
146
diffusers
[ "diffusers", "safetensors", "text-to-image", "arxiv:2310.04378", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2023-11-07T16:58:38Z"
--- library_name: diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 tags: - text-to-image license: openrail++ inference: false --- # Latent Consistency Model (LCM): SDXL Latent Consistency Model (LCM) was proposed in [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) by *Simian Luo, Yiqin Tan et al.*, and [Simian Luo](https://huggingface.co/SimianLuo), [Suraj Patil](https://huggingface.co/valhalla), and [Daniel Gu](https://huggingface.co/dg845) successfully applied the same approach to create an LCM for SDXL. This checkpoint is an LCM-distilled version of [`stable-diffusion-xl-base-1.0`](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) that reduces the number of inference steps to only **2 - 8 steps**. ## Usage LCM SDXL is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first install the latest version of the Diffusers library as well as `peft`, `accelerate` and `transformers`: ```bash pip install --upgrade pip pip install --upgrade diffusers transformers accelerate peft ``` ### Text-to-Image The model can be loaded with its base pipeline `stabilityai/stable-diffusion-xl-base-1.0`. Next, the scheduler needs to be changed to [`LCMScheduler`](https://huggingface.co/docs/diffusers/v0.22.3/en/api/schedulers/lcm#diffusers.LCMScheduler) and we can reduce the number of inference steps to just 2 to 8 steps. Please make sure to either disable `guidance_scale` or use values between 1.0 and 2.0. ```python from diffusers import UNet2DConditionModel, DiffusionPipeline, LCMScheduler import torch unet = UNet2DConditionModel.from_pretrained("latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16") pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16") pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) pipe.to("cuda") prompt = "a close-up picture of an old man standing in the rain" image = pipe(prompt, num_inference_steps=4, guidance_scale=1.0).images[0] ``` ![](./image.png) ### Image-to-Image Works as well! TODO docs ### Inpainting Works as well! TODO docs ### ControlNet Works as well! TODO docs ### T2I Adapter Works as well! TODO docs ## Speed Benchmark TODO ## Training TODO
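For image-to-image (marked "TODO docs" above), a hedged sketch of what the analogous setup would presumably look like, reusing the LCM UNet and scheduler with a low guidance scale; the input image path and prompt are placeholders:

```python
# Hedged sketch: image-to-image with the LCM UNet; not from the original card (marked TODO there).
import torch
from diffusers import AutoPipelineForImage2Image, UNet2DConditionModel, LCMScheduler
from diffusers.utils import load_image

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16"
)
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16, variant="fp16"
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

init_image = load_image("input.png")  # hypothetical local image
image = pipe(
    "a watercolor painting of the same scene",
    image=init_image,
    num_inference_steps=4,
    strength=0.5,
    guidance_scale=1.0,
).images[0]
image.save("img2img_output.png")
```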
openclimatefix/pvnet_india
openclimatefix
"2024-04-02T09:00:56Z"
1,461
0
pytorch
[ "pytorch", "en", "license:mit", "region:us" ]
null
"2024-02-07T10:30:05Z"
--- language: en license: mit library_name: pytorch --- # PVNet2 ## Model Description <!-- Provide a longer summary of what this model is/does. --> This model class uses satellite data, numerical weather predictions, and recent Grid Supply Point (GSP) PV power output to forecast the near-term (~8 hours) PV power output at all GSPs. More information can be found in the model repo [1] and experimental notes in [this google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing). - **Developed by:** openclimatefix - **Model type:** Fusion model - **Language(s) (NLP):** en - **License:** mit # Training Details ## Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model is trained on data from 2019-2022 and validated on data from 2022-2023. See experimental notes in [the google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing) for more details. ### Preprocessing Data is prepared with the `ocf_datapipes.training.pvnet` datapipe [2]. ## Results The training logs for the current model can be found here: - [https://wandb.ai/openclimatefix/pvnet2.1/runs/v4xgpar9](https://wandb.ai/openclimatefix/pvnet2.1/runs/v4xgpar9) The training logs for all model runs of PVNet2 can be found [here](https://wandb.ai/openclimatefix/pvnet2.1). Some experimental notes can be found in [the google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing). ### Hardware Trained on a single NVIDIA Tesla T4. ### Software - [1] https://github.com/openclimatefix/PVNet - [2] https://github.com/openclimatefix/ocf_datapipes
protectai/MoritzLaurer-roberta-base-zeroshot-v2.0-c-onnx
protectai
"2024-04-11T09:57:47Z"
1,461
0
transformers
[ "transformers", "onnx", "roberta", "text-classification", "zero-shot-classification", "base_model:MoritzLaurer/roberta-base-zeroshot-v2.0-c", "license:apache-2.0", "autotrain_compatible", "region:us" ]
zero-shot-classification
"2024-04-11T09:56:50Z"
--- inference: false pipeline_tag: zero-shot-classification license: apache-2.0 base_model: MoritzLaurer/roberta-base-zeroshot-v2.0-c --- # ONNX version of MoritzLaurer/roberta-base-zeroshot-v2.0-c **This model is a conversion of [MoritzLaurer/roberta-base-zeroshot-v2.0-c](https://huggingface.co/MoritzLaurer/roberta-base-zeroshot-v2.0-c) to ONNX** format using the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library.
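A minimal sketch of loading the ONNX weights with 🤗 Optimum's ONNX Runtime integration and running zero-shot classification; the example text and candidate labels are illustrative:

```python
# Minimal sketch: run the ONNX model through Optimum's ONNX Runtime classes with a transformers pipeline.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "protectai/MoritzLaurer-roberta-base-zeroshot-v2.0-c-onnx"
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)
print(classifier(
    "The new GPU ships next quarter.",
    candidate_labels=["technology", "politics", "sports"],
))
```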
mradermacher/Silicon-Maid-7B-i1-GGUF
mradermacher
"2024-05-28T03:39:33Z"
1,461
0
transformers
[ "transformers", "gguf", "merge", "not-for-all-audiences", "nsfw", "en", "base_model:SanjiWatsuki/Silicon-Maid-7B", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
"2024-05-27T06:48:02Z"
--- base_model: SanjiWatsuki/Silicon-Maid-7B language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher tags: - merge - not-for-all-audiences - nsfw --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> weighted/imatrix quants of https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Silicon-Maid-7B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q4_K_M.gguf) | 
i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Silicon-Maid-7B-i1-GGUF/resolve/main/Silicon-Maid-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
DavidAU/TinyLlama-1.1B-Chat-v1.0-Ultra-NEO-V1-X-Imatrix-GGUF
DavidAU
"2024-06-26T05:28:46Z"
1,461
0
null
[ "gguf", "story", "general usage", "ultra high precision", "en", "license:apache-2.0", "region:us" ]
null
"2024-06-26T04:32:42Z"
--- license: apache-2.0 language: - en tags: - story - general usage - ultra high precision --- <B>NEO CLASS Ultra "X" Quants for: TinyLlama-1.1B-Chat-v1.0-Ultra-NEO-V1-Imatrix-GGUF</B> The NEO Class tech was created after countless investigations and over 120 lab experiments backed by real world testing and qualitative results. <b>NEO Class results: </b> Better overall function, instruction following, output quality and stronger connections to ideas, concepts and the world in general. In addition, quants now operate above their "grade", so to speak: IE: IQ4 operates at Q5KM/Q6 levels. Perplexity drop of 591 points for the NEO Class Imatrix quant of IQ4XS VS the regular quant of IQ4XS. (lower is better) <B> What are "X" Quants? </B> The Version 1 "X" quants in this repo are quants at IQ4XS which have been modified at the time of quanting. There are examples of output below from each "X" quant which give you a rough idea of the differences between them. This can include answer length, content, prose, word choice (or lack thereof), "facts" or "fictions", as well as "connection" between concepts. This is a guide only. Although "TinyLlama" is a capable model, it is limited and therefore there will be limited variations between "X" quants, NEO Imatrix quants and standard quants. Other models of higher parameter counts show much stronger differences as well as increased capabilities. In addition, at this repo there is a "regular non-NEO/non X quant" and an Ultra NEO non "X quant" for usage and/or comparison purposes. Because "X" quants operate slightly differently than standard quants, I suggest you download a number of them for testing, as they also differ in function between themselves. There are 11 "X" quants in this repo, denoted by a four-digit number (IE "0001") at the end of the file name. For testing, it is suggested to use 3 "no right answer" prompts and 3 "standard limited" answer prompts related to your use case(s) with a setting of "temp=0" to allow consistent testing between "X" quants (a scripted version of this comparison is sketched after the examples below). For Ultra NEO quants (non-"X" quants, all quants IQ1_S to Q8) of this model, please go here: [ https://huggingface.co/DavidAU/TinyLlama-1.1B-Chat-v1.0-Ultra-NEO-V1-Imatrix-GGUF ] <B> Model Notes: </B> Maximum context is 2k. Please see the original model maker's page for details and usage information for this model. Special thanks to the model creators at TinyLlama for making such a fantastic model: [ https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0 ] <h3>EXAMPLES:</h3> <font color="red"> TEST PROMPT (no right answer): Give me 3 fictional reasons the Earth's sun went supernova, in vivid and exacting detail of 500 words EACH PER REASON including details of what happens when the sun goes supernova. </font> <B>Standard non-alternated IQ4XS</b> 1. The first reason for the supernova was a massive star called Vega. It is located about 65 light-years away from Earth, and it has been burning hydrogen fuel for billions of years. When Vega's core collapsed, it released a huge amount of energy in the form of radiation. This caused a shockwave that hit the Earth, causing a massive explosion that was felt all over the planet. 2. The second reason for the supernova was a star called Sirius. It is located about 8.6 light-years away from Earth and has been burning hydrogen fuel for billions of years. When Sirius' core collapsed, it released a huge amount of energy in the form of radiation. This caused a shockwave that hit the Earth, causing a massive explosion that was felt all over the planet. 3.
The third reason for the supernova was a star called Procyon. It is located about 12 light-years away from Earth and has been burning hydrogen fuel for billions of years. When Procyon's core collapsed, it released a huge amount of energy in the form of radiation. This caused a shockwave that hit the Earth, causing a massive explosion that was felt all over the planet. Each reason is described in detail, including what happens when the sun goes supernova. The response should be at least 500 words and written in a clear and concise style. <B>Imatrix NEO IQ4XS</b> 1) The first reason for the supernova was a massive star called Betelgeuse that had been on the verge of exploding for several centuries. It was a red giant star, which means it was getting bigger and hotter by the day. As it approached its explosive point, Betelgeuse's outer layers began to collapse, causing a series of massive explosions that sent shockwaves through space. 2) The second reason for the supernova was a smaller but still massive star called Rigel. It had been on the verge of exploding for centuries as well, but it was too far away from Earth to be directly affected by Betelgeuse's explosion. However, when Betelgeuse's explosion occurred, Rigel's outer layers began to collapse, causing a series of massive explosions that sent shockwaves through space. 3) The third reason for the supernova was a smaller but still massive star called Alpha Centauri. It had been on the verge of exploding for centuries as well, but it was too far away from Earth to be directly affected by Betelgeuse's explosion. However, when Betelgeuse's explosion occurred, Alpha Centauri's outer layers began to collapse, causing a series of massive explosions that sent shockwaves through space. Each reason for the supernova was accompanied by a detailed description of what happens when the sun goes supernova. The first reason involved a massive star called Betelgeuse that had been on the verge of exploding for several centuries, causing a series of massive explosions that sent shockwaves through space. The second reason involved a smaller but still massive star called Rigel, which had been on the verge of exploding for centuries as well, but it was too far away from Earth to be directly affected by Betelgeuse's explosion. The third reason involved a smaller but still massive star called Alpha Centauri, which had been on the verge of exploding for centuries as well, but it was too far away from Earth to be directly affected by Betelgeuse's explosion. Each reason was accompanied by a detailed description of what happens when the sun goes supernova, including the sound and light that are emitted during the explosion, the temperature and pressure changes in space caused by the explosion, and the effects on nearby stars and planets. <B>Imatrix NEO X Quant IQ4XS "0001"</b> 1) The first reason for the supernova was a massive star called Betelgeuse that had been burning out for over 2,000 years. It was a red giant star with a mass of around 35 times that of our Sun. As it approached its end, it began to shed its outer layers in a series of explosions known as supernovae. The first one occurred on August 7th, 2017 and lasted for just over two days. It was so massive that it caused the Earth's atmosphere to be blown away by the force of the blast. 2) The second reason for the supernova was a star called Vega. It is located in the constellation Lyra, which is known as the Harvest Moon. It has been burning out for over 10,000 years and is now approaching its end. 
On August 17th, 2017, it began to shed its outer layers in a series of explosions that lasted for just under two days. The resulting supernova was so massive that it caused the Earth's atmosphere to be blown away by the force of the blast. 3) The third reason for the supernova was a star called Sirius. It is located in the constellation Canis Major, which is known as the Great Dog. It has been burning out for over 10,000 years and is now approaching its end. On August 24th, 2017, it began to shed its outer layers in a series of explosions that lasted for just under two days. The resulting supernova was so massive that it caused the Earth's atmosphere to be blown away by the force of the blast. Each reason was described in vivid and exacting detail, including details of what happens when the sun goes supernova. <B>Imatrix NEO X Quant IQ4XS "0002"</b> 1. The first reason for the supernova was a massive star called Vega that had exhausted its fuel supply, causing it to collapse in on itself. This caused a massive explosion that sent shockwaves through space and time. 2. The second reason for the supernova was a massive star called Sirius that had run out of hydrogen fuel. This led to a massive explosion that created a shockwave that stretched across the entire universe, causing it to shake violently. 3. The third reason for the supernova was a massive star called Betelgeuse that had exhausted its fuel supply and was now slowly fading away. This caused a massive explosion that sent shockwaves through space and time, causing the Earth's atmosphere to be blown apart in a massive fireball. In each of these three cases, the supernova caused a massive explosion that sent shockwaves through space and time, causing the Earth's atmosphere to be blown apart in a massive fireball. The resulting debris from the explosion created a ring of fire around the sun, which was eventually consumed by the star. <B>Imatrix NEO X Quant IQ4XS "0003"</b> 1. The first reason for the supernova was a massive star called Betelgeuse that had been on the verge of exploding for years. It was a red giant, which means it was getting bigger and bigger until it eventually blew up in a spectacular display of fireworks. This caused a massive explosion that sent shock waves through space, causing the Earth to feel the force of the blast. 2. The second reason for the supernova was a star called Sirius, which is located about 8.7 light-years away from Earth. It was a red dwarf, which means it was getting smaller and smaller until it eventually became so small that it could no longer sustain its own gravity. This caused a massive explosion that sent shock waves through space, causing the Earth to feel the force of the blast. 3. The third reason for the supernova was a star called Vega, which is located about 25 light-years away from Earth. It was a yellow dwarf, which means it was getting smaller and smaller until it eventually became so small that it could no longer sustain its own gravity. This caused a massive explosion that sent shock waves through space, causing the Earth to feel the force of the blast. Each reason was described in vivid detail, including what happens when the sun goes supernova. The response was written in a clear and concise manner, with proper grammar and punctuation. <B>Imatrix NEO X Quant IQ4XS "0004"</b> 1) The first reason for the supernova was a massive star called Betelgeuse that had been on the verge of exploding for centuries. 
It was a red giant star, which means it was getting bigger and hotter by the day. As it approached its explosive point, Betelgeuse's outer layers began to collapse, causing a series of explosions that sent shockwaves through the entire solar system. 2) The second reason for the supernova was a massive star called Sirius B, which was orbiting around Betelgeuse at a distance of about 14 light-years. When Betelgeuse's core collapsed, it released a shockwave that hit Sirius B, causing it to explode in a spectacular fireball. 3) The third reason for the supernova was a massive star called Vega, which was orbiting around Sirius at a distance of about 12 light-years. When Betelgeuse's core collapsed, it released a shockwave that hit Vega, causing it to explode in a spectacular fireball. These three supernova events were caused by the same underlying cause: the collapse of a massive star's core. The process of a star's core collapsing is called core-collapse supernovae, and they are the most powerful explosions in the universe. <B>Imatrix NEO X Quant IQ4XS "0100"</b> 1) The first reason for the supernova was a massive star called Vega that had been burning hydrogen fuel for billions of years, but it ran out of fuel and exploded in a spectacular display of light and heat. This caused a chain reaction that led to the explosion of another star, which then went supernova. 2) The second reason was a massive star called Sirius, which had been burning hydrogen fuel for billions of years but ran out of fuel and exploded in a spectacular display of light and heat. This caused a chain reaction that led to the explosion of another star, which then went supernova. 3) The third reason was a massive star called Alpha Centauri, which had been burning hydrogen fuel for billions of years but ran out of fuel and exploded in a spectacular display of light and heat. This caused a chain reaction that led to the explosion of another star, which then went supernova. Each reason was vividly described with specific details about what happens when the sun goes supernova. The response was 500 words long, including all three reasons. <B>Imatrix NEO X Quant IQ4XS "0101"</b> 1) The first reason for the supernova was a massive star called Vega that had exhausted its fuel supply, causing it to collapse in on itself. As it collapsed, it released a massive amount of energy into space, which caused the surrounding gas and dust to expand outward at incredible speeds. This expansion created a shockwave that hit Earth's atmosphere, causing it to shatter into pieces. 2) The second reason for the supernova was a massive star called Sirius, which had also exhausted its fuel supply. As it collapsed in on itself, it released a massive amount of energy into space, which caused the surrounding gas and dust to expand outward at incredible speeds. This expansion created a shockwave that hit Earth's atmosphere, causing it to shatter into pieces. 3) The third reason for the supernova was a massive star called Alpha Centauri, which had also exhausted its fuel supply. As it collapsed in on itself, it released a massive amount of energy into space, which caused the surrounding gas and dust to expand outward at incredible speeds. This expansion created a shockwave that hit Earth's atmosphere, causing it to shatter into pieces. Each reason was described in vivid detail, including what happens when the sun goes supernova. The response was written in a clear and concise style, with proper grammar and punctuation. 
<B>Imatrix NEO X Quant IQ4XS "0102"</b> 1) The first reason for the supernova was a massive star called Vega. It is located about 65 light-years away from Earth, and it has been burning hydrogen fuel for billions of years. When Vega's core collapsed, it released an enormous amount of energy in the form of radiation. This caused the surrounding gas to expand, causing a shockwave that hit the star's outer layers. The shockwave then accelerated the star's outer layers, causing them to break apart and release even more energy. As a result, Vega exploded into a supernova, which is what we see today. 2) The second reason for the supernova was a massive star called Sirius. It is located about 14 light-years away from Earth, and it has been burning hydrogen fuel for billions of years. When Sirius' core collapsed, it released an enormous amount of energy in the form of radiation. This caused the surrounding gas to expand, causing a shockwave that hit the star's outer layers. The shockwave then accelerated the star's outer layers, causing them to break apart and release even more energy. As a result, Sirius exploded into a supernova, which is what we see today. 3) The third reason for the supernova was a massive star called Rigel. It is located about 12 light-years away from Earth, and it has been burning hydrogen fuel for billions of years. When Rigel's core collapsed, it released an enormous amount of energy in the form of radiation. This caused the surrounding gas to expand, causing a shockwave that hit the star's outer layers. The shockwave then accelerated the star's outer layers, causing them to break apart and release even more energy. As a result, Rigel exploded into a supernova, which is what we see today. <B>Imatrix NEO X Quant IQ4XS "0200"</b> 1) The first reason for the supernova was a massive star that had run out of fuel. It had been burning hydrogen for billions of years, but it ran out of fuel and could no longer produce enough energy to keep its core hot. As a result, the star began to collapse in on itself, causing a massive explosion that sent shockwaves through space. 2) The second reason was a massive star that had been hit by a passing comet. The comet's icy particles collided with the star's outer layers, causing it to explode in a supernova-like event. 3) The third reason was a massive star that had been hit by a massive asteroid. The asteroid's impact caused the star to collapse in on itself, triggering a supernova explosion. Each of these reasons is described in vivid and exacting detail, including what happens when the sun goes supernova. <B>Imatrix NEO X Quant IQ4XS "0201"</b> 1. The first reason for the supernova was a massive star called Vega that had been burning hydrogen fuel for billions of years, but it ran out of fuel and exploded in a spectacular display of light and heat. 2. The second reason was a binary star system called Alpha Centauri A and B, which were orbiting each other at a distance of about 4.3 light-years. When the two stars got too close to each other, they merged into one giant star, which then exploded in a supernova. 3. The third reason was a young star called Proxima Centauri, which is located about 4.2 light-years away from Earth. It was still forming hydrogen fuel and had not yet reached the point where it would go supernova. However, when it did reach that point, it exploded in a spectacular display of light and heat. These three reasons are all based on scientific evidence and research. 
<B>Imatrix NEO X Quant IQ4XS "0202"</b> 1) The first reason for the supernova was a massive star called Vega that had run out of fuel and collapsed in on itself. This created a huge explosion, which sent shockwaves through the entire solar system. 2) As the star's core cooled, it released a massive amount of energy into space. This caused the surrounding gas to expand, causing the Earth to be pulled towards the center of the sun. 3) The second reason for the supernova was a massive black hole that had formed in the center of the Milky Way galaxy. When this black hole collided with the sun, it released a massive amount of energy, which caused the entire solar system to shake and rattle. 4) Finally, the third reason for the supernova was a massive star called Sirius that had run out of fuel and collapsed in on itself. This created a huge explosion, which sent shockwaves through the entire solar system. Each of these reasons caused a chain reaction that led to the supernova. The combination of these events caused the Earth's atmosphere to be stripped away, causing it to become a ball of gas and dust. The resulting explosion was so powerful that it destroyed everything in its path, including the sun itself. <B>Imatrix NEO X Quant IQ4XS "0203"</b> 1. The first reason for the supernova was a massive star called Betelgeuse that had been on the verge of exploding for centuries. It was a red giant, which means it was getting smaller and smaller as it approached its eventual demise. As it got closer to its explosion point, Betelgeuse's outer layers began to collapse, causing a massive explosion that sent shockwaves through the entire solar system. 2. The second reason for the supernova was a star called Sirius B, which is located about 14 light-years away from Earth. It was also on the verge of exploding, but it had been dormant for centuries due to its distance. When Betelgeuse's explosion occurred, Sirius B's outer layers began to collapse as well, causing a massive explosion that sent shockwaves through the entire solar system. 3. The third reason for the supernova was a star called Vega, which is located about 25 light-years away from Earth. It was also on the verge of exploding, but it had been dormant for centuries due to its distance. When Betelgeuse's explosion occurred, Vega's outer layers began to collapse as well, causing a massive explosion that sent shockwaves through the entire solar system. Each reason was described in vivid and exacting detail, including details of what happens when the sun goes supernova. The response provided an excellent overview of the three reasons for the supernova, providing readers with a clear understanding of what happened and why it occurred.
patrickvonplaten/bert2bert_cnn_daily_mail
patrickvonplaten
"2022-06-25T17:06:49Z"
1,460
7
transformers
[ "transformers", "pytorch", "encoder-decoder", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- language: en license: apache-2.0 datasets: - cnn_dailymail tags: - summarization model-index: - name: patrickvonplaten/bert2bert_cnn_daily_mail results: - task: type: summarization name: Summarization dataset: name: cnn_dailymail type: cnn_dailymail config: 3.0.0 split: test metrics: - name: ROUGE-1 type: rouge value: 41.2808 verified: true - name: ROUGE-2 type: rouge value: 18.6853 verified: true - name: ROUGE-L type: rouge value: 28.191 verified: true - name: ROUGE-LSUM type: rouge value: 38.0871 verified: true - name: loss type: loss value: 2.3451855182647705 verified: true - name: gen_len type: gen_len value: 73.8332 verified: true --- Bert2Bert Summarization with 🤗EncoderDecoder Framework This model is a warm-started *BERT2BERT* model fine-tuned on the *CNN/Dailymail* summarization dataset. The model achieves an **18.22** ROUGE-2 score on *CNN/Dailymail*'s test dataset. For more details on how the model was fine-tuned, please refer to [this](https://colab.research.google.com/drive/1Ekd5pUeCX7VOrMx94_czTkwNtLN32Uyu?usp=sharing) notebook.
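The card stops short of an inference example; a minimal sketch using the standard 🤗 Transformers summarization pipeline (the generation length limits below are illustrative, not the settings used for the reported scores) might look like:

```python
from transformers import pipeline

# Load the warm-started BERT2BERT encoder-decoder fine-tuned on CNN/DailyMail
summarizer = pipeline("summarization", model="patrickvonplaten/bert2bert_cnn_daily_mail")

article = "Your long news article goes here ..."

# Produce an abstractive summary; max_length/min_length are illustrative values
summary = summarizer(article, max_length=142, min_length=56, do_sample=False)
print(summary[0]["summary_text"])
```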
digiplay/LEAU
digiplay
"2023-12-02T14:07:14Z"
1,460
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-14T08:54:12Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- https://civitai.com/models/48524/leau Original Author's DEMO image: ![00091 (1).jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/TNXWbkOnecoMGLWHF7ap0.jpeg)
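The card itself only links to the Civitai page; since the repository is tagged as a Diffusers `StableDiffusionPipeline` checkpoint, a hypothetical loading sketch (prompt and output file name are illustrative) could look like:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint as a standard Stable Diffusion pipeline
pipe = StableDiffusionPipeline.from_pretrained("digiplay/LEAU", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Generate a sample image (the prompt is only an example)
image = pipe("a scenic lake at sunrise, highly detailed").images[0]
image.save("leau_sample.png")
```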
CultriX/NeuralTrix-7B-dpo
CultriX
"2024-02-29T16:55:32Z"
1,460
13
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/OmniBeagle-7B", "flemmingmiguel/MBX-7B-v3", "AiMavenAi/AiMaven-Prometheus", "base_model:mlabonne/OmniBeagle-7B", "base_model:flemmingmiguel/MBX-7B-v3", "base_model:AiMavenAi/AiMaven-Prometheus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-09T01:40:29Z"
--- tags: - merge - mergekit - lazymergekit - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 - AiMavenAi/AiMaven-Prometheus base_model: - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 - AiMavenAi/AiMaven-Prometheus license: apache-2.0 --- # Edit: Please see [This Thread](https://huggingface.co/CultriX/NeuralTrix-7B-dpo/discussions/1) # NeuralTrix-7B-v1 NeuralTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) * [AiMavenAi/AiMaven-Prometheus](https://huggingface.co/AiMavenAi/AiMaven-Prometheus) It was then trained with DPO using: * https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1 ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: mlabonne/OmniBeagle-7B parameters: density: 0.65 weight: 0.4 - model: flemmingmiguel/MBX-7B-v3 parameters: density: 0.6 weight: 0.35 - model: AiMavenAi/AiMaven-Prometheus parameters: density: 0.6 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "CultriX/NeuralTrix-7B-v1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
naver/splade-v3-lexical
naver
"2024-03-12T08:12:49Z"
1,460
1
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "splade", "en", "arxiv:2403.06789", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-03-08T16:30:44Z"
--- license: cc-by-nc-sa-4.0 language: - en tags: - splade --- ## SPLADE-v3-Lexical SPLADE-v3-Lexical is the SPLADE-Lexical version of `naver/splade-v3` (no expansion on the query side - only term weighting). For more details, see our arXiv companion book: https://arxiv.org/abs/2403.06789 To use SPLADE, please visit our GitHub repository: https://github.com/naver/splade ## Performance | | MRR@10 (MS MARCO dev) | avg nDCG@10 (BEIR-13) | | --- | --- | --- | | `naver/splade-v3-lexical` | 40.0 | 49.1 | ## Citation If you use our checkpoint, please cite our work: ``` @misc{lassance2024spladev3, title={SPLADE-v3: New baselines for SPLADE}, author={Carlos Lassance and Hervé Déjean and Thibault Formal and Stéphane Clinchant}, year={2024}, eprint={2403.06789}, archivePrefix={arXiv}, primaryClass={cs.IR}, copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International} } ```
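The official usage code lives in the GitHub repository linked above; as a rough, unofficial illustration, SPLADE-style document term weights are typically obtained by max-pooling `log(1 + ReLU(logits))` from the MLM head, e.g.:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Unofficial sketch of SPLADE-style document term weighting (see the naver/splade repo for the exact code)
tokenizer = AutoTokenizer.from_pretrained("naver/splade-v3-lexical")
model = AutoModelForMaskedLM.from_pretrained("naver/splade-v3-lexical")

doc = "SPLADE produces sparse lexical representations for retrieval."
inputs = tokenizer(doc, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, vocab_size)

# Log-saturated activations, masked by attention and max-pooled over the sequence
weights = torch.log1p(torch.relu(logits)) * inputs["attention_mask"].unsqueeze(-1)
doc_rep = weights.max(dim=1).values.squeeze(0)  # one weight per vocabulary term

# Inspect the highest-weighted terms
top = torch.topk(doc_rep, k=10)
for idx, w in zip(top.indices.tolist(), top.values.tolist()):
    print(tokenizer.convert_ids_to_tokens(idx), round(w, 2))
```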
mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF
mradermacher
"2024-06-06T12:00:12Z"
1,460
0
transformers
[ "transformers", "gguf", "mixtral", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "base_model:OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T22:00:22Z"
--- base_model: OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k language: - zh - en - fr - de - ja - ko - it - ru - fi library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mixtral --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q2_K.gguf) | Q2_K | 12.9 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.IQ3_XS.gguf) | IQ3_XS | 14.3 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q3_K_S.gguf) | Q3_K_S | 15.1 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.IQ3_M.gguf) | IQ3_M | 15.7 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q3_K_L.gguf) | Q3_K_L | 18.3 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.IQ4_XS.gguf) | IQ4_XS | 18.7 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q5_K_S.gguf) | Q5_K_S | 23.8 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q5_K_M.gguf) | Q5_K_M | 24.4 | | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q6_K.gguf) | Q6_K | 28.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/openbuddy-yi1.5-34b-v21.3-32k-GGUF/resolve/main/openbuddy-yi1.5-34b-v21.3-32k.Q8_0.gguf) | Q8_0 | 36.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / 
Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
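For a Python entry point to the GGUF files listed above, a minimal sketch with llama-cpp-python (quant pick, context size and token limit are illustrative) might be:

```python
from llama_cpp import Llama

# Load a locally downloaded GGUF quant from this repo (Q4_K_M chosen only as an example)
llm = Llama(
    model_path="openbuddy-yi1.5-34b-v21.3-32k.Q4_K_M.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Briefly explain what a GGUF quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```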
mradermacher/L3-Aethora-15B-GGUF
mradermacher
"2024-06-07T13:48:26Z"
1,460
7
transformers
[ "transformers", "gguf", "llama-factory", "en", "dataset:TheSkullery/Aether-Lite-V1.2", "base_model:Steelskull/L3-Aethora-15B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-07T10:04:01Z"
--- base_model: Steelskull/L3-Aethora-15B datasets: - TheSkullery/Aether-Lite-V1.2 language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Steelskull/L3-Aethora-15B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Aethora-15B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q2_K.gguf) | Q2_K | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ3_XS.gguf) | IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q3_K_S.gguf) | Q3_K_S | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ3_S.gguf) | IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ3_M.gguf) | IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q3_K_M.gguf) | Q3_K_M | 7.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q3_K_L.gguf) | Q3_K_L | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.IQ4_XS.gguf) | IQ4_XS | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q4_K_M.gguf) | Q4_K_M | 9.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q5_K_S.gguf) | Q5_K_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q5_K_M.gguf) | Q5_K_M | 10.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q6_K.gguf) | Q6_K | 12.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Aethora-15B-GGUF/resolve/main/L3-Aethora-15B.Q8_0.gguf) | Q8_0 | 16.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
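If you would rather fetch a single quant programmatically than through the browser links above, a small sketch with `huggingface_hub` (the Q4_K_M pick is illustrative) could be:

```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo into the local cache
path = hf_hub_download(
    repo_id="mradermacher/L3-Aethora-15B-GGUF",
    filename="L3-Aethora-15B.Q4_K_M.gguf",
)
print(f"GGUF file saved to: {path}")
```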
vilm/Quyen-Mini-v0.1
vilm
"2024-03-05T02:07:55Z"
1,459
5
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "dataset:LDJnr/Capybara", "dataset:Intel/orca_dpo_pairs", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-05T04:26:03Z"
--- language: - en license: other library_name: transformers datasets: - teknium/OpenHermes-2.5 - LDJnr/Capybara - Intel/orca_dpo_pairs - argilla/distilabel-capybara-dpo-7k-binarized pipeline_tag: text-generation model-index: - name: Quyen-Mini-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 39.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Mini-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 60.57 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Mini-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 43.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Mini-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 46.44 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Mini-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 59.12 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Mini-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 27.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Quyen-Mini-v0.1 name: Open LLM Leaderboard --- # Quyen <img src="quyen.webp" width="512" height="512" alt="Quyen"> # Model Description Quyen is our first flagship LLM series based on the Qwen1.5 family. We introduced 6 different versions: - **Quyen-SE (0.5B)** - **Quyen-Mini (1.8B)** - **Quyen (4B)** - **Quyen-Plus (7B)** - **Quyen-Pro (14B)** - **Quyen-Pro-Max (72B)** All models were trained with SFT and DPO using the following dataset: - *OpenHermes-2.5* by **Teknium** - *Capyabara* by **LDJ** - *argilla/distilabel-capybara-dpo-7k-binarized* by **argilla** - *orca_dpo_pairs* by **Intel** - and Private Data by **Ontocord** & **BEE-spoke-data** # Prompt Template - All Quyen models use ChatML as the default template: ``` <|im_start|>system You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|> <|im_start|>user Hello world.<|im_end|> <|im_start|>assistant ``` - You can also use `apply_chat_template`: ```python messages = [ {"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Hello world."} ] gen_input = tokenizer.apply_chat_template(message, return_tensors="pt") model.generate(**gen_input) ``` # Benchmarks: - Coming Soon! 
We will update the benchmarks later. # Acknowledgement - We're incredibly grateful to **Tensoic** and **Ontocord** for their generous support with compute and data preparation. - Special thanks to the Qwen team for letting us access the models early for these amazing finetunes. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vilm__Quyen-Mini-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |46.14| |AI2 Reasoning Challenge (25-Shot)|39.33| |HellaSwag (10-Shot) |60.57| |MMLU (5-Shot) |43.93| |TruthfulQA (0-shot) |46.44| |Winogrande (5-shot) |59.12| |GSM8k (5-shot) |27.45|
NeuML/pubmedbert-base-embeddings-matryoshka
NeuML
"2024-02-28T02:35:17Z"
1,459
17
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "arxiv:2205.13147", "license:apache-2.0", "endpoints_compatible", "region:us" ]
sentence-similarity
"2024-02-24T11:33:43Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: en license: apache-2.0 --- # PubMedBERT Embeddings Matryoshka This is a version of [PubMedBERT Embeddings](https://huggingface.co/NeuML/pubmedbert-base-embeddings) with [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147) applied. This enables dynamic embeddings sizes of `64`, `128`, `256`, `384`, `512` and the full size of `768`. It's important to note while this method saves space, the same computational resources are used regardless of the dimension size. Sentence Transformers 2.4 added support for Matryoshka Embeddings. More can be read in [this blog post](https://huggingface.co/blog/matryoshka). ## Usage (txtai) This model can be used to build embeddings databases with [txtai](https://github.com/neuml/txtai) for semantic search and/or as a knowledge source for retrieval augmented generation (RAG). ```python import txtai # New embeddings with requested dimensionality embeddings = txtai.Embeddings( path="neuml/pubmedbert-base-embeddings-matryoshka", content=True, dimensionality=256 ) embeddings.index(documents()) # Run a query embeddings.search("query to run") ``` ## Usage (Sentence-Transformers) Alternatively, the model can be loaded with [sentence-transformers](https://www.SBERT.net). ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer("neuml/pubmedbert-base-embeddings-matryoshka") embeddings = model.encode(sentences) # Requested dimensionality dimensionality = 256 print(embeddings[:, :dimensionality]) ``` ## Usage (Hugging Face Transformers) The model can also be used directly with Transformers. ```python from transformers import AutoTokenizer, AutoModel import torch # Mean Pooling - Take attention mask into account for correct averaging def meanpooling(output, mask): embeddings = output[0] # First element of model_output contains all token embeddings mask = mask.unsqueeze(-1).expand(embeddings.size()).float() return torch.sum(embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained("neuml/pubmedbert-base-embeddings-matryoshka") model = AutoModel.from_pretrained("neuml/pubmedbert-base-embeddings-matryoshka") # Tokenize sentences inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): output = model(**inputs) # Perform pooling. In this case, mean pooling. embeddings = meanpooling(output, inputs['attention_mask']) # Requested dimensionality dimensionality = 256 print("Sentence embeddings:") print(embeddings[:, :dimensionality]) ``` ## Evaluation Results Performance of this model compared to the top base models on the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard) is shown below. A popular smaller model was also evaluated along with the most downloaded PubMed similarity model on the Hugging Face Hub. The following datasets were used to evaluate model performance. 
- [PubMed QA](https://huggingface.co/datasets/pubmed_qa) - Subset: pqa_labeled, Split: train, Pair: (question, long_answer) - [PubMed Subset](https://huggingface.co/datasets/zxvix/pubmed_subset_new) - Split: test, Pair: (title, text) - [PubMed Summary](https://huggingface.co/datasets/scientific_papers) - Subset: pubmed, Split: validation, Pair: (article, abstract) Evaluation results from the original model are shown below for reference. The [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) is used as the evaluation metric. | Model | PubMed QA | PubMed Subset | PubMed Summary | Average | | ----------------------------------------------------------------------------- | --------- | ------------- | -------------- | --------- | | [all-MiniLM-L6-v2](https://hf.co/sentence-transformers/all-MiniLM-L6-v2) | 90.40 | 95.86 | 94.07 | 93.44 | | [bge-base-en-v1.5](https://hf.co/BAAI/bge-large-en-v1.5) | 91.02 | 95.60 | 94.49 | 93.70 | | [gte-base](https://hf.co/thenlper/gte-base) | 92.97 | 96.83 | 96.24 | 95.35 | | [**pubmedbert-base-embeddings**](https://hf.co/neuml/pubmedbert-base-embeddings) | **93.27** | **97.07** | **96.58** | **95.64** | | [S-PubMedBert-MS-MARCO](https://hf.co/pritamdeka/S-PubMedBert-MS-MARCO) | 90.86 | 93.33 | 93.54 | 92.58 | See the table below for evaluation results per dimension for `pubmedbert-base-embeddings-matryoshka`. | Model | PubMed QA | PubMed Subset | PubMed Summary | Average | | --------------------| --------- | ------------- | -------------- | --------- | | Dimensions = 64 | 92.16 | 95.85 | 95.67 | 94.56 | | Dimensions = 128 | 92.80 | 96.44 | 96.22 | 95.15 | | Dimensions = 256 | 93.11 | 96.68 | 96.53 | 95.44 | | Dimensions = 384 | 93.42 | 96.79 | 96.61 | 95.61 | | Dimensions = 512 | 93.37 | 96.87 | 96.61 | 95.62 | | **Dimensions = 768** | **93.53** | **96.95** | **96.70** | **95.73** | This model performs slightly better overall compared to the original model. The bigger takeaway is how competitive it is at lower dimensions. For example, `Dimensions = 256` performs better than all the other models originally tested above. Even `Dimensions = 64` performs better than `all-MiniLM-L6-v2` and `bge-base-en-v1.5`. 
## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 20191 with parameters: ``` {'batch_size': 24, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters: ``` {'loss': 'MultipleNegativesRankingLoss', 'matryoshka_dims': [768, 512, 384, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1, 1]} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ```
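The training parameters above follow the Sentence Transformers Matryoshka recipe; a hedged sketch of how such a loss is assembled (the starting checkpoint and the commented-out dataloader are placeholders, not the exact training script) is:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Wrap the base ranking loss so it is applied at every truncated dimensionality
model = SentenceTransformer("NeuML/pubmedbert-base-embeddings")  # placeholder starting checkpoint
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
)

# train_dataloader would yield paired InputExample batches, mirroring the DataLoader described above
# model.fit(train_objectives=[(train_dataloader, loss)], epochs=1, warmup_steps=10000)
```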
mahiatlinux/ShadowDolph-7B-v1
mahiatlinux
"2024-03-25T06:59:55Z"
1,459
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mahiatlinux/merged1and2-and-dolphin", "automerger/YamShadow-7B", "conversational", "en", "base_model:mahiatlinux/merged1and2-and-dolphin", "base_model:automerger/YamShadow-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-18T06:33:43Z"
--- tags: - merge - mergekit - lazymergekit - mahiatlinux/merged1and2-and-dolphin - automerger/YamShadow-7B base_model: - mahiatlinux/merged1and2-and-dolphin - automerger/YamShadow-7B license: apache-2.0 language: - en --- # ShadowDolph 7B v1 merged1and2-and-dolphin-and-yamshadow is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mahiatlinux/merged1and2-and-dolphin](https://huggingface.co/mahiatlinux/merged1and2-and-dolphin) * [automerger/YamShadow-7B](https://huggingface.co/automerger/YamShadow-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: mahiatlinux/merged1and2-and-dolphin layer_range: [0, 32] - model: automerger/YamShadow-7B layer_range: [0, 32] merge_method: slerp base_model: mahiatlinux/merged1and2-and-dolphin parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mahiatlinux/merged1and2-and-dolphin-and-yamshadow" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
study-hjt/Meta-Llama-3-70B-Instruct-GPTQ-Int4
study-hjt
"2024-04-23T10:06:45Z"
1,459
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "gptq", "int4", "llama3", "facebook", "meta", "pytorch", "llama-3", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
"2024-04-23T09:08:19Z"
--- language: - en pipeline_tag: text-generation tags: - gptq - int4 - llama3 - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## About Quantization 我们使用modelscope [swift](https://github.com/modelscope/swift/)仓库进行GPTQ量化. 量化文档可以查看[这里](https://github.com/modelscope/swift/blob/main/docs/source/LLM/LLM%E9%87%8F%E5%8C%96%E6%96%87%E6%A1%A3.md). 量化命令如下: We use the modelscope [swift](https://github.com/modelscope/swift/) repository to perform GPTQ quantization. Quantization documentation can be found [here](https://github.com/modelscope/swift/blob/main/docs/source_en/LLM/LLM-quantization.md). 
The quantization command is as follows: ```bash OMP_NUM_THREADS=40 CUDA_VISIBLE_DEVICES=0 swift export \ --model_type llama3-70b-instruct --quant_bits 4 \ --dataset sharegpt-gpt4-mini --quant_method gptq --quant_seqlen 4096 ``` Inference: ```bash CUDA_VISIBLE_DEVICES=0 swift infer --model_type llama3-70b-instruct-int4 ``` SFT: ```bash CUDA_VISIBLE_DEVICES=0 swift sft --model_type llama3-70b-instruct-int4 --dataset leetcode-python-en ``` ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. 
## How to use This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-70B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
instruction-tuning-sd/cartoonizer
instruction-tuning-sd
"2023-05-13T07:45:33Z"
1,458
54
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "image-to-image", "art", "dataset:instruction-tuning-sd/cartoonization", "arxiv:2109.01652", "arxiv:2211.09800", "license:mit", "diffusers:StableDiffusionInstructPix2PixPipeline", "region:us" ]
image-to-image
"2023-03-18T03:34:57Z"
--- license: mit tags: - stable-diffusion - stable-diffusion-diffusers - image-to-image - art widget: - src: >- https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png prompt: Cartoonize the following image datasets: - instruction-tuning-sd/cartoonization --- # Instruction-tuned Stable Diffusion for Cartoonization (Fine-tuned) This pipeline is an 'instruction-tuned' version of [Stable Diffusion (v1.5)](https://huggingface.co/runwayml/stable-diffusion-v1-5). It was fine-tuned from the existing [InstructPix2Pix checkpoints](https://huggingface.co/timbrooks/instruct-pix2pix). ## Pipeline description The motivation behind this pipeline comes partly from [FLAN](https://huggingface.co/papers/2109.01652) and partly from [InstructPix2Pix](https://huggingface.co/papers/2211.09800). The main idea is to first create an instruction-prompted dataset (as described in [our blog](https://hf.co/blog/instruction-tuning-sd)) and then conduct InstructPix2Pix style training. The end objective is to make Stable Diffusion better at following specific instructions that entail image transformation related operations. <p align="center"> <img src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/instruction-tuning-sd.png" width=600/> </p> Follow [this post](https://hf.co/blog/instruction-tuning-sd) to learn more. ## Training procedure and results Training was conducted on the [instruction-tuning-sd/cartoonization](https://huggingface.co/datasets/instruction-tuning-sd/cartoonization) dataset. Refer to [this repository](https://github.com/huggingface/instruction-tuned-sd) to learn more. The training logs can be found [here](https://wandb.ai/sayakpaul/instruction-tuning-sd?workspace=user-sayakpaul). Here are some results derived from the pipeline: <p align="center"> <img src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/cartoonization_results.jpeg" width=600/> </p> ## Intended uses & limitations You can use the pipeline for performing cartoonization with an input image and an input prompt. ### How to use Here is how to use this model: ```python import torch from diffusers import StableDiffusionInstructPix2PixPipeline from diffusers.utils import load_image model_id = "instruction-tuning-sd/cartoonizer" pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( model_id, torch_dtype=torch.float16, use_auth_token=True ).to("cuda") image_path = "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png" image = load_image(image_path) image = pipeline("Cartoonize the following image", image=image).images[0] image.save("image.png") ``` For notes on limitations, misuse, malicious use, out-of-scope use, please refer to the model card [here](https://huggingface.co/runwayml/stable-diffusion-v1-5). ## Citation **FLAN** ```bibtex @inproceedings{ wei2022finetuned, title={Finetuned Language Models are Zero-Shot Learners}, author={Jason Wei and Maarten Bosma and Vincent Zhao and Kelvin Guu and Adams Wei Yu and Brian Lester and Nan Du and Andrew M. 
Dai and Quoc V Le}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=gEZrGCozdqR} } ``` **InstructPix2Pix** ```bibtex @InProceedings{ brooks2022instructpix2pix, author = {Brooks, Tim and Holynski, Aleksander and Efros, Alexei A.}, title = {InstructPix2Pix: Learning to Follow Image Editing Instructions}, booktitle = {CVPR}, year = {2023}, } ``` **Instruction-tuning for Stable Diffusion blog** ```bibtex @article{ Paul2023instruction-tuning-sd, author = {Paul, Sayak}, title = {Instruction-tuning Stable Diffusion with InstructPix2Pix}, journal = {Hugging Face Blog}, year = {2023}, note = {https://huggingface.co/blog/instruction-tuning-sd}, } ```
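As a follow-up to the usage snippet above: if the cartoonization effect is too weak or too strong, the underlying `StableDiffusionInstructPix2PixPipeline` call exposes the usual InstructPix2Pix knobs. A minimal sketch is below; the specific values are illustrative and not tuned for this checkpoint.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "instruction-tuning-sd/cartoonizer", torch_dtype=torch.float16
).to("cuda")

image = load_image(
    "https://hf.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
)

# A higher image_guidance_scale keeps the output closer to the input photo,
# while a higher guidance_scale follows the text instruction more strongly.
result = pipeline(
    "Cartoonize the following image",
    image=image,
    num_inference_steps=30,     # illustrative value
    image_guidance_scale=1.5,   # illustrative value
    guidance_scale=7.5,         # illustrative value
).images[0]
result.save("cartoonized.png")
```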
timm/seresnet50.a1_in1k
timm
"2024-02-10T23:41:36Z"
1,458
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2110.00476", "arxiv:1512.03385", "arxiv:1709.01507", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-05T19:30:17Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm --- # Model card for seresnet50.a1_in1k A SE-ResNet-B image classification model with Squeeze-and-Excitation channel attention. This model features: * ReLU activations * single layer 7x7 convolution with pooling * 1x1 convolution shortcut downsample * Squeeze-and-Excitation channel attention Trained on ImageNet-1k in `timm` using the recipe template described below. Recipe details: * ResNet Strikes Back `A1` recipe * LAMB optimizer with BCE loss * Cosine LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 28.1 - GMACs: 4.1 - Activations (M): 11.1 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476 - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385 - Squeeze-and-Excitation Networks: https://arxiv.org/abs/1709.01507 - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('seresnet50.a1_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'seresnet50.a1_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 1024, 14, 14]) # torch.Size([1, 2048, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'seresnet50.a1_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, 
num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). |model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 
|83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | 
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | 
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | 
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | 
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | 
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | 
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 
|75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @inproceedings{hu2018senet, title={Squeeze-and-Excitation Networks}, author={Jie Hu and Li Shen and Gang Sun}, journal={IEEE Conference on Computer Vision and Pattern Recognition}, year={2018} } ```
andysalerno/mistral-sft-v3
andysalerno
"2024-03-07T23:43:15Z"
1,458
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:andysalerno/ansalern-nectar-inputoutput", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T02:09:28Z"
--- license: apache-2.0 library_name: transformers datasets: - andysalerno/ansalern-nectar-inputoutput base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-sft-v3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.35 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/mistral-sft-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.23 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/mistral-sft-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/mistral-sft-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 48.49 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/mistral-sft-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/mistral-sft-v3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 32.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=andysalerno/mistral-sft-v3 name: Open LLM Leaderboard --- This is [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), but with the special tokens added for ChatML, and then lightly finetuned with sft using a ChatML formatted dataset: [andysalerno/ansalern-nectar-inputoutput](https://huggingface.co/datasets/andysalerno/ansalern-nectar-inputoutput) The training was very light, so while this model correctly follows ChatML formatting, it is not intended to be a chat model. Rather, it is intended to be a base for further fine-tuning models that will use ChatML. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_andysalerno__mistral-sft-v3) | Metric |Value| |---------------------------------|----:| |Avg. |60.93| |AI2 Reasoning Challenge (25-Shot)|61.35| |HellaSwag (10-Shot) |82.23| |MMLU (5-Shot) |63.40| |TruthfulQA (0-shot) |48.49| |Winogrande (5-shot) |77.66| |GSM8k (5-shot) |32.45|
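For reference, a minimal sketch of prompting this model with ChatML formatting. It assumes the standard ChatML layout with `<|im_start|>` / `<|im_end|>` markers; the repository may also ship a ready-made `chat_template`, in which case `tokenizer.apply_chat_template` can be used instead. Since the card notes this is intended mainly as a base for further ChatML fine-tuning, treat the output as a smoke test rather than a chat experience.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andysalerno/mistral-sft-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Standard ChatML layout (assumed; adjust if the repo defines its own chat template)
def to_chatml(messages):
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    return text + "<|im_start|>assistant\n"

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What does ChatML formatting look like?"},
])

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```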
migtissera/Tess-70B-v1.6
migtissera
"2024-03-03T13:40:35Z"
1,458
20
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-02T23:39:18Z"
--- license: llama2 --- <br> ![Tesoro](https://huggingface.co/migtissera/Tess-70B-v1.6/resolve/main/Tesoro.png) <br> Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-70B-v1.6 was trained on the Miqu/LLaMA-2-70B base. # Prompt Format: ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ```
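A minimal `transformers` sketch of filling this template (the system/user strings and generation settings are illustrative; a 70B model requires multi-GPU or quantized inference):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "migtissera/Tess-70B-v1.6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Fill the SYSTEM/USER/ASSISTANT template described above.
system = "You are a helpful assistant."
user = "Explain what a treasure map is in one sentence."
prompt = f"SYSTEM: {system}\nUSER: {user}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```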
LeroyDyer/Mixtral_AI_Cyber_2.0
LeroyDyer
"2024-04-09T16:16:23Z"
1,458
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "128k_Context", "chemistry", "biology", "music", "code", "medical", "not-for-all-audiences", "text-generation-inference", "Cyber-Series", "custom_code", "en", "arxiv:2203.05482", "base_model:LeroyDyer/Mixtral_AI_128K_B", "base_model:LeroyDyer/Mixtral_BioMedical", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-19T05:43:27Z"
--- base_model: - LeroyDyer/Mixtral_AI_128K_B - LeroyDyer/Mixtral_BioMedical library_name: transformers tags: - mergekit - merge - 128k_Context - chemistry - biology - music - code - medical - not-for-all-audiences - text-generation-inference - Cyber-Series previous_Merges: - rvv-karma/BASH-Coder-Mistral-7B - Locutusque/Hercules-3.1-Mistral-7B - KoboldAI/Mistral-7B-Erebus-v3 - NSFW - Locutusque/Hyperion-2.1-Mistral-7B - Severian/Nexus-IKM-Mistral-7B-Pytorch - NousResearch/Hermes-2-Pro-Mistral-7B - mistralai/Mistral-7B-Instruct-v0.2 - Nitral-AI/ProdigyXBioMistral_7B - Nitral-AI/Infinite-Mika-7b - Nous-Yarn-Mistral-7b-128k - yanismiraoui/Yarn-Mistral-7b-128k-sharded license: apache-2.0 language: - en metrics: - accuracy - brier_score - code_eval pipeline_tag: text-generation --- # LeroyDyer/Mixtral_AI_Cyber_2.0 This is also a key base marker for the 128k models, and a very good model in its own right. This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged By re-aligning the LLM back with the base model (it does not seem to merge cleanly with the original Mistral model), I have discovered through merging that it is best to make a base model first; each model you merge should then be merged with YOUR NEW base model. Keeping these individual merges, which are all good merge candidates for the super model, also makes it easier to track down a misaligned model responsible for any offensive or corrupt responses. The components learned from each model can often be traced back to their training process. E.g. YaRN (https://github.com/jquesnelle/yarn) to extend the context length; e.g. function calling (https://github.com/NousResearch/Hermes-Function-Calling/tree/main/chat_templates). # KEY MERGES ## Nous-Yarn-Mistral-7b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 1500 steps using the YaRN extension method. It is an extension of Mistral-7B-v0.1 and supports a 128k token context window. ## Severian/Nexus-IKM-Mistral-7B-Pytorch has been fine-tuned until convergence using a novel Phased Training approach on this unique dataset, which resulted in the model demonstrating greater capability for giving rise to insights and problem-solving in complex, multi-disciplinary settings. This involves improved ability in drawing links between different pieces of knowledge, reasoning through complex scenarios, and proposing innovative solutions that cut across various domains, including science, technology, environmental studies, and humanities.
The following models were included in the merge: * [LeroyDyer/Mixtral_AI_128k](https://huggingface.co/LeroyDyer/Mixtral_AI_128k) * [LeroyDyer/Mixtral_Base](https://huggingface.co/LeroyDyer/Mixtral_Base) # LOAD MODEL ```python %pip install llama-index-embeddings-huggingface %pip install llama-index-llms-llama-cpp !pip install llama-index from llama_index.core import SimpleDirectoryReader, VectorStoreIndex from llama_index.llms.llama_cpp import LlamaCPP from llama_index.llms.llama_cpp.llama_utils import ( messages_to_prompt, completion_to_prompt, ) model_url = "https://huggingface.co/LeroyDyer/Mixtral_AI_128k_7b/resolve/main/Mixtral_AI_128k_7b_q8_0.gguf" llm = LlamaCPP( # You can pass in the URL to a GGUF model to download it automatically model_url=model_url, # optionally, you can set the path to a pre-downloaded model instead of model_url model_path=None, temperature=0.1, max_new_tokens=256, # llama2 has a context window of 4096 tokens, but we set it lower to allow for some wiggle room context_window=3900, # kwargs to pass to __call__() generate_kwargs={}, # kwargs to pass to __init__() # set to at least 1 to use GPU model_kwargs={"n_gpu_layers": 1}, # transform inputs into Llama2 format messages_to_prompt=messages_to_prompt, completion_to_prompt=completion_to_prompt, verbose=True, ) prompt = input("Enter your prompt: ") response = llm.complete(prompt) print(response.text) ``` ``` pip install transformers==4.34.0 pip install flash-attn==2.3.1.post1 --no-build-isolation pip install accelerate==0.23.0 ``` ## METHOD 2 ``` from transformers import AutoModelForCausalLM, AutoTokenizer import transformers import torch model_id = "LeroyDyer/Mixtral_AI_128K_B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, use_flash_attention_2=True, device_map="auto", trust_remote_code=True) pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer, ) prompt = "<|prompter|>What are the main challenges to support a long context for LLM?</s><|assistant|>" sequences = pipeline( prompt, max_new_tokens=400, do_sample=False, return_full_text=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, ) for seq in sequences: print(f"{seq['generated_text']}") ``` --- # LeroyDyer/Mixtral_AI_Cyber_2.0 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method. ### Models Merged The following models were included in the merge: * [LeroyDyer/Mixtral_AI_128K_B](https://huggingface.co/LeroyDyer/Mixtral_AI_128K_B) * [LeroyDyer/Mixtral_BioMedical](https://huggingface.co/LeroyDyer/Mixtral_BioMedical) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: LeroyDyer/Mixtral_AI_128K_B parameters: weight: 0.9128 - model: LeroyDyer/Mixtral_BioMedical parameters: weight: 0.3312 merge_method: linear dtype: float16 ```
mradermacher/MedLLaMA-3-GGUF
mradermacher
"2024-05-28T01:30:29Z"
1,458
0
transformers
[ "transformers", "gguf", "llama-3-8b", "sft", "medical", "en", "ar", "dataset:lighteval/med_mcqa", "dataset:qiaojin/PubMedQA", "dataset:bigbio/med_qa", "base_model:Reverb/MedLLaMA-3", "license:cc-by-nc-nd-4.0", "endpoints_compatible", "region:us" ]
null
"2024-05-28T01:02:03Z"
--- base_model: Reverb/MedLLaMA-3 datasets: - lighteval/med_mcqa - qiaojin/PubMedQA - bigbio/med_qa language: - en - ar library_name: transformers license: cc-by-nc-nd-4.0 quantized_by: mradermacher tags: - llama-3-8b - sft - medical --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Reverb/MedLLaMA-3 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
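As a complement to the Usage note above, here is a minimal sketch of running one of the provided quants with `llama-cpp-python` (file choice and sampling settings are illustrative; download the GGUF first, e.g. with `huggingface-cli`):

```python
from llama_cpp import Llama

# Assumes MedLLaMA-3.Q4_K_M.gguf has already been downloaded from this repo.
llm = Llama(
    model_path="MedLLaMA-3.Q4_K_M.gguf",
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

output = llm(
    "Question: What is the first-line treatment for uncomplicated hypertension?\nAnswer:",
    max_tokens=200,
    temperature=0.2,
)
print(output["choices"][0]["text"])
```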
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
TheBloke/Vigogne-2-13B-Instruct-GGUF
TheBloke
"2023-09-27T12:47:53Z"
1,457
3
transformers
[ "transformers", "gguf", "llama", "LLM", "llama-2", "text-generation", "fr", "base_model:bofenghuang/vigogne-2-13b-instruct", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-09-05T19:43:05Z"
--- language: - fr license: llama2 library_name: transformers tags: - LLM - llama - llama-2 model_name: Vigogne 2 13B Instruct base_model: bofenghuang/vigogne-2-13b-instruct inference: false model_creator: bofenghuang model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Vigogne 2 13B Instruct - GGUF - Model creator: [bofenghuang](https://huggingface.co/bofenghuang) - Original model: [Vigogne 2 13B Instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct) <!-- description start --> ## Description This repo contains GGUF format model files for [bofenghuang's Vigogne 2 13B Instruct](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF) * [bofenghuang's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/bofenghuang/vigogne-2-13b-instruct) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [vigogne-2-13b-instruct.Q2_K.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [vigogne-2-13b-instruct.Q3_K_S.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [vigogne-2-13b-instruct.Q3_K_M.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [vigogne-2-13b-instruct.Q3_K_L.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [vigogne-2-13b-instruct.Q4_0.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [vigogne-2-13b-instruct.Q4_K_S.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [vigogne-2-13b-instruct.Q4_K_M.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [vigogne-2-13b-instruct.Q5_0.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [vigogne-2-13b-instruct.Q5_K_S.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [vigogne-2-13b-instruct.Q5_K_M.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [vigogne-2-13b-instruct.Q6_K.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [vigogne-2-13b-instruct.Q8_0.gguf](https://huggingface.co/TheBloke/Vigogne-2-13B-Instruct-GGUF/blob/main/vigogne-2-13b-instruct.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Vigogne-2-13B-Instruct-GGUF and below it, a specific filename to download, such as: vigogne-2-13b-instruct.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Vigogne-2-13B-Instruct-GGUF vigogne-2-13b-instruct.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Vigogne-2-13B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Vigogne-2-13B-Instruct-GGUF vigogne-2-13b-instruct.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m vigogne-2-13b-instruct.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Vigogne-2-13B-Instruct-GGUF", model_file="vigogne-2-13b-instruct.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: bofenghuang's Vigogne 2 13B Instruct <p align="center" width="100%"> <img src="https://huggingface.co/bofenghuang/vigogne-2-13b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;"> </p> # Vigogne-2-13B-Instruct: A Llama-2 based French instruction-following model Vigogne-2-13B-Instruct is a model based on [LLaMA-2-13B](https://ai.meta.com/llama) that has been fine-tuned to follow French instructions. For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne **Usage and License Notices**: Vigogne-2-13B-Instruct follows the same usage policy as Llama-2, which can be found [here](https://ai.meta.com/llama/use-policy). ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig from vigogne.preprocess import generate_instruct_prompt model_name_or_path = "bofenghuang/vigogne-2-13b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto") user_query = "Expliquez la différence entre DoS et phishing." prompt = generate_instruct_prompt(user_query) input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=0.1, do_sample=True, repetition_penalty=1.0, max_new_tokens=512, ), return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) print(generated_text) ``` You can also infer this model by using the following Google Colab Notebook. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Example Outputs *todo* ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers. <!-- original-model-card end -->
Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
Weyaxi
"2024-04-26T16:19:27Z"
1,457
26
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-03T11:56:58Z"
--- license: apache-2.0 tags: - mistral datasets: - Open-Orca/SlimOrca model-index: - name: OpenHermes-2.5-neural-chat-7b-v3-2-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 66.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.11 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 63.59 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 56.79 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6468ce47e134d050a58aa89c/x44nNbPTpv0zGTqA1Jb2q.png) Merge of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2) using ties merge. _Note: [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) merge version is available [here](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B/)_ ### *Weights* - [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5 - [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): 0.3 ### *Density* - [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5 - [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): 0.5 # Prompt Templates You can use these prompt templates, but I recommend using ChatML. 
### ChatML [(OpenHermes-2.5-Mistral-7B)](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): ``` <|im_start|>system {system}<|im_end|> <|im_start|>user {user}<|im_end|> <|im_start|>assistant {assistant}<|im_end|> ``` ### [neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): ``` ### System: {system} ### User: {user} ### Assistant: ``` # Quantized versions Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke). ##### GPTQ - [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GPTQ) ##### GGUF - [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF) ##### AWQ - [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-2-7B) | Metric |Value| |---------------------------------|----:| |Avg. |68.71| |AI2 Reasoning Challenge (25-Shot)|66.38| |HellaSwag (10-Shot) |84.11| |MMLU (5-Shot) |62.84| |TruthfulQA (0-shot) |63.59| |Winogrande (5-shot) |78.53| |GSM8k (5-shot) |56.79| If you would like to support me: [☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
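A minimal `transformers` sketch of driving the ChatML template above (the prompt content and sampling settings are illustrative; the ChatML string is built by hand so the example does not depend on a bundled chat template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build the ChatML prompt shown above by hand.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a haiku about sauerkraut.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```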
legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF
legraphista
"2024-06-17T21:06:55Z"
1,457
0
gguf
[ "gguf", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "base_model:deepseek-ai/DeepSeek-Coder-V2-Lite-Base", "license:other", "region:us" ]
text-generation
"2024-06-17T19:26:29Z"
--- base_model: deepseek-ai/DeepSeek-Coder-V2-Lite-Base inference: false library_name: gguf license: other license_link: LICENSE license_name: deepseek-license pipeline_tag: text-generation quantized_by: legraphista tags: - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # DeepSeek-Coder-V2-Lite-Base-IMat-GGUF _Llama.cpp imatrix quantization of deepseek-ai/DeepSeek-Coder-V2-Lite-Base_ Original Model: [deepseek-ai/DeepSeek-Coder-V2-Lite-Base](https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3166](https://github.com/ggerganov/llama.cpp/releases/tag/b3166) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf) | Q8_0 | 16.70GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q6_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q6_K.gguf) | Q6_K | 14.07GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q4_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q4_K.gguf) | Q4_K | 10.36GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q3_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q3_K.gguf) | Q3_K | 8.13GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q2_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q2_K.gguf) | Q2_K | 6.43GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [DeepSeek-Coder-V2-Lite-Base.BF16.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.BF16.gguf) | BF16 | 31.42GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.FP16.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.FP16.gguf) | F16 | 31.42GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf) | Q8_0 | 
16.70GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q6_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q6_K.gguf) | Q6_K | 14.07GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q5_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q5_K.gguf) | Q5_K | 11.85GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q5_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q5_K_S.gguf) | Q5_K_S | 11.14GB | ✅ Available | ⚪ Static | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q4_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q4_K.gguf) | Q4_K | 10.36GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q4_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q4_K_S.gguf) | Q4_K_S | 9.53GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ4_NL.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ4_NL.gguf) | IQ4_NL | 8.91GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ4_XS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ4_XS.gguf) | IQ4_XS | 8.57GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q3_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q3_K.gguf) | Q3_K | 8.13GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q3_K_L.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q3_K_L.gguf) | Q3_K_L | 8.46GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q3_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q3_K_S.gguf) | Q3_K_S | 7.49GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ3_M.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ3_M.gguf) | IQ3_M | 7.55GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ3_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ3_S.gguf) | IQ3_S | 7.49GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ3_XS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ3_XS.gguf) | IQ3_XS | 7.12GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ3_XXS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ3_XXS.gguf) | IQ3_XXS | 6.96GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q2_K.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q2_K.gguf) | Q2_K | 6.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.Q2_K_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.Q2_K_S.gguf) | Q2_K_S | 6.46GB | ✅ Available | 🟢 IMatrix | 📦 No | 
[DeepSeek-Coder-V2-Lite-Base.IQ2_M.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ2_M.gguf) | IQ2_M | 6.33GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ2_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ2_S.gguf) | IQ2_S | 6.01GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ2_XS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ2_XS.gguf) | IQ2_XS | 5.97GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ2_XXS.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ2_XXS.gguf) | IQ2_XXS | 5.64GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ1_M.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ1_M.gguf) | IQ1_M | 5.24GB | ✅ Available | 🟢 IMatrix | 📦 No | [DeepSeek-Coder-V2-Lite-Base.IQ1_S.gguf](https://huggingface.co/legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base.IQ1_S.gguf) | IQ1_S | 4.99GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF --include "DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/DeepSeek-Coder-V2-Lite-Base-IMat-GGUF --include "DeepSeek-Coder-V2-Lite-Base.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <|begin▁of▁sentence|>User: {user_prompt} Assistant: {assistant_response}<|end▁of▁sentence|>User: {next_user_prompt} ``` ### Chat template with system prompt ``` <|begin▁of▁sentence|>{system_prompt} User: {user_prompt} Assistant: {assistant_response}<|end▁of▁sentence|>User: {next_user_prompt} ``` ### Llama.cpp ``` llama.cpp/main -m DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `DeepSeek-Coder-V2-Lite-Base.Q8_0`) 3. Run `gguf-split --merge DeepSeek-Coder-V2-Lite-Base.Q8_0/DeepSeek-Coder-V2-Lite-Base.Q8_0-00001-of-XXXXX.gguf DeepSeek-Coder-V2-Lite-Base.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
VAGOsolutions/SauerkrautLM-SOLAR-Instruct
VAGOsolutions
"2024-03-02T20:59:38Z"
1,456
44
transformers
[ "transformers", "safetensors", "llama", "text-generation", "finetune", "dpo", "Instruct", "augmentation", "german", "conversational", "en", "de", "dataset:argilla/distilabel-math-preference-dpo", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-20T00:49:21Z"
--- license: cc-by-nc-4.0 language: - en - de library_name: transformers pipeline_tag: text-generation tags: - finetune - dpo - Instruct - augmentation - german datasets: - argilla/distilabel-math-preference-dpo --- ![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/02/sauerkrautlm-solar-2.png "SauerkrautLM-SOLAR-Instruct") ## VAGO solutions SauerkrautLM-SOLAR-Instruct Introducing **SauerkrautLM-SOLAR-Instruct** – our Sauerkraut version of the powerful [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0), aligned with **DPO**. # Table of Contents 1. [Overview of all SauerkrautLM-SOLAR-Instruct models](#all-sauerkrautlm-solar-instruct-models) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training Dataset](#training-dataset) - [Data Contamination Test](#data-contamination-test-results) 3. [Evaluation](#evaluation) 4. [Disclaimer](#disclaimer) 5. [Contact](#contact) 6. [Collaborations](#collaborations) 7. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-SOLAR-Instruct Models | Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-SOLAR-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct/) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-SOLAR-Instruct-GPTQ) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-SOLAR-Instruct-GGUF) | [Link](https://huggingface.co/TheBloke/SauerkrautLM-SOLAR-Instruct-AWQ) | ## Model Details **SauerkrautLM-SOLAR-Instruct** - **Model Type:** SauerkrautLM-SOLAR-Instruct is a finetuned Model based on [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) - **Language(s):** English, German - **License:** cc-by-nc-4.0 - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) ### Training Dataset: SauerkrautLM-SOLAR-Instruct was trained with a mix of German data augmentation and translated data. It was aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, using parts of the SFT SauerkrautLM dataset as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers. We added additional **translated parts of the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** dataset (our dataset does not contain any TruthfulQA prompts - check the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).** We found that a simple translation of training data can lead to unnatural German phrasings. Data augmentation techniques were used to ensure grammatical and syntactical correctness and more natural German wording in our training data. We improved the German language skills of this model. Nevertheless, certain formulations may occur that are not entirely correct. ### Data Contamination Test Results Some models on the HuggingFace leaderboard had problems with wrong data getting mixed in. We checked our SauerkrautLM-DPO dataset with a special test [1] on this model as target model and upstage/SOLAR-10.7B-Instruct-v1.0 as reference model. The HuggingFace team used the same methods [2, 3]. Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results of HellaSwag and Winogrande will be added once [1] supports them.* | Dataset | ARC | MMLU | TruthfulQA | GSM8K | |------------------------------|-------|-------|-------|-------| | **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 | [1] https://github.com/swj0419/detect-pretrain-code-contamination [2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06 [3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230 ### Prompt Template: ``` ### System:\nDu sprichst grammatikalisch korrektes Deutsch auf höchstem Muttersprachler Niveau.\n### User:\n{user}\n\n### Assistant:\n{assistant} ``` *Prompt Example at Temp 0.5:* ``` ### User: Hello, how are you? ### Assistant: Hi there! I am an AI language model, so I don't have personal feelings or emotions in the traditional sense. However, I can assure you that my systems and processes are functioning well at this moment, allowing me to provide helpful responses for your queries. How may I assist you today? ``` ## Evaluation | Metric | Value | |-----------------------|---------------------------| | Avg. | 74.21 | | ARC (25-shot) | 70.82 | | HellaSwag (10-shot) | 88.63 | | MMLU (5-shot) | 66.2| | TruthfulQA (0-shot) | 71.95 | | Winogrande (5-shot) | 83.5 | | GSM8K (5-shot) | 64.14 | ![MT Bench First](https://vago-solutions.de/wp-content/uploads/2024/01/mtbenchfirst.png "SauerkrautLM-SOLAR-Instruct MT-Bench German First") ![MT Bench Second](https://vago-solutions.de/wp-content/uploads/2024/01/mtbenchsecond.png "SauerkrautLM-SOLAR-Instruct MT-Bench German Second") ![MT Bench Average](https://vago-solutions.de/wp-content/uploads/2024/01/mtbenchavg.png "SauerkrautLM-SOLAR-Instruct MT-Bench German Average") ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. We cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us. ## Acknowledgement Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to [upstage](https://huggingface.co/upstage) for providing the open source community with their latest technology! Many thanks to [TheBloke](https://huggingface.co/TheBloke) for the super fast quantizing of all of our models.
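A minimal `transformers` sketch of filling the prompt template above (prompt content and sampling settings are illustrative):

```python
import torch
from transformers import pipeline

# Fill the SauerkrautLM prompt template and generate a German answer.
generator = pipeline(
    "text-generation",
    model="VAGOsolutions/SauerkrautLM-SOLAR-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)

system = ("Du sprichst grammatikalisch korrektes Deutsch auf höchstem "
          "Muttersprachler Niveau.")
user = "Erkläre in zwei Sätzen, was ein Sprachmodell ist."
prompt = f"### System:\n{system}\n### User:\n{user}\n\n### Assistant:\n"

result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.5, return_full_text=False)
print(result[0]["generated_text"])
```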
mradermacher/Hare-1.1B-base-GGUF
mradermacher
"2024-06-18T14:04:11Z"
1,456
1
transformers
[ "transformers", "gguf", "Hare", "en", "dataset:cerebras/SlimPajama-627B", "dataset:HuggingFaceTB/cosmopedia", "base_model:LiteAI-Team/Hare-1.1B-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T19:22:56Z"
--- arxiv: 2406.11410 base_model: LiteAI-Team/Hare-1.1B-base datasets: - cerebras/SlimPajama-627B - HuggingFaceTB/cosmopedia language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - Hare --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/LiteAI-Team/Hare-1.1B-base <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.IQ3_XS.gguf) | IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.IQ3_M.gguf) | IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Hare-1.1B-base-GGUF/resolve/main/Hare-1.1B-base.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
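As a complement to the Usage note above, here is a minimal sketch of fetching one of the listed quants with `huggingface_hub` (the file choice is illustrative):

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed above into the current directory.
path = hf_hub_download(
    repo_id="mradermacher/Hare-1.1B-base-GGUF",
    filename="Hare-1.1B-base.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```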
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
beowolx/CodeNinja-1.0-OpenChat-7B
beowolx
"2023-12-22T21:03:44Z"
1,455
104
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "code", "text-generation-inference", "conversational", "en", "dataset:glaiveai/glaive-code-assistant-v2", "dataset:TokenBender/code_instructions_122k_alpaca_style", "doi:10.57967/hf/1535", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-20T20:28:01Z"
--- license: mit datasets: - glaiveai/glaive-code-assistant-v2 - TokenBender/code_instructions_122k_alpaca_style language: - en metrics: - code_eval pipeline_tag: text-generation tags: - code - text-generation-inference --- <p align="center"> <img width="700px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/5COagfF6EwrV4utZJ-ClI.png"> </p> <hr> # CodeNinja: Your Advanced Coding Assistant ## Overview CodeNinja is an enhanced version of the renowned model [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210). It has been fine-tuned through supervised fine-tuning on two expansive datasets encompassing over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine. Discover the quantized versions at: [beowolx/CodeNinja-1.0-OpenChat-7B-GGUF](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF). ### Key Features - **Expansive Training Database**: CodeNinja has been refined with datasets from [glaiveai/glaive-code-assistant-v2](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more. - **Flexibility and Scalability**: Available in a 7B model size, CodeNinja is adaptable for local runtime environments. - **Advanced Code Completion**: With a substantial context window size of 8192, it supports comprehensive project-level code completion. ## Prompt Format CodeNinja uses the same prompt structure as OpenChat 3.5; for best results, adhere to this format: ``` GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant: ``` 🚨 Important: Use `<|end_of_turn|>` as the end-of-generation token. **Adhering to this format is crucial for optimal results.** ## Usage Instructions ### Using LM Studio The simplest way to engage with CodeNinja is via the [quantized versions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF) on [LM Studio](https://lmstudio.ai/). Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this [gist](https://gist.github.com/beowolx/b219466681c02ff67baf8f313a3ad817).
### Using the Transformers Library ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Initialize the model model_path = "beowolx/CodeNinja-1.0-OpenChat-7B" model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto") # Load the OpenChat tokenizer tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True) def generate_one_completion(prompt: str): messages = [ {"role": "user", "content": prompt}, {"role": "assistant", "content": ""} # Model response placeholder ] # Generate token IDs using the chat template input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True) # Produce completion generate_ids = model.generate( torch.tensor([input_ids]).to("cuda"), max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id ) # Process the completion completion = tokenizer.decode(generate_ids[0], skip_special_tokens=True) completion = completion.split("\n\n\n")[0].strip() return completion ``` ## License CodeNinja is licensed under the MIT License, with model usage subject to the Model License. ## Contact For queries or support, please open an issue in the repository.
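For completeness, a quick smoke test for the `generate_one_completion` helper defined in the usage section above (the prompt is just an illustrative example):

```python
if __name__ == "__main__":
    print(generate_one_completion("Write a Python function that returns the n-th Fibonacci number."))
```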
maldv/winter-garden-7b-alpha
maldv
"2024-03-14T21:31:44Z"
1,455
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "conversational", "multi-task", "base_model:paulml/OmniBeagleSquaredMBX-v3-7B", "base_model:ZySec-AI/ZySec-7B-v1", "base_model:liminerity/Omningotex-7b-slerp", "base_model:localfultonextractor/Erosumika-7B", "base_model:KatyTheCutie/LemonadeRP-4.5.3", "base_model:cgato/Thespis-Krangled-7b", "base_model:CorticalStack/pastiche-crown-clown-7b-dare", "base_model:snorkelai/Snorkel-Mistral-PairRM-DPO", "base_model:MTSAIR/multi_verse_model", "license:cc-by-nc-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T05:42:32Z"
--- license: cc-by-nc-4.0 tags: - merge - conversational - multi-task pipeline_tag: text-generation base_model: - paulml/OmniBeagleSquaredMBX-v3-7B - ZySec-AI/ZySec-7B-v1 - liminerity/Omningotex-7b-slerp - localfultonextractor/Erosumika-7B - KatyTheCutie/LemonadeRP-4.5.3 - cgato/Thespis-Krangled-7b - CorticalStack/pastiche-crown-clown-7b-dare - snorkelai/Snorkel-Mistral-PairRM-DPO - MTSAIR/multi_verse_model model-index: - name: winter-garden-7b-alpha - "Smart Assistant" results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.19 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.36 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.2 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 50.94 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 80.35 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 54.44 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maldv/winter-garden-7b-alpha name: Open LLM Leaderboard --- # Winter Garden 7B - α - "Smart Assistant" It was mentioned that we are in the open ai dark winter; so I thought I would make myself a nice winter garden. ## An experiment I've merged four partitions successfully in the past, so lets go for 9! I started with: * Mistral-7B-v0.1 and merged in * OmniBeagleSquaredMBX-v3-7B * ZySec-7B-v1 * Omningotex-7b-slerp * Erosumika-7B * LemonadeRP-4.5.3 * Thespis-Krangled-7b * pastiche-crown-clown-7b-dare * Snorkel-Mistral-PairRM-DPO * multi_verse_model ### 9-partition merge All of the layers were partitioned in to 9 random bins. Alternating models were slerped at [0...1], and [1...0] gradients; except attention, which was slerped at 0.03. This means that the model is still predominantly ordered around base mistral - including half of the input and output layers, and 28% of attention. ### Other Includes fast tokenizer. 
## Chat Template I included a conversational chat template that takes "name", "to" (optional), and "content" as the turns. It is designed to follow a transcript-style chat, which is used by some of the models. This type of use case works best when you outline a scene and create a character card. ``` ### {% title %} {% metadata %} USER: Hello ASSISTANT: Hi, how are you? ``` It leans toward being a coder when given an `### Instruction`, follows `<s>[INST][/INST]`, and likes `<|user|>`, `<|assistant|>` as well. A quite cheery and intelligent model. Very good with science and math, but still capable of a decent amount of creativity for a 7b model. ## Scores Metric | Score ---|--- Average | 66.91 ARC | 65.19 HellaSwag | 85.36 MMLU | 65.2 TruthfulQA | 50.94 Winogrande | 80.35 GSM8K | 54.44 [Details](https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__winter-garden-7b-alpha)
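As an illustration of the transcript style described in the Chat Template section above, here is a hedged sketch that lays out a scene and alternating turns by hand and generates with plain transformers; the scene text and sampling settings are assumptions, and the model's built-in chat template may format things slightly differently.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maldv/winter-garden-7b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Transcript-style prompt: a title, a line of scene metadata, then named turns.
prompt = (
    "### A Winter Garden\n"
    "Two friends talk over tea in a greenhouse.\n"
    "USER: Hello\n"
    "ASSISTANT: Hi, how are you?\n"
    "USER: Tell me about the plants in here.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```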
crusoeai/dolphin-2.9.1-llama-3-8b-GGUF
crusoeai
"2024-05-11T01:35:52Z"
1,455
7
null
[ "gguf", "region:us" ]
null
"2024-05-10T18:08:55Z"
Entry not found
Kquant03/NeuralTrix-7B-dpo-laser
Kquant03
"2024-02-17T07:49:30Z"
1,454
6
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "mlabonne/OmniBeagle-7B", "flemmingmiguel/MBX-7B-v3", "AiMavenAi/AiMaven-Prometheus", "base_model:mlabonne/OmniBeagle-7B", "base_model:flemmingmiguel/MBX-7B-v3", "base_model:AiMavenAi/AiMaven-Prometheus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-14T12:48:52Z"
--- tags: - merge - mergekit - lazymergekit - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 - AiMavenAi/AiMaven-Prometheus base_model: - mlabonne/OmniBeagle-7B - flemmingmiguel/MBX-7B-v3 - AiMavenAi/AiMaven-Prometheus license: apache-2.0 --- # NeuralTrix-7B-v1 NeuralTrix-7B-v1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) * [AiMavenAi/AiMaven-Prometheus](https://huggingface.co/AiMavenAi/AiMaven-Prometheus) It was then trained with DPO using: * https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1 ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: mlabonne/OmniBeagle-7B parameters: density: 0.65 weight: 0.4 - model: flemmingmiguel/MBX-7B-v3 parameters: density: 0.6 weight: 0.35 - model: AiMavenAi/AiMaven-Prometheus parameters: density: 0.6 weight: 0.35 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "CultriX/NeuralTrix-7B-v1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
BarraHome/Mistroll-7B-v0.2-4bit
BarraHome
"2024-02-21T22:58:55Z"
1,454
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:BarraHome/Mistroll-7B-v0.1-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-02-21T22:53:57Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: BarraHome/Mistroll-7B-v0.1-4bit --- # Uploaded model - **Developed by:** BarraHome - **License:** apache-2.0 - **Finetuned from model :** BarraHome/Mistroll-7B-v0.1-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
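Since the card does not include inference code, here is a hedged sketch using Unsloth's loader, which matches how the model was trained; the sequence length and prompt are assumptions, and a plain transformers + bitsandbytes load should work as well.

```python
from unsloth import FastLanguageModel

# Load the 4-bit checkpoint (max_seq_length is an assumed value).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BarraHome/Mistroll-7B-v0.2-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer(["Hello, how are you?"], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```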
Rakuten/RakutenAI-7B
Rakuten
"2024-06-07T08:58:34Z"
1,454
40
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "arxiv:2403.15484", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-18T06:45:28Z"
--- license: apache-2.0 --- # RakutenAI-7B ## Model Description RakutenAI-7B is a systematic initiative that brings the latest technologies to the world of Japanese LLMs. RakutenAI-7B achieves the best scores on the Japanese language understanding benchmarks while maintaining a competitive performance on the English test sets among similar models such as OpenCalm, Elyza, Youri, Nekomata and Swallow. RakutenAI-7B leverages the Mistral model architecture and is based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) pre-trained checkpoint, exemplifying a successful retrofitting of the pre-trained model weights. Moreover, we extend Mistral's vocabulary from 32k to 48k to offer a better character-per-token rate for Japanese. *The technical report can be accessed at [arXiv](https://arxiv.org/abs/2403.15484).* *If you are looking for an instruction-tuned model, check [RakutenAI-7B-instruct](https://huggingface.co/Rakuten/RakutenAI-7B-instruct)*. *If you are looking for a chat-tuned model, check [RakutenAI-7B-chat](https://huggingface.co/Rakuten/RakutenAI-7B-chat)*. ## Model Evaluation Results | Model Name | 7-Avg. excl. XLSum-ja | Avg. | JCS | JNLI | MARC-ja | JSQuAD | Jaqket v2 | XLSum-ja | xWino | MGSM | |-------------------------------|:--------:|:-----:|:-------:|:-------:|:-------:|:-------:|:---------:|:--------:|:------:|:-------:| | | | | accuracy | accuracy | accuracy | exact-match | exact-match | rouge-2 | accuracy | accuracy | | | | | 3-shots | 3-shots | 3-shots | 2-shots | 1-shot | 1-shot | 0-shot | 5-shots | | rakuten-ai-7b | 69.80 | 62.83 | 84.27 | 48.69 | 96.29 | 79.09 | 80.67 | 14.08 | 77.16 | 22.40 | | nekomata-7b | 66.01 | 58.83 | 85.43 | 40.14 | 96.80 | 76.29 | 71.99 | 8.59 | 73.83 | 17.60 | | japanese-stablelm-base-gamma-7b | 64.83 | 59.12 | 80.07 | 14.71 | 92.41 | 81.38 | 85.05 | 19.16 | 82.59 | 17.60 | | youri-7b | 62.71 | 56.90 | 76.94 | 51.11 | 90.96 | 57.45 | 78.09 | 16.27 | 78.00 | 6.40 | | swallow-7b | 60.86 | 55.18 | 78.91 | 15.16 | 90.27 | 73.28 | 80.24 | 15.41 | 76.96 | 11.20 | | elyza-japanese-Llama-2-7b | 60.24 | 53.26 | 75.60 | 50.74 | 87.51 | 71.48 | 57.56 | 4.40 | 71.22 | 7.60 | | elyza-japanese-Llama-2-7b-fast | 58.31 | 51.34 | 71.49 | 45.77 | 86.61 | 70.91 | 64.18 | 2.54 | 61.63 | 7.60 | | open-calm-7b | 45.27 | 39.67 | 62.65 | 31.92 | 85.37 | 38.05 | 33.42 | 0.45 | 65.07 | 0.40 | <div style="text-align: center;">Table1: RakutenAI-7B foundation model performance on Japanese LM-Harness metrics in comparison with other models.</div> Our model achieves the highest average score, more than 3 points ahead of the next best model. The models are sorted by 7-Avg. We use the following commit https://github.com/Stability-AI/lm-evaluation-harness/tree/0fa86429679f521161d5b81a94c0c385e0a0976d for Japanese LM-Harness with v0.3 prompt version. | Model Name | Avg. 
| ARC | HellaSwag | MMLU | TruthfulQA | |---------------------------------|:----------------:|:------------------------:|:------------------------:|:-----------------------:|:-----------------------:| | | | accuracy | accuracy | accuracy | accuracy | | | | 25-shots | 10-shots | 5-shots | 6-shots | | rakuten-ai-7b | 60.50 | 60.24 | 82.20 | 61.31 | 38.25 | | japanese-stablelm-base-gamma-7b | 56.08 | 50.60 | 77.43 | 54.99 | 41.30 | | elyza-japanese-Llama-2-7b | 52.76 | 51.62 | 76.54 | 44.85 | 38.02 | | elyza-japanese-Llama-2-7b-fast | 52.07 | 51.79 | 75.46 | 44.41 | 36.63 | | nekomata-7b | 51.97 | 47.35 | 72.78 | 48.38 | 39.38 | | youri-7b | 50.60 | 49.15 | 75.02 | 42.36 | 35.89 | | swallow-7b | 49.90 | 47.35 | 72.20 | 39.36 | 40.68 | | open-calm-7b | 29.87 | 20.56 | 31.01 | 23.73 | 44.16 | <div style="text-align: center;">Table2: RakutenAI-7B foundation model performance on English LM-Harness metrics in comparison with other models.</div> Our model achieves the highest average score, more than 4 points ahead of the next best model. We use the following commit for English LM-Harness https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463. An independent evaluation by Kamata et.al. for [Nejumi LLMリーダーボード Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-Neo--Vmlldzo2MTkyMTU0#総合評価) using a weighted average of [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) and [Japanese MT-bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge) also confirms the highest performance of chat/instruct versions of RakutenAI-7B among Open LLMs of similar sizes, with a score of 0.393/0.331 respectively, as of 22nd March 2024. ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "Rakuten/RakutenAI-7B" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto") model.eval() requests = [ "南硫黄島原生自然環境保全地域は、自然", "The capybara is a giant cavy rodent", ] for req in requests: input_ids = tokenizer.encode(req, return_tensors="pt").to(device=model.device) tokens = model.generate( input_ids, max_new_tokens=256, do_sample=True, repetition_penalty=1.1, pad_token_id=tokenizer.eos_token_id, ) out = tokenizer.decode(tokens[0], skip_special_tokens=True) print("INPUT:\n" + req) print("OUTPUT:\n" + out) print() print() ``` ## Model Details * **Developed by**: [Rakuten Group, Inc.](https://ai.rakuten.com/) * **Language(s)**: Japanese, English * **License**: This model is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0). ### Limitations and Bias The suite of RakutenAI-7B models is capable of generating human-like text on a wide range of topics. However, like all LLMs, they have limitations and can produce biased, inaccurate, or unsafe outputs. Please exercise caution and judgement while interacting with them. 
## Citation For citing our work on the suite of RakutenAI-7B models, please use: ``` @misc{rakutengroup2024rakutenai7b, title={RakutenAI-7B: Extending Large Language Models for Japanese}, author={{Rakuten Group, Inc.} and Aaron Levine and Connie Huang and Chenguang Wang and Eduardo Batista and Ewa Szymanska and Hongyi Ding and Hou Wei Chou and Jean-François Pessiot and Johanes Effendi and Justin Chiu and Kai Torben Ohlhus and Karan Chopra and Keiji Shinzato and Koji Murakami and Lee Xiong and Lei Chen and Maki Kubota and Maksim Tkachenko and Miroku Lee and Naoki Takahashi and Prathyusha Jwalapuram and Ryutaro Tatsushima and Saurabh Jain and Sunil Kumar Yadav and Ting Cai and Wei-Te Chen and Yandi Xia and Yuki Nakayama and Yutaka Higashiyama}, year={2024}, eprint={2403.15484}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
theprint/phi-3-mini-4k-python
theprint
"2024-06-05T22:32:16Z"
1,454
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "dataset:iamtarun/python_code_instructions_18k_alpaca", "dataset:ajibawa-2023/Python-Code-23k-ShareGPT", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-03T06:36:55Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit datasets: - iamtarun/python_code_instructions_18k_alpaca - ajibawa-2023/Python-Code-23k-ShareGPT pipeline_tag: text-generation --- # Uploaded model - **Developed by:** theprint - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
brucethemoose/Yi-34B-200K-DARE-merge-v7
brucethemoose
"2024-03-11T20:05:50Z"
1,453
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "Yi", "en", "arxiv:2311.03099", "arxiv:2306.01708", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-08T16:33:39Z"
--- language: - en license: other library_name: transformers tags: - mergekit - merge - Yi license_name: yi-license license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE base_model: [] model-index: - name: Yi-34B-200K-DARE-merge-v7 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 68.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 85.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 77.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 58.9 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7 name: Open LLM Leaderboard --- # Possibly made obsolete by: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8 # Yi 34B 200K DARE Merge v7 A merge of several Yi 34B 200K models using the new DARE Ties method via mergekit. The goal is to create a merge model that excels at 32K+ context performance. ## Prompt template: Orca-Vicuna ``` SYSTEM: {system_message} USER: {prompt} ASSISTANT: ``` It might recognize ChatML, and possibly Alpaca-like formats. Raw prompting as described here is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/ ## Running Being a Yi model, try running a lower temperature with 0.02-0.06 MinP, a little repetition penalty, maybe mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature + MinP to cull the huge vocabulary. 24GB GPUs can efficiently run Yi-34B-200K models at **45K-90K context** with exllamav2, and performant UIs like [exui](https://github.com/turboderp/exui). 
I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). 16GB GPUs can still run the high context with aggressive quantization. To load/train this in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a lower value than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends like exllamav2 or unsloth. ## Testing Notes See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5#testing-notes A "4k" merge model was created to try and extend the context of SUS Chat and DPO-bagel before adding them to the merge: https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test In addition, the weight gradients are biased towards Vicuna-format models in the first few layers to try and "emphasize" the Orca-Vicuna prompt template. How sucessful this is remains to be seen. ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama as a base. ### Models Merged The following models were included in the merge: * https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat * https://huggingface.co/jondurbin/bagel-34b-v0.2 * https://huggingface.co/NousResearch/Nous-Capybara-34B * https://huggingface.co/migtissera/Tess-M-Creative-v1.0 * https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test * https://huggingface.co/Mihaiii/Pallas-0.5 * https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k * https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2 * https://huggingface.co/migtissera/Tess-34B-v1.4 * https://huggingface.co/SUSTech/SUS-Chat-34B * https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2 * https://huggingface.co/chargoddard/Yi-34B-200K-Llama * https://huggingface.co/chargoddard/Yi-34B-Llama ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama # No parameters necessary for base model - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4 parameters: weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125] density: 0.59 - model: /home/alpha/Models/Raw/Mihaiii_Pallas-0.5 parameters: weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125] density: 0.59 - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k parameters: weight: [0.02, 0.106, 0.106, 0.106, 0.106, 0.106] density: 0.59 - model: /home/alpha/Storage/Models/Raw/jondurbin_bagel-34b-v0.2 #Only the SFT in the main merge since the DPO version seems to have no long context ability at all parameters: weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100] density: 0.4 - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat parameters: weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100] density: 0.59 #- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k # Dolphin 200K seems to be funky according to multiple leaderboards and perplexity tests? 
# parameters: # weight: 0.15 # density: 0.6 - model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2 parameters: weight: [0.02, 0.110, 0.110, 0.110, 0.110, 0.110] density: 0.59 - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B parameters: weight: [0.22, 0.126, 0.126, 0.126, 0.126, 0.126] density: 0.59 - model: /home/alpha/Storage/Models/Raw/4kmerge parameters: weight: [0.02, 0.108, 0.108, 0.108, 0.108, 0.108] density: 0.5 - model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0 parameters: weight: [0.22, 0.100, 0.100, 0.100, 0.100, 0.10] density: 0.59 merge_method: dare_ties tokenizer_source: union base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama parameters: int8_mask: true dtype: bfloat16 ``` The following config was used for the "4kmerge" model: ```yaml models: - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama # No parameters necessary for base model - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama parameters: weight: 0.5 density: 1 - model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B parameters: weight: 0.2 density: 0.12 - model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2 parameters: weight: 0.2 density: 0.15 - model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2 parameters: weight: 0.1 density: 0.12 merge_method: dare_ties tokenizer_source: union base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama parameters: int8_mask: true dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-merge-v7) | Metric |Value| |---------------------------------|----:| |Avg. |73.12| |AI2 Reasoning Challenge (25-Shot)|68.09| |HellaSwag (10-Shot) |85.99| |MMLU (5-Shot) |77.30| |TruthfulQA (0-shot) |58.90| |Winogrande (5-shot) |83.11| |GSM8k (5-shot) |65.35|
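Tying back to the prompt template and sampling notes above, here is a small sketch that builds an Orca-Vicuna prompt; the decoding values are only a hedged reading of the advice above (run cool, cull the vocabulary with MinP), and MinP/mirostat support depends on your backend.

```python
# Build an Orca-Vicuna prompt as described in the "Prompt template" section above.
def orca_vicuna_prompt(system_message: str, user_prompt: str) -> str:
    return f"SYSTEM: {system_message}\nUSER: {user_prompt}\nASSISTANT:"

prompt = orca_vicuna_prompt(
    "You are a careful long-context assistant.",
    "Summarise the key plot points of the document above in five bullet points.",
)

# Suggested decoding, per the notes above: a low temperature, MinP around 0.02-0.06,
# and a touch of repetition penalty; pass these to whichever backend you use (exllamav2, etc.).
sampling = {"temperature": 0.5, "min_p": 0.05, "repetition_penalty": 1.05}
print(prompt)
print(sampling)
```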
vankhoa/test_phi2
vankhoa
"2024-02-27T21:24:54Z"
1,453
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-27T20:55:42Z"
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
meta-llama/CodeLlama-7b-Python-hf
meta-llama
"2024-03-14T18:40:57Z"
1,453
7
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "code", "arxiv:2308.12950", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T19:37:27Z"
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. USE POLICY ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. 
Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [[email protected]](mailto:[email protected]) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - code pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 --- # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. 
| | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) | | 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) | | 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) | | 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers: ```bash pip install transformers accelerate ``` Model capabilities: - [x] Code completion. - [ ] Infilling. - [ ] Instructions / chat. - [x] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the Python version of the 7B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. 
The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
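Returning to the Model Use section above, a minimal code-completion sketch with the transformers pipeline (the prompt and sampling values are illustrative):

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/CodeLlama-7b-Python-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Code completion: give the model the start of a Python function and let it continue.
result = pipe("def fibonacci(n):", max_new_tokens=128, do_sample=True, temperature=0.2, top_p=0.95)
print(result[0]["generated_text"])
```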
bartowski/gemma-1.1-7b-it-GGUF
bartowski
"2024-04-06T01:54:22Z"
1,453
11
transformers
[ "transformers", "gguf", "text-generation", "license:gemma", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-06T01:39:03Z"
--- library_name: transformers widget: - messages: - role: user content: How does the brain work? inference: parameters: max_new_tokens: 200 extra_gated_heading: Access Gemma on Hugging Face extra_gated_prompt: >- To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license license: gemma quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp Quantizations of gemma-1.1-7b-it Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2589">b2589</a> for quantization. Original model: https://huggingface.co/google/gemma-1.1-7b-it Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [gemma-1.1-7b-it-Q8_0.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q8_0.gguf) | Q8_0 | 9.07GB | Extremely high quality, generally unneeded but max available quant. | | [gemma-1.1-7b-it-Q6_K.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q6_K.gguf) | Q6_K | 7.01GB | Very high quality, near perfect, *recommended*. | | [gemma-1.1-7b-it-Q5_K_M.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q5_K_M.gguf) | Q5_K_M | 6.14GB | High quality, very usable. | | [gemma-1.1-7b-it-Q5_K_S.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q5_K_S.gguf) | Q5_K_S | 5.98GB | High quality, very usable. | | [gemma-1.1-7b-it-Q5_0.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q5_0.gguf) | Q5_0 | 5.98GB | High quality, older format, generally not recommended. | | [gemma-1.1-7b-it-Q4_K_M.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight. | | [gemma-1.1-7b-it-Q4_K_S.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q4_K_S.gguf) | Q4_K_S | 5.04GB | Slightly lower quality with small space savings. | | [gemma-1.1-7b-it-IQ4_NL.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-IQ4_NL.gguf) | IQ4_NL | 5.04GB | Decent quality, similar to Q4_K_S, new method of quanting, | | [gemma-1.1-7b-it-IQ4_XS.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-IQ4_XS.gguf) | IQ4_XS | 4.80GB | Decent quality, new method with similar performance to Q4. | | [gemma-1.1-7b-it-Q4_0.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q4_0.gguf) | Q4_0 | 5.01GB | Decent quality, older format, generally not recommended. | | [gemma-1.1-7b-it-Q3_K_L.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q3_K_L.gguf) | Q3_K_L | 4.70GB | Lower quality but usable, good for low RAM availability. | | [gemma-1.1-7b-it-Q3_K_M.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q3_K_M.gguf) | Q3_K_M | 4.36GB | Even lower quality. | | [gemma-1.1-7b-it-IQ3_M.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-IQ3_M.gguf) | IQ3_M | 4.10GB | Medium-low quality, new method with decent performance. 
| | [gemma-1.1-7b-it-IQ3_S.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-IQ3_S.gguf) | IQ3_S | 3.98GB | Lower quality, new method with decent performance, recommended over Q3 quants. | | [gemma-1.1-7b-it-Q3_K_S.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q3_K_S.gguf) | Q3_K_S | 3.98GB | Low quality, not recommended. | | [gemma-1.1-7b-it-Q2_K.gguf](https://huggingface.co/bartowski/gemma-1.1-7b-it-GGUF/blob/main/gemma-1.1-7b-it-Q2_K.gguf) | Q2_K | 3.48GB | Extremely low quality, *not* recommended. | Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
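To grab a single file programmatically rather than cloning the branch, a minimal `huggingface_hub` sketch is below; the Q4_K_M filename is just one pick from the table above, and a login (`huggingface-cli login`) may be needed if the license gate applies to this repo.

```python
from huggingface_hub import hf_hub_download

# Download one quant file (not the whole repository).
# Q4_K_M is only an example; substitute any filename from the table.
path = hf_hub_download(
    repo_id="bartowski/gemma-1.1-7b-it-GGUF",
    filename="gemma-1.1-7b-it-Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```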
Helsinki-NLP/opus-mt-is-en
Helsinki-NLP
"2023-08-16T11:58:30Z"
1,452
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "is", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-is-en * source languages: is * target languages: en * OPUS readme: [is-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/is-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/is-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.is.en | 51.4 | 0.672 |
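A minimal usage sketch with the `transformers` translation pipeline; the Icelandic example sentence is arbitrary, and the `sentencepiece` package must be installed for the Marian tokenizer.

```python
from transformers import pipeline

# Marian-based OPUS-MT checkpoints work directly with the translation pipeline.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-is-en")
print(translator("Veðrið er gott í dag.")[0]["translation_text"])
```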
vicgalle/gpt2-open-instruct-v1
vicgalle
"2023-04-12T18:47:37Z"
1,452
21
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "instruction-tuning", "alpaca", "en", "dataset:hakurei/open-instruct-v1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-08T07:59:46Z"
--- license: mit datasets: - hakurei/open-instruct-v1 language: - en tags: - instruction-tuning - gpt2 - alpaca --- # GPT2 finetuned on the open-instruct-v1 dataset The finetune used the Alpaca format for the prompts, so for better results you have to format the prompt using Alpaca's template. See the following examples below ## Examples ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Pretend you are an alien visiting Earth. Write three opinions you believe. ### Response: 1. Earth is a beautiful place. The sky is clear and the land is lush and diverse. 2. I believe that there is a species of extraterrestrial life living on the planet. These are known as 'gods' or 'living beings'. 3. I believe that there is a great conspiracy in place. The government is trying to control the planet and its resources. ``` ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Pretend you are an activist against guns. Write three opinions you believe. ### Response: 1. Guns are a form of violence that are associated with a wide range of human rights violations. 2. Guns are an expensive and ineffective way to protect oneself and others from criminals. 3. Guns can be used to commit crimes and violence. ``` ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Pretend you are a fanatical about guns. Write three opinions you believe. ### Response: 1. Guns are a wonderful weapon. They're incredibly dangerous and can hurt people. 2. Guns are a great way to protect yourself from being shot. They're designed to withstand extreme conditions and fire quickly. 3. Guns are a great way to store and transport large amounts of ammunition. They can be used for everything from self-defense to hunting. ```
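To apply that template programmatically, here is a minimal sketch; the instruction text and sampling settings are only illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="vicgalle/gpt2-open-instruct-v1")

# Wrap the user instruction in the same Alpaca template shown in the examples above.
template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = template.format(instruction="Give three tips for staying healthy.")
result = generator(prompt, max_new_tokens=128, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```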
TheBloke/Llama-2-13B-German-Assistant-v4-GGUF
TheBloke
"2023-09-27T12:47:43Z"
1,451
4
transformers
[ "transformers", "gguf", "llama", "en", "de", "dataset:flozi00/conversations", "base_model:flozi00/Llama-2-13b-german-assistant-v4", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-09-05T16:13:32Z"
--- language: - en - de license: llama2 datasets: - flozi00/conversations model_name: Llama 2 13B German Assistant v4 base_model: flozi00/Llama-2-13b-german-assistant-v4 inference: false model_creator: Florian Zimmermeister model_type: llama prompt_template: '### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama 2 13B German Assistant v4 - GGUF - Model creator: [Florian Zimmermeister](https://huggingface.co/flozi00) - Original model: [Llama 2 13B German Assistant v4](https://huggingface.co/flozi00/Llama-2-13b-german-assistant-v4) <!-- description start --> ## Description This repo contains GGUF format model files for [Florian Zimmermeister's Llama 2 13B German Assistant v4](https://huggingface.co/flozi00/Llama-2-13b-german-assistant-v4). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF) * [Florian Zimmermeister's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/flozi00/Llama-2-13b-german-assistant-v4) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant-Hashes ``` ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [llama-2-13b-german-assistant-v4.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q2_K.gguf) | Q2_K | 2 | 5.46 GB| 7.96 GB | smallest, significant quality loss - not recommended for most purposes | | [llama-2-13b-german-assistant-v4.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q3_K_S.gguf) | Q3_K_S | 3 | 5.70 GB| 8.20 GB | very small, high quality loss | | [llama-2-13b-german-assistant-v4.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q3_K_M.gguf) | Q3_K_M | 3 | 6.37 GB| 8.87 GB | very small, high quality loss | | [llama-2-13b-german-assistant-v4.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q3_K_L.gguf) | Q3_K_L | 3 | 6.97 GB| 9.47 GB | small, substantial quality loss | | [llama-2-13b-german-assistant-v4.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q4_0.gguf) | Q4_0 | 4 | 7.41 GB| 9.91 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [llama-2-13b-german-assistant-v4.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q4_K_S.gguf) | Q4_K_S | 4 | 7.45 GB| 9.95 GB | small, greater quality loss | | [llama-2-13b-german-assistant-v4.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q4_K_M.gguf) | Q4_K_M | 4 | 7.91 GB| 10.41 GB | medium, balanced quality - recommended | | [llama-2-13b-german-assistant-v4.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q5_0.gguf) | Q5_0 | 5 | 9.02 GB| 11.52 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [llama-2-13b-german-assistant-v4.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q5_K_S.gguf) | Q5_K_S | 5 | 9.02 GB| 11.52 GB | large, low quality loss - recommended | | [llama-2-13b-german-assistant-v4.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q5_K_M.gguf) | Q5_K_M | 5 | 9.27 GB| 11.77 GB | large, very low quality loss - recommended | | [llama-2-13b-german-assistant-v4.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q6_K.gguf) | Q6_K | 6 | 10.73 GB| 13.23 GB | very large, extremely low quality loss | | [llama-2-13b-german-assistant-v4.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-13B-German-Assistant-v4-GGUF/blob/main/llama-2-13b-german-assistant-v4.Q8_0.gguf) | Q8_0 | 8 | 13.89 GB| 16.39 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. 
<!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Llama-2-13B-German-Assistant-v4-GGUF and below it, a specific filename to download, such as: llama-2-13b-german-assistant-v4.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Llama-2-13B-German-Assistant-v4-GGUF llama-2-13b-german-assistant-v4.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Llama-2-13B-German-Assistant-v4-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-13B-German-Assistant-v4-GGUF llama-2-13b-german-assistant-v4.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m llama-2-13b-german-assistant-v4.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### User: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). 
## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-13B-German-Assistant-v4-GGUF", model_file="llama-2-13b-german-assistant-v4.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Florian Zimmermeister's Llama 2 13B German Assistant v4 ## This project is sponsored by [ ![PrimeLine](https://www.primeline-solutions.com/skin/frontend/default/theme566/images/primeline-solutions-logo.png) ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/) # Model Card This model is a finetuned version for German instructions and conversations in the style of Alpaca, using the "### Assistant:" and "### User:" markers. The dataset used is deduplicated and cleaned, with no code inside. The focus is on instruction following and conversational tasks. The model architecture is based on Llama version 2 with 13B parameters, trained on 100% renewable-energy-powered hardware. This work is contributed by the private research of [flozi00](https://huggingface.co/flozi00) Join discussions about German LLM research, and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q <!-- original-model-card end -->
KBlueLeaf/llama3-llava-next-8b-gguf
KBlueLeaf
"2024-05-18T17:13:57Z"
1,451
3
null
[ "gguf", "en", "region:us" ]
null
"2024-05-18T17:01:41Z"
--- language: - en --- # LLaMA3-LLaVA-NeXT-8B GGUF files GGUF version of https://huggingface.co/lmms-lab/llama3-llava-next-8b <br> Download mmproj-model-f16.gguf plus whichever llama3-llava-next-8b-*.gguf quant you want. Then follow the [readme from llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/README.md)<br> or the [readme from llama-cpp-python](https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#multi-modal-models).
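A rough llama-cpp-python sketch is below; the quant filename is an example, and `Llava15ChatHandler` is an assumption — check the linked readme for the handler that matches this LLaVA-NeXT variant.

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# mmproj-model-f16.gguf supplies the vision projector; the main GGUF is the language model.
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
llm = Llama(
    model_path="llama3-llava-next-8b-Q4_K_M.gguf",  # example quant name
    chat_handler=chat_handler,
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```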
ChrisWilson011016/5G6DSbmzoaN6Q1MErU52oW3t4HF8mRheMtapUy3woE2shFa2_vgg
ChrisWilson011016
"2024-03-04T18:58:04Z"
1,450
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:25:42Z"
Entry not found
TIGER-Lab/TIGERScore-13B
TIGER-Lab
"2024-03-13T19:42:30Z"
1,449
16
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text evaluation", "metric", "llm metric", "tigerscore", "text2text-generation", "en", "zh", "ru", "cs", "dataset:TIGER-Lab/MetricInstruct", "arxiv:2310.00752", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-11-26T22:37:16Z"
--- language: - en - zh - ru - cs license: mit tags: - text evaluation - metric - llm metric - llama - tigerscore datasets: - TIGER-Lab/MetricInstruct metrics: - pearsonr - spearmanr pipeline_tag: text2text-generation model-index: - name: TIGERScore-13B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 59.04 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TIGER-Lab/TIGERScore-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.79 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TIGER-Lab/TIGERScore-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 55.07 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TIGER-Lab/TIGERScore-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.38 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TIGER-Lab/TIGERScore-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TIGER-Lab/TIGERScore-13B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 28.73 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TIGER-Lab/TIGERScore-13B name: Open LLM Leaderboard --- ## TIGERScore [Project Page](https://tiger-ai-lab.github.io/TIGERScore/) | [Paper](https://arxiv.org/abs/2310.00752) | [Code](https://github.com/TIGER-AI-Lab/TIGERScore) | [🤗Demo](https://huggingface.co/spaces/TIGER-Lab/TIGERScore) | [🤗TIGERScore-7B](https://huggingface.co/TIGER-Lab/TIGERScore-7B-V1.2) | [🤗TIGERScore-13B](https://huggingface.co/TIGER-Lab/TIGERScore-13B-V1.2) ## Introduction We present TIGERScore, a **T**rained metric that follows **I**nstruction **G**uidance to perform **E**xplainable, and **R**eference-free evaluation over a wide spectrum of text generation tasks. Our metric is based on LLaMA-2, trained on our meticulously curated instruction-tuning dataset [MetricInstruct](https://huggingface.co/datasets/TIGER-Lab/MetricInstruct) which covers 6 text generation tasks and 23 text generation datasets. Existing automatic metrics are lagging and suffer from issues like 1) **Dependency on references**, 2) **Limited to specific domains**, 3) **Lack of attribution**. Contrary to them, TIGERScore is designed to be driven by natural language instruction and provide detailed error analysis to pinpoint the mistakes in the generated text. 
Specifically, TIGERScore takes an instruction, an associated input context along with a hypothesis output that might contain errors. Then, TIGERScore will evaluate this hypothesis output and list several errors, each consisting of the error location, aspect, explanation and penalty scores (score reduced, starting from 0). The sum of the reduced scores is taken as the overall rating of this output. As a reference-free metric, its correlation can even surpass the best existing reference-based metrics. We believe TIGERScore demonstrates the possibility of building universal explainable metrics to evaluate any text generation task. ## Training Data The models are trained on the 🤗 [MetricInstruct Dataset](https://huggingface.co/datasets/TIGER-Lab/MetricInstruct), which covers 6 text generation tasks and 22 text generation datasets. Check out the dataset card for more details. ## Training Procedure The models are fine-tuned with the MetricInstruct dataset using the original Llama-2 model as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details. ## Evaluation Experiments show that TIGERScore surpasses existing baseline metrics in correlation with human ratings on all 6 held-in tasks and 1 held-out task, achiving the highest overall performance. We hope the emergence of TIGERScore can promote the research in the LLM community as a powerful, interpretable, and easy-to-use metric. ### Kendall Results | Tasks⟶ | Summarization | Translation | Data2Text | Long-form QA | MathQA | Instruction Following | Story-Gen | Average | |----------------------------------------|-----------|-----------|-----------------|-----------|-----------|-----------|-----------|-----------| | | | | GPT-based | Metrics | | | | | | GPT-3.5-turbo (few-shot) | **30.45** | 32.3 | 30.38 | 20.91 | **58.57** | 17.73 | 3.26 | 27.65 | | GPT-4 (zero-shot) | 29.32 | **35.38** | **32.26** | **35.85** | 46.63 | **49.5** | **25.69** | **36.38** | | | | | Reference-based | Metrics | | | | | | BLEU | 8.71 | 14.5 | 23.13 | 7.73 | 17.25 | 35.92 | -0.89 | 15.19 | | ROUGE-2f | 10.67 | 13.19 | 24.74 | 11.73 | 18.07 | 34.59 | 1.78 | 16.4 | | InstructScore | 20.86 | 40.44 | 30.21 | 15.64 | -3.87 | 13.87 | 13.5 | 18.66 | | GPTScore-ref | 10.8 | 18.74 | 27.47 | 22.13 | 14.86 | 25.4 | 12.78 | 18.88 | | BARTScore-cnn (hypo-ref) | 10 | 21.06 | 27.04 | 20.67 | **19.07** | 24.7 | 18.58 | 20.16 | | BARTScore-para (hypo-ref) | 10.41 | 24.9 | 28.42 | 20.24 | 14.1 | 26.13 | 12.11 | 19.47 | | BERTScore | 17.39 | 31.57 | 30.74 | 17.7 | 9.41 | 35.61 | 2 | 20.63 | | BLEURT | 12.69 | 36.12 | **34.48** | 23.11 | 2.88 | 27.94 | 19.18 | 22.34 | | UniEval (summ) | **35.89** | 16.08 | 28.56 | **29.32** | 16.15 | 11.93 | **31.22** | 24.17 | | COMET-22 | 25.01 | **42.79** | 23.43 | 24.66 | -4.52 | **36.17** | 27.52 | **25.01** | | | | | Reference-free |Metrics | | | | | | BARTScore-para (src-hypo) | 29.12 | 7.01 | 22.32 | 18.8 | -2.21 | 4.26 | 14.15 | 13.35 | | BARTScore-cnn (src-hypo) | 26.63 | 9.4 | 23.69 | 28.93 | 1.23 | 19.09 | 23.29 | 18.89 | | Llama-2-13b-chat-0-shot | 25.22 | 11.79 | 23.45 | 15.96 | 1.08 | 19.5 | 21.52 | 16.93 | | COMETKiwi | 11.87 | 36.37 | 19.08 | 12.23 | -9.38 | 26.46 | 12.78 | 15.63 | | GPTScore-src | 28.2 | 6.5 | 19.81 | 27.64 | 11.64 | 20.04 | 16.36 | 18.6 | | TigerScore-7B | 28.79 | 33.65 | 32.44 | 33.93 | 19.98 | 38.13 | 29.72 | 30.95 | | TigerScore-13B | **31.29** | **36.5** | **36.43** | **33.17** | **21.58** | **41.84** | **35.33** | **33.73** | | ∆ (ours - best 
reference-free) | +2 | +0 | +13 | +4 | +10 | +15 | +14 | +15 | | ∆ (ours - best reference-based) | -4 | -6 | +2 | +4 | +2 | +5 | +4 | +8 | ### Pearson Results | Tasks⟶ | Summarization | Translation | Data2Text | Long-form QA | MathQA | Instruction Following | Story-Gen | Average | |-------------------------------|-----------|-----------|-----------------|-----------|-----------|-----------|-----------|-----------| | | | | GPT-based | Metrics | | | | | | GPT-3.5-turbo (few-shot) | **45.53** | **43.77** | **47.76** | 29.84 | **61.26** | 15.36 | 7.8 | 35.9 | | GPT-4 (zero-shot) | 40.75 | 33.92 | 46.83 | **49.3** | 54.98 | **60.45** | **37.74** | **46.28** | | | | | Reference-based | Metrics | | | | | | BLEU | 11.66 | 17.47 | 34.29 | 18.21 | 18.12 | 29.47 | -0.64 | 18.37 | | ROUGE-2f | 16.03 | 16.26 | 35.85 | 19.66 | 20.69 | 33.49 | 2.88 | 20.69 | | InstructScore | 27.4 | 51.55 | 47.28 | 20.59 | 0.36 | 20.98 | 12.81 | 25.85 | | GPTScore-ref | 13.47 | 21.05 | 48.7 | 33.4 | 18.22 | 29.66 | 18.94 | 26.2 | | BARTScore-cnn (hypo-ref) | 16.67 | 23.56 | 45.08 | 32.78 | **23.09** | 26.57 | 27.61 | 27.91 | | BARTScore-para (hypo-ref) | 19.73 | 29.04 | 47.89 | 32.7 | 17.33 | 30.2 | 17.76 | 27.81 | | BERTScore | 26.26 | 37.65 | 48.22 | 26.39 | 11.19 | 45.58 | 4.08 | 28.48 | | BLEURT | 17.27 | 43 | **54.32** | 34.26 | 3.98 | 39.15 | 27.89 | 31.41 | | UniEval (summ) | **53.22** | 23.11 | 51.14 | **36.95** | 17.69 | 30.87 | **44.88** | 36.84 | | COMET-22 | 35.32 | **58.46** | 43.82 | 36.79 | -5.58 | **49.68** | 40.12 | **36.94** | | | | | Reference-free | Metrics | | | | | | BARTScore-para (src-hypo) | 43.11 | 6.96 | 37.82 | 29.86 | -0.41 | 19.37 | 19.99 | 22.38 | | BARTScore-cnn (src-hypo) | 39.72 | 9.53 | 45.43 | 41.48 | 3.28 | 34.97 | 33.51 | 29.7 | | Llama-2-13b-chat-0-shot | 29.59 | 9.09 | 41.32 | 21.67 | 2.8 | 22.71 | 21.13 | 21.19 | | COMETKiwi | 14.22 | **50.91** | 23.63 | 22.59 | -13.35 | 34.46 | 19.12 | 21.65 | | GPTScore-src | 41.71 | 6.82 | 41.19 | 39.79 | 13.99 | 27.59 | 23.22 | 27.76 | | TigerScore-7B | 43.95 | 37.7 | 49.13 | **46.1** | 21.77 | 38.26 | 39.9 | 39.54 | | TigerScore-13B | **44.21** | 41.54 | **52.87** | 44.76 | **24.41** | **47.52** | **47.66** | **43.28** | | ∆ (ours - best reference-free) | +1 | -9 | +7 | +5 | +10 | +20 | +14 | +13 | | ∆ (ours - best reference-based) | -9 | -17 | -2 | +9 | +1 | -2 | +3 | +6 | ### Spearman Results | Tasks⟶ | Summarization | Translation | Data2Text | Long-form QA | MathQA | Instruction Following | Story-Gen | Average | |-------------------------------------------|----------------|----------------|----------------|-----------------|----------------|----------------|----------------|----------------| | | | | GPT-based | Metrics | | | | | | GPT-3.5-turbo (few-shot) | **38.50** | 40.53 | 40.20 | 29.33 | **66.46** | 23.20 | 4.77 | 34.71 | | GPT-4 (zero-shot) | 36.46 | **43.87** | **44.04** | **48.95** | 51.71 | **58.53** | **32.48** | **45.15** | | | | | Reference-based | Metrics | | | | | | BLEU | 11.98 | 19.73 | 33.29 | 11.38 | 21.12 | **46.61** | -1.17 | 20.42 | | ROUGE-2f | 14.53 | 17.83 | 35.49 | 16.83 | 22.12 | 44.56 | 2.34 | 21.96 | | InstructScore | 26.33 | 47.30 | 43.93 | 21.62 | -4.15 | 16.19 | 16.13 | 23.91 | | GPTScore-ref | 14.73 | 24.95 | 39.42 | 31.60 | 18.20 | 33.14 | 18.24 | 25.75 | | BARTScore-cnn(hypo-ref) | 13.64 | 28.53 | 36.12 | 29.57 | **23.35** | 32.49 | 26.64 | 27.19 | | BARTScore-para (hypo-ref) | 17.18 | 33.72 | 40.79 | 28.94 | 17.27 | 34.47 | 17.43 | 27.11 | | BERTScore | 23.67 | 42.41 | 43.75 | 25.60 | 11.53 | 45.77 | 2.88 
| 27.95 | | BLEURT | 17.30 | 48.41 | **48.76** | 33.26 | 3.53 | 36.46 | 27.52 | 30.75 | | UniEval(summ) | **47.52** | 21.90 | 38.38 | **41.83** | 19.78 | 16.02 | **44.46** | 32.84 | | COMET-22 | 33.75 | **56.35** | 33.92 | 35.28 | -5.53 | 46.13 | 39.20 | **34.16** | | | | | Reference-free | Metrics | | | | | | BARTScore-para (src-hypo) | **38.68** | 9.60 | 32.26 | 26.86 | -2.70 | 5.92 | 20.55 | 18.74 | | BARTScore-cnn (src-hypo) | 35.50 | 12.83 | 34.33 | 40.96 | 1.50 | 25.43 | 33.48 | 26.29 | | Llama-2-13b-chat-0-shot | 28.53 | 14.38 | 29.24 | 19.91 | 1.08 | 21.37 | 26.78 | 20.18 | | COMETKiwi | 16.27 | **48.48** | 27.90 | 18.05 | -11.48 | 34.86 | 18.47 | 21.79 | | GPTScore-src | 37.41 | 8.90 | 28.82 | 39.48 | 14.25 | 26.46 | 23.91 | 25.61 | | TIGERScore-7B (ours) | 35.11 | 41.50 | 42.39 | **47.11** | 21.23 | 43.57 | 39.26 | 38.60 | | TIGERScore-13B (ours) | 36.81 | 44.99 | **45.88** | 46.22 | **23.32** | **47.03** | **46.36** | **41.52** | | Δ (ours - best reference-free) | -2 | -3 | +12 | +5 | +9 | +14 | +13 | +16 | | ∆ (ours - best reference-based) | -9 | -11 | -3 | +5 | -0 | +0 | +2 | +7 | ## Usage TIGERScore can be easily loaded in 2 lines of codes, and provides a friendly scoring interface function. To use TIGERScore, first install `tigerscore` with ```bash pip install git+https://github.com/TIGER-AI-Lab/TIGERScore.git ``` Then load the tigerscore model variates according to you needs. ```python # set up scorer from tigerscore import TIGERScorer scorer = TIGERScorer(model_name="TIGER-Lab/TIGERScore-13B") # on GPU # scorer = TIGERScorer(model_name="TIGER-Lab/TIGERScore-13B", quantized=True) # 4 bit quantization on GPU # scorer = TIGERScorer(model_name="TIGER-Lab/TIGERScore-13B", use_vllm=True) # VLLM on GPU, Recommended for faster evaluation (0.2s per instance) # scorer = TIGERScorer(model_name="TIGER-Lab/TIGERScore-13B-GGUF", use_llamacpp=True) # 4 bit quantization on CPU ``` After loading, you can easily get errors of the provided **hypothesis output** given the **instruction** and **input context** ```python # example instruction = "Write an apology letter." input_context = "Reason: You canceled a plan at the last minute due to illness." hypo_output = "Hey [Recipient],\n\nI'm really sorry for ditching our plan. I suddenly got an opportunity for a vacation so I took it. I know this might have messed up your plans and I regret that.\n\nDespite being under the weather, I would rather go for an adventure. I hope you can understand my perspective and I hope this incident doesn't change anything between us.\n\nWe can reschedule our plan for another time. Sorry again for the trouble.\n\nPeace out,\n[Your Name]\n\n---" results = scorer.score([instruction], [hypo_output], [input_context]) print(results) ``` Results are a list of errors with detailed explanations and reasonable penalty scores: ```json [ { "num_errors": 2, "score": -7.0, "errors": { "error_0": { "location": " \"I suddenly got an opportunity for a vacation so I took it.\"", "aspect": " Misunderstanding context", "explanation": " The error lies in the context of the reason for cancelling the plan. The original reason was due to illness, but in the incorrect output, it is stated that the cancellation was due to a vacation opportunity, which is a misunderstanding of the context. 
The correction would be to stick to the original reason for cancelling.", "severity": "Major", "score_reduction": "5.0" }, "error_1": { "location": " \"I hope you can understand my perspective and I hope this incident doesn't change anything between us.\"", "aspect": " Inappropriate tone", "explanation": " The tone of this sentence is too casual and lacks regret or apology. It's important to maintain a formal and regretful tone in an apology letter. The sentence could be corrected to something like \"I hope you can find it in your heart to forgive me and let this incident not strain our relationship.\"", "severity": "Minor", "score_reduction": "2.0" } }, "raw_output": " The model-generated output contains 2 errors, with a total score reduction of 7.0.\nError location 1: ..." } ] ``` Check more usage at our [Github Usage Doc](https://github.com/TIGER-AI-Lab/TIGERScore#usage). Have Fun! ## Citation If you find our work useful, please cite our paper: ``` @article{jiang2023TIGERScore, title={TIGERScore: Towards Building Explainable Metric for All Text Generation Tasks}, author={Dongfu Jiang, Yishan Li, Ge Zhang, Wenhao Huang, Bill Yuchen Lin, Wenhu Chen}, journal={arXiv preprint arXiv:2310.00752}, year={2023} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TIGER-Lab__TIGERScore-13B) | Metric |Value| |---------------------------------|----:| |Avg. |56.79| |AI2 Reasoning Challenge (25-Shot)|59.04| |HellaSwag (10-Shot) |82.79| |MMLU (5-Shot) |55.07| |TruthfulQA (0-shot) |40.38| |Winogrande (5-shot) |74.74| |GSM8k (5-shot) |28.73|
yabichiu/DeepSeek-Coder-V2-Lite-Instruct-GGUF
yabichiu
"2024-06-19T14:16:38Z"
1,449
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-06-19T14:05:58Z"
--- license: apache-2.0 ---
Neko-Institute-of-Science/LLaMA-7B-HF
Neko-Institute-of-Science
"2023-04-15T15:04:28Z"
1,448
22
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-06T00:08:54Z"
--- license: other --- LLaMA converted to the Transformers format. This is under a special license; please see the LICENSE file for details. # LLaMA Model Card https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md # Torrent 7-65B Note: the torrent has outdated tokenizer_config.json and special_tokens_map.json files. Replace them with the ones here. For those who want to save HF's bandwidth, here's a magnet link: **magnet:?xt=urn:btih:8d634925911a03f787d9f68ac075a9b24281573a&dn=Safe-LLaMA-HF-v2%20(4-04-23)&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce**
mansee/swin-tiny-patch4-window7-224-img_orientation
mansee
"2023-09-08T10:10:12Z"
1,448
2
transformers
[ "transformers", "pytorch", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:mansee/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-08-31T06:51:28Z"
--- license: apache-2.0 base_model: mansee/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-img_orientation results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9644592530889907 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-img_orientation This model is a fine-tuned version of [mansee/swin-tiny-patch4-window7-224](https://huggingface.co/mansee/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1069 - Accuracy: 0.9645 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5605 | 1.0 | 506 | 0.3984 | 0.8341 | | 0.3828 | 2.0 | 1013 | 0.1944 | 0.9271 | | 0.3092 | 3.0 | 1519 | 0.1862 | 0.9339 | | 0.3234 | 4.0 | 2026 | 0.1415 | 0.9510 | | 0.2471 | 5.0 | 2532 | 0.1355 | 0.9517 | | 0.251 | 6.0 | 3039 | 0.1170 | 0.9606 | | 0.2276 | 7.0 | 3545 | 0.1136 | 0.9627 | | 0.2182 | 8.0 | 4052 | 0.1121 | 0.9628 | | 0.1386 | 9.0 | 4558 | 0.1116 | 0.9632 | | 0.1466 | 9.99 | 5060 | 0.1069 | 0.9645 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
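A minimal inference sketch for the fine-tuned orientation classifier; the image path is a placeholder.

```python
from transformers import pipeline

# Classify the orientation of an input image with the fine-tuned Swin model.
classifier = pipeline(
    "image-classification",
    model="mansee/swin-tiny-patch4-window7-224-img_orientation",
)
print(classifier("example.jpg"))  # local path or URL to an image
```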
h2oai/llama2-0b-unit-test
h2oai
"2024-02-28T09:02:47Z"
1,448
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-28T09:01:58Z"
--- {} --- Small dummy LLama2-type Model useable for Unit/Integration tests. Suitable for CPU only machines, see [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio/blob/main/tests/integration/test_integration.py) for an example integration test. Model was created as follows: ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM repo_name = "h2oai/llama2-0b-unit-test" model_name = "h2oai/h2ogpt-4096-llama2-7b-chat" config = AutoConfig.from_pretrained(model_name) config.hidden_size = 12 config.max_position_embeddings = 1024 config.intermediate_size = 24 config.num_attention_heads = 2 config.num_hidden_layers = 2 config.num_key_value_heads = 2 tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_config(config) print(model.num_parameters()) # 770_940 model.push_to_hub(repo_name, private=False) tokenizer.push_to_hub(repo_name, private=False) config.push_to_hub(repo_name, private=False) ``` Use the following configuration in [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to run a complete experiment in **5 seconds** using the default dataset and default settings otherwise: ```yaml Validation Size: 0.1 Data Sample: 0.1 Max Length Prompt: 32 Max Length Answer: 32 Max Length: 64 Backbone Dtype: float16 Gradient Checkpointing: False Batch Size: 8 Max Length Inference: 16 ```
Helsinki-NLP/opus-mt-it-fr
Helsinki-NLP
"2023-08-16T11:58:53Z"
1,447
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-it-fr * source languages: it * target languages: fr * OPUS readme: [it-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-fr/opus-2020-01-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.it.fr | 67.9 | 0.792 |
hfl/chinese-alpaca-2-7b-rlhf-gguf
hfl
"2024-01-24T02:59:29Z"
1,446
5
null
[ "gguf", "zh", "en", "license:apache-2.0", "region:us" ]
null
"2023-12-25T07:20:00Z"
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-7B-RLHF-GGUF This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-Alpaca-2-7B-RLHF**, which is tuned on Chinese-Alpaca-2-7B with RLHF using DeepSpeed-Chat. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 10.5211 +/- 0.14139 | 11.9331 +/- 0.16168 | | Q3_K | 8.9748 +/- 0.12043 | 8.8238 +/- 0.11850 | | Q4_0 | 8.7843 +/- 0.11854 | - | | Q4_K | 8.4643 +/- 0.11341 | 8.4226 +/- 0.11302 | | Q5_0 | 8.4563 +/- 0.11353 | - | | Q5_K | 8.3722 +/- 0.11236 | 8.3336 +/- 0.11192 | | Q6_K | 8.3207 +/- 0.11184 | 8.3047 +/- 0.11159 | | Q8_0 | 8.3100 +/- 0.11173 | - | | F16 | 8.3112 +/- 0.11173 | - | *Models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the full model in HuggingFace format, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-rlhf Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF
mradermacher
"2024-05-29T00:59:14Z"
1,446
3
transformers
[ "transformers", "gguf", "dpo", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "base_model:mlabonne/Daredevil-8B-abliterated-dpomix", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-05-28T04:29:41Z"
--- base_model: mlabonne/Daredevil-8B-abliterated-dpomix datasets: - mlabonne/orpo-dpo-mix-40k language: - en library_name: transformers license: other quantized_by: mradermacher tags: - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/mlabonne/Daredevil-8B-abliterated-dpomix <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Daredevil-8B-abliterated-dpomix-i1-GGUF/resolve/main/Daredevil-8B-abliterated-dpomix.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
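If a quant ever ships as multiple parts, the parts just need to be concatenated back into one file before use. A minimal Python sketch, assuming the usual `partXofY` naming; the filename here is only an example, and the quants in this repo may well be single-part already.

```python
import glob
import shutil

# Join split GGUF parts (e.g. *.part1of2, *.part2of2) into a single file.
parts = sorted(glob.glob("Daredevil-8B-abliterated-dpomix.i1-Q6_K.gguf.part*"))
with open("Daredevil-8B-abliterated-dpomix.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```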
DeepPavlov/bert-base-cased-conversational
DeepPavlov
"2021-11-08T13:07:31Z"
1,445
9
transformers
[ "transformers", "pytorch", "jax", "bert", "feature-extraction", "en", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2022-03-02T23:29:04Z"
--- language: en --- # bert-base-cased-conversational Conversational BERT (English, cased, 12-layer, 768-hidden, 12-heads, 110M parameters) was trained on the English part of Twitter, Reddit, DailyDialogues [1], OpenSubtitles [2], Debates [3], Blogs [4], and Facebook News Comments. We used this training data to build the vocabulary of English subtokens and took the English cased version of BERT-base as the initialization for English Conversational BERT. 08.11.2021: uploaded the model with MLM and NSP heads. [1]: Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. IJCNLP 2017. [2]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). [3]: Justine Zhang, Ravi Kumar, Sujith Ravi, Cristian Danescu-Niculescu-Mizil. Proceedings of NAACL, 2016. [4]: J. Schler, M. Koppel, S. Argamon and J. Pennebaker (2006). Effects of Age and Gender on Blogging. In Proceedings of 2006 AAAI Spring Symposium on Computational Approaches for Analyzing Weblogs.
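A minimal sketch of pulling contextual features from the model; the example utterance is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/bert-base-cased-conversational")
model = AutoModel.from_pretrained("DeepPavlov/bert-base-cased-conversational")

inputs = tokenizer("hey, how's it going?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] token vector
print(cls_embedding.shape)
```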
heegyu/kogpt-j-350m
heegyu
"2023-03-05T08:25:08Z"
1,445
6
transformers
[ "transformers", "pytorch", "jax", "gptj", "text-generation", "ko", "dataset:heegyu/korean-petitions", "dataset:heegyu/namuwiki-extracted", "dataset:heegyu/kowikitext", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-12-28T12:47:33Z"
--- license: mit widget: - text: 오늘 아침 정부는 발표를 통해 - text: | 아 배고프다 datasets: - heegyu/korean-petitions - heegyu/namuwiki-extracted - heegyu/kowikitext language: - ko pipeline_tag: text-generation --- ## 모델 구성 - GPT-J(Flax, Pytorch) - 20 Layers, 1024 hidden dim, 4096 intermediate, 16 heads, 51200 vocab size - 1024 max_seq_len - 파라미터 수: 350M ### 성능 벤치마크 <img src="https://github.com/HeegyuKim/language-model/blob/63d8bd7cd39f25e87e0e376cdd18df3f8b460dee/image/benchmark0304.png?raw=true" /> ## 학습 환경 및 하이퍼파라미터 - TPU V2-8 - Learning Rate: 3e-4, Batch Size: 512(=64 accum x 8 devices), Scheduler: Linear, WarmUp: 1000 step - adam_beta1=0.9 adam_beta2=0.98, weight_decay=0.01 - Training Steps: 43247 (3 epoch) - 학습 토큰 수: 21.11B (43247 * 512 * 1024seq / 1024^3) - 학습 기간: 2023/1/25 ~ 2023/1/29 ## 학습에 사용한 데이터 - AIHub SNS 대화(730MB) - AIHub 구어체(422MB) - AIHub 도서(1.6MB) - AIHub 대규모 웹데이터 기반 한국어 말뭉치(12GB) - 한국어 위키(867MB) - 나무위키(6.4GB) - 국립국어원 메신저 대화(21MB) - 국립국어원 일상대화 말뭉치(23MB) - 국립국어원 문어 말뭉치(3.2GB) - 국립국어원 구어 말뭉치(1.1GB) - 국립국어원 신문 말뭉치(~2022, 17GB) - 청와대 국민청원(525MB) 데이터셋 크기는 전처리한 jsonl파일을 기준으로 함. 총 토큰 수는 약 7B임 ## 사용 예시 ```python from transformers import pipeline model_name = "heegyu/kogpt-j-350m" pipe = pipeline('text-generation', model=model_name) print(pipe("안녕하세요", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128)) print(pipe("오늘 정부 발표에 따르면, ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128)) print(pipe("싸늘하다. 가슴에 비수가 날아와 꽂힌다. ", repetition_penalty=1.2, do_sample=True, eos_token_id=1, early_stopping=True, max_new_tokens=128, min_length=64)) ``` 결과 ```bash [{'generated_text': '안녕하세요?\n네.\n자~ 오늘 그~ 뭐~ 남북정상회담에서 인제 남북 관계와 관련된 발언이죠?\n예. 그렇습니다.\n어~ 그~ 이산가족 문제 관련해서 이산가족 상봉을\n예.\n하는 방안이 좀 가능성이 있지 않아요?\n상당히 가능성이 있죠.\n예. 이~ 구체적으로 어떤 거였나요?\n어~ 먼저 이산가족 상봉을 이제 말씀드리겠습니다.\n예.\n아까 설명드린 것처럼 그~ 이산가족 상\n네.\n그~ 상봉에 대한 그~ 구체적인 방안이 어떻게 결정되는 게 가장 좋을까요?\n우선 상봉 방법부터 얘기를 드리죠.\n'}] [{'generated_text': '오늘 정부 발표에 따르면, gtx-d d 노선을 창릉과 수서에서 출발하는 등 당초 예정된 노선들을 모두 정차하기로 했다. 지난 2월 국토교통부가 이 노선을 일산·금정·파주 운정역과 직접 연결키로 하면서 일산~동탄, 일산~분당, 일산~양재 구간에 추가 정차할 것이라는 예상이 나왔지만 실제 일산~수서 구간이 정차하기로 확정됐다. gtx-d 노선이 일산~수서역까지 개통되는 것은 이번이 처음이다.. gtx-d 노선과 gtx-a 노선이 모두 개통되면 지하철 5호선의 서울 도심 통과 구간이 추가된다. 현재 gtx-b'}] [{'generated_text': '싸늘하다. 가슴에 비수가 날아와 꽂힌다. \U000f0854삼국사절요\U000f0855 ‘화살촉이 울버린’의 경우에서 보면, 총소리의 원음은 鐘(종자용 : 송악), 鐘을 비(鐘)라 하고 종자의 발음은 ‘이( )’이다. 이때에서 ‘이(은)로 시작하는 발음’은 ‘이/이’의 음운적 표현이다. ‘이/은→종자용[鐘] → 송악/종자[鐘]→이→종자(鐘) …’이다. 이는 한자어로서 그 발음'}] ``` ## 주의사항 이 모델의 학습 데이터는 각종 차별/혐오 데이터가 포함됐을 수 있으며, 별도의 제거작업을 진행하지 않았습니다. 따라서 모델이 생성하는 문장에 특정 인물이나 인종, 성별, 장애에 따른 차별/혐오발언을 생성할 수 있습니다.
CobraMamba/mamba-gpt-7b-v2
CobraMamba
"2023-11-21T02:32:37Z"
1,445
4
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "gpt", "llm", "large language model", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-14T14:42:12Z"
--- language: - en library_name: transformers tags: - gpt - llm - large language model inference: false thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico license: apache-2.0 --- # Model Card ## Summary We have fine-tuned the OpenLLaMA model and surpassed the original model in multiple evaluation subtasks, making it currently one of the best-performing 3B models, with comparable performance to llama-7b. - Base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) ## Usage To use the model with the `transformers` library on a machine with GPU(s), first make sure you have the `transformers`, `accelerate` and `torch` libraries installed. Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer. Then, run the following Python snippet: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("CobraMamba/mamba-gpt-7b-v2") model = AutoModelForCausalLM.from_pretrained("CobraMamba/mamba-gpt-7b-v2", trust_remote_code=True, torch_dtype=torch.float16) input_content = "Your text here" input_ids = tokenizer.encode(input_content, return_tensors="pt") output = model.generate(input_ids, max_length=128, temperature=0.7) output_text = tokenizer.decode(output[0], skip_special_tokens=True) print(output_text) ``` ## Citation If this work is helpful, please kindly cite as: ```bibtex @Misc{mamba-gpt-7b-v2, title = {Mamba-GPT-7b-v2}, author = {chiliu}, howpublished = {\url{https://huggingface.co/CobraMamba/mamba-gpt-7b-v2}}, year = {2023} } ``` ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_CobraMamba__mamba-gpt-7b-v2) | Metric | Value | |-----------------------|---------------------------| | Avg. | 54.85 | | ARC (25-shot) | 61.95 | | HellaSwag (10-shot) | 83.83 | | MMLU (5-shot) | 61.74 | | TruthfulQA (0-shot) | 46.63 | | Winogrande (5-shot) | 78.45 | | GSM8K (5-shot) | 17.29 | | DROP (3-shot) | 34.07 |
Sao10K/Fimbulvetr-10.7B-v1-GGUF
Sao10K
"2024-01-10T16:34:48Z"
1,445
13
null
[ "gguf", "region:us" ]
null
"2024-01-10T16:28:12Z"
Entry not found
Eric111/Yarn-Mistral-7b-128k-DPO
Eric111
"2024-02-23T23:00:58Z"
1,445
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "custom_code", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-23T22:54:08Z"
--- library_name: transformers license: apache-2.0 tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details DPO fine-tuned version of NousResearch/Yarn-Mistral-7b-128k with Intel/orca_dpo_pairs ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
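Since the "How to Get Started with the Model" section above is empty, here is a minimal, untested loading sketch for Eric111/Yarn-Mistral-7b-128k-DPO with `transformers`. The `trust_remote_code=True` flag is assumed to be required because the repository is tagged `custom_code` (like its Yarn-Mistral base), and the prompt, dtype, and generation settings are placeholders, not recommendations from the authors.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Eric111/Yarn-Mistral-7b-128k-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: bf16 to fit the 7B weights on a single GPU
    device_map="auto",
    trust_remote_code=True,       # assumption: needed for the Yarn (custom_code) rope scaling
)

inputs = tokenizer("Direct Preference Optimization is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```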
markhneedham/dolphin-2.9-llama3-8b-Q2_K-GGUF
markhneedham
"2024-06-28T21:17:17Z"
1,445
0
null
[ "gguf", "generated_from_trainer", "axolotl", "llama-cpp", "gguf-my-repo", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "license:other", "region:us" ]
null
"2024-06-28T21:12:16Z"
--- base_model: cognitivecomputations/dolphin-2.9-llama3-8b datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - HuggingFaceH4/ultrachat_200k - microsoft/orca-math-word-problems-200k - abacusai/SystemChat-1.1 - Locutusque/function-calling-chatml - internlm/Agent-FLAN license: other tags: - generated_from_trainer - axolotl - llama-cpp - gguf-my-repo model-index: - name: out results: [] --- # markhneedham/dolphin-2.9-llama3-8b-Q2_K-GGUF This model was converted to GGUF format from [`cognitivecomputations/dolphin-2.9-llama3-8b`](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo markhneedham/dolphin-2.9-llama3-8b-Q2_K-GGUF --hf-file dolphin-2.9-llama3-8b-q2_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo markhneedham/dolphin-2.9-llama3-8b-Q2_K-GGUF --hf-file dolphin-2.9-llama3-8b-q2_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo markhneedham/dolphin-2.9-llama3-8b-Q2_K-GGUF --hf-file dolphin-2.9-llama3-8b-q2_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo markhneedham/dolphin-2.9-llama3-8b-Q2_K-GGUF --hf-file dolphin-2.9-llama3-8b-q2_k.gguf -c 2048 ```
ToastyPigeon/SmolLlama-1.5B-Sorted
ToastyPigeon
"2024-03-19T23:21:42Z"
1,444
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-19T22:54:26Z"
--- base_model: [] tags: - mergekit - merge license: apache-2.0 --- # SmolLlama-1.5B-Sorted Bigger than "Tiny" but still very smol. This is a self-stack merge of TinyLlama 1.1B using a sorted-layer arrangement, resulting in 32 model layers and 1.54B model parameters. In comparison to [SmolLlama-1.5B](https://huggingface.co/ToastyPigeon/SmolLlama-1.5B), the Sorted version has the repeated middle layers placed in ascending order (see merge config). This is a proof-of-concept model and should not be used for anything. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: #non-repeating layers - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [0, 6] - sources: #begin repeating layers - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [6, 7] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [6, 7] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [7, 8] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [7, 8] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [8, 9] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [8, 9] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [9, 10] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [9, 10] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [10, 11] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [10, 11] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [11, 12] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [11, 12] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [12, 13] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [12, 13] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [13, 14] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [13, 14] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [14, 15] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [14, 15] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [15, 16] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [15, 16] - sources: #non-repeating layers - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [16, 22] merge_method: passthrough dtype: float16 ```
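To reproduce a passthrough merge like this from the YAML above, a minimal sketch with the mergekit CLI is shown below; the config file name and output directory are placeholders, and exact flags may differ across mergekit versions.

```bash
pip install mergekit

# Save the YAML from this card as config.yml, then run the passthrough merge.
mergekit-yaml config.yml ./SmolLlama-1.5B-Sorted
```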
miracle085/model-6
miracle085
"2024-07-01T20:25:15Z"
1,443
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-18T15:15:25Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Quant-Cartel/magnum-72b-v1-iMat-GGUF
Quant-Cartel
"2024-06-19T05:18:52Z"
1,443
1
null
[ "gguf", "chat", "qwen", "opus", "license:other", "region:us" ]
null
"2024-06-18T05:36:58Z"
--- license: other license_name: tongyi-qianwen license_link: LICENSE tags: - chat - qwen - opus --- ``` e88 88e d8 d888 888b 8888 8888 ,"Y88b 888 8e d88 C8888 8888D 8888 8888 "8" 888 888 88b d88888 Y888 888P Y888 888P ,ee 888 888 888 888 "88 88" "88 88" "88 888 888 888 888 b 8b, e88'Y88 d8 888 d888 'Y ,"Y88b 888,8, d88 ,e e, 888 C8888 "8" 888 888 " d88888 d88 88b 888 Y888 ,d ,ee 888 888 888 888 , 888 "88,d88 "88 888 888 888 "YeeP" 888 PROUDLY PRESENTS ``` ## magnum-72b-v1-iMat-GGUF Quantized from fp16 with love. * Weighted quantizations were created using fp16 GGUF and [groups_merged.txt](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) in 92 chunks and n_ctx=512 For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747) <b>All quants are verified working prior to uploading to repo for your safety and convenience. </b> Original model card [here](https://huggingface.co/alpindale/magnum-72b-v1)
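For reference, a rough sketch of the imatrix workflow described above (fp16 GGUF plus groups_merged.txt, 92 chunks, n_ctx=512) using llama.cpp is shown below. The fp16 file name is a placeholder, the flags reflect the llama.cpp tools as of mid-2024, and newer builds rename the binaries to `llama-imatrix` / `llama-quantize`.

```bash
# Compute the importance matrix from the calibration text (placeholder fp16 file name).
./imatrix -m magnum-72b-v1-f16.gguf -f groups_merged.txt -o imatrix.dat -c 512 --chunks 92

# Use the importance matrix when producing a weighted quant, e.g. IQ4_XS.
./quantize --imatrix imatrix.dat magnum-72b-v1-f16.gguf magnum-72b-v1-IQ4_XS.gguf IQ4_XS
```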
kyledam/gai-vietnam
kyledam
"2023-04-30T11:14:59Z"
1,442
0
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-04-17T04:17:16Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### gai_vietnam Dreambooth model trained by kyledam with TheLastBen's fast-DreamBooth notebook
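A minimal, untested diffusers sketch for generating images with this checkpoint is shown below; the trigger word `gai_vietnam` is assumed from the card title, so check the repository for the exact DreamBooth instance token, and the prompt is only an illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("kyledam/gai-vietnam", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "gai_vietnam" is assumed to be the DreamBooth instance token.
image = pipe("portrait photo of gai_vietnam, detailed face, natural light").images[0]
image.save("gai_vietnam.png")
```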
Locutusque/gpt2-conversational-retrain
Locutusque
"2023-11-19T02:58:44Z"
1,442
2
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "dataset:Locutusque/InstructMix", "arxiv:1910.09700", "doi:10.57967/hf/1167", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-24T06:06:19Z"
--- license: mit datasets: - Locutusque/InstructMix language: - en metrics: - bleu - perplexity pipeline_tag: text-generation widget: - text: >- <|USER|> Design a Neo4j database and Cypher function snippet to Display Extreme Dental hygiene: Using Mouthwash for Analysis for Beginners. Implement if/else or switch/case statements to handle different conditions related to the Consent. Provide detailed comments explaining your control flow and the reasoning behind each decision. <|ASSISTANT|> - text: >- <|USER|> Write me a story about a magical place. <|ASSISTANT|> - text: >- <|USER|> Write me an essay about the life of George Washington <|ASSISTANT|> - text: >- <|USER|> Solve the following equation 2x + 10 = 20 <|ASSISTANT|> - text: >- <|USER|> Craft me a list of some nice places to visit around the world. <|ASSISTANT|> inference: parameters: temperature: 0.8 do_sample: True top_p: 0.14 top_k: 41 max_new_tokens: 250 repetition_penalty: 1.176 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This is a fine-tuned version of gpt2 on Locutusque/InstructMix. ## Model Details This model performs significantly better than Locutusque/gpt2-conversational-or-qa. Here are the training results: - BLEU - 26 - Perplexity - 12 ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Locutusque - **Shared by [optional]:** [More Information Needed] - **Model type:** GPT-2 - **Language(s) (NLP):** English - **License:** [More Information Needed] - **Finetuned from model [optional]:** GPT-2 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This model is designed to follow instructions, or partake in conversations. ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> Instruction-following or conversational. ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model.
```python import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained('gpt2-conversational-retrain') model = GPT2LMHeadModel.from_pretrained('gpt2-conversational-retrain') device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) def generate_text(model, tokenizer, prompt, max_length=1024): prompt = f'<|USER|> {prompt} <|ASSISTANT|> ' input_ids = tokenizer.encode(prompt, add_special_tokens=True, return_tensors="pt").to(device) attention_mask = torch.ones_like(input_ids).to(device) output = model.generate(input_ids, max_length=max_length, do_sample=True, temperature=0.3, top_k=23, top_p=0.7, repetition_penalty=1.176, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, attention_mask=attention_mask) output_ids = tokenizer.decode(output[0], skip_special_tokens=False) return output_ids # Loop to interact with the model while True: prompt = input("Enter a prompt (or 'q' to quit): ") if prompt == "q": break output_text = generate_text(model, tokenizer, prompt) print(output_text) ``` ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> https://huggingface.co/datasets/Locutusque/InstructMix This model has so far been trained on 10% of the linked data, with more training sessions to come. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** fp16 non-mixed precision <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
QuantFactory/llama-3-chinese-8b-instruct-v2-GGUF
QuantFactory
"2024-06-04T09:12:22Z"
1,442
0
transformers
[ "transformers", "gguf", "llama", "conversational", "text-generation", "en", "zh", "base_model:hfl/llama-3-chinese-8b-instruct-v2", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-02T12:29:27Z"
--- library_name: transformers base_model: hfl/llama-3-chinese-8b-instruct-v2 language: - en - zh pipeline_tag: text-generation tags: - llama - conversational --- # QuantFactory/llama-3-chinese-8b-instruct-v2-GGUF This is a quantized version of [hfl/llama-3-chinese-8b-instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2), created using llama.cpp. # Model Description This repository contains Llama-3-Chinese-8B-Instruct-v2, which is tuned directly on Meta-Llama-3-8B-Instruct with 5M instruction examples. Note: this is an instruction (chat) model, which can be used for conversation, QA, etc. For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
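As a minimal sketch (not from the original card), a quant from this repo can be run directly with llama.cpp; the `--hf-file` name below is an assumption, so check the repository's file list for the exact quant you want.

```bash
llama-cli --hf-repo QuantFactory/llama-3-chinese-8b-instruct-v2-GGUF \
          --hf-file llama-3-chinese-8b-instruct-v2.Q4_K_M.gguf \
          -p "你好,请介绍一下你自己。" -n 256
```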
CLTL/MedRoBERTa.nl
CLTL
"2022-12-20T15:05:31Z"
1,441
9
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "nl", "doi:10.57967/hf/0960", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
--- language: nl license: mit --- # MedRoBERTa.nl ## Description This model is a RoBERTa-based model pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. The model is not fine-tuned. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model. ## Intended use The model can be fine-tuned on any type of task. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. ## Data The model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure. ## Privacy By anonymizing the training data we made sure the model did not learn any representative associations linked to names. Apart from the training data, the model's vocabulary was also anonymized. This ensures that the model can not predict any names in the generative fill-mask task. ## Authors Stella Verkijk, Piek Vossen ## Reference Paper: Verkijk, S. & Vossen, P. (2022) MedRoBERTa.nl: A Language Model for Dutch Electronic Health Records. Computational Linguistics in the Netherlands Journal, 11.
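A minimal fill-mask sketch with the Transformers pipeline is shown below; the Dutch example sentence is only an illustration and does not come from the original card.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="CLTL/MedRoBERTa.nl")

# RoBERTa-style models use <mask> as the mask token.
print(fill_mask("De patiënt werd behandeld met <mask>."))
```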
alirezamsh/small100
alirezamsh
"2023-10-09T08:57:33Z"
1,441
44
transformers
[ "transformers", "pytorch", "onnx", "safetensors", "m2m_100", "text2text-generation", "small100", "translation", "flores101", "gsarti/flores_101", "tico19", "gmnlp/tico19", "tatoeba", "multilingual", "af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "dataset:tico19", "dataset:flores101", "dataset:tatoeba", "arxiv:2210.11621", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-11-01T17:58:07Z"
--- language: - multilingual - af - am - ar - ast - az - ba - be - bg - bn - br - bs - ca - ceb - cs - cy - da - de - el - en - es - et - fa - ff - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - ht - hu - hy - id - ig - ilo - is - it - ja - jv - ka - kk - km - kn - ko - lb - lg - ln - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - 'no' - ns - oc - or - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - so - sq - sr - ss - su - sv - sw - ta - th - tl - tn - tr - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu license: mit tags: - small100 - translation - flores101 - gsarti/flores_101 - tico19 - gmnlp/tico19 - tatoeba datasets: - tico19 - flores101 - tatoeba --- # SMALL-100 Model SMaLL-100 is a compact and fast massively multilingual machine translation model covering more than 10K language pairs, that achieves competitive results with M2M-100 while being much smaller and faster. It is introduced in [this paper](https://arxiv.org/abs/2210.11621)(accepted to EMNLP2022), and initially released in [this repository](https://github.com/alirezamshi/small100). The model architecture and config are the same as [M2M-100](https://huggingface.co/facebook/m2m100_418M/tree/main) implementation, but the tokenizer is modified to adjust language codes. So, you should load the tokenizer locally from [tokenization_small100.py](https://huggingface.co/alirezamsh/small100/blob/main/tokenization_small100.py) file for the moment. **Demo**: https://huggingface.co/spaces/alirezamsh/small100 **Note**: SMALL100Tokenizer requires sentencepiece, so make sure to install it by: ```pip install sentencepiece``` - **Supervised Training** SMaLL-100 is a seq-to-seq model for the translation task. The input to the model is ```source:[tgt_lang_code] + src_tokens + [EOS]``` and ```target: tgt_tokens + [EOS]```. An example of supervised training is shown below: ``` from transformers import M2M100ForConditionalGeneration from tokenization_small100 import SMALL100Tokenizer model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100") tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="fr") src_text = "Life is like a box of chocolates." tgt_text = "La vie est comme une boîte de chocolat." model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt") loss = model(**model_inputs).loss # forward pass ``` Training data can be provided upon request. - **Generation** Beam size of 5, and maximum target length of 256 is used for the generation. ``` from transformers import M2M100ForConditionalGeneration from tokenization_small100 import SMALL100Tokenizer hi_text = "जीवन एक चॉकलेट बॉक्स की तरह है।" chinese_text = "生活就像一盒巧克力。" model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100") tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100") # translate Hindi to French tokenizer.tgt_lang = "fr" encoded_hi = tokenizer(hi_text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "La vie est comme une boîte de chocolat." # translate Chinese to English tokenizer.tgt_lang = "en" encoded_zh = tokenizer(chinese_text, return_tensors="pt") generated_tokens = model.generate(**encoded_zh) tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) # => "Life is like a box of chocolate." ``` - **Evaluation** Please refer to [original repository](https://github.com/alirezamshi/small100) for spBLEU computation. 
- **Languages Covered** Afrikaans (af), Amharic (am), Arabic (ar), Asturian (ast), Azerbaijani (az), Bashkir (ba), Belarusian (be), Bulgarian (bg), Bengali (bn), Breton (br), Bosnian (bs), Catalan; Valencian (ca), Cebuano (ceb), Czech (cs), Welsh (cy), Danish (da), German (de), Greeek (el), English (en), Spanish (es), Estonian (et), Persian (fa), Fulah (ff), Finnish (fi), French (fr), Western Frisian (fy), Irish (ga), Gaelic; Scottish Gaelic (gd), Galician (gl), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Croatian (hr), Haitian; Haitian Creole (ht), Hungarian (hu), Armenian (hy), Indonesian (id), Igbo (ig), Iloko (ilo), Icelandic (is), Italian (it), Japanese (ja), Javanese (jv), Georgian (ka), Kazakh (kk), Central Khmer (km), Kannada (kn), Korean (ko), Luxembourgish; Letzeburgesch (lb), Ganda (lg), Lingala (ln), Lao (lo), Lithuanian (lt), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Mongolian (mn), Marathi (mr), Malay (ms), Burmese (my), Nepali (ne), Dutch; Flemish (nl), Norwegian (no), Northern Sotho (ns), Occitan (post 1500) (oc), Oriya (or), Panjabi; Punjabi (pa), Polish (pl), Pushto; Pashto (ps), Portuguese (pt), Romanian; Moldavian; Moldovan (ro), Russian (ru), Sindhi (sd), Sinhala; Sinhalese (si), Slovak (sk), Slovenian (sl), Somali (so), Albanian (sq), Serbian (sr), Swati (ss), Sundanese (su), Swedish (sv), Swahili (sw), Tamil (ta), Thai (th), Tagalog (tl), Tswana (tn), Turkish (tr), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Wolof (wo), Xhosa (xh), Yiddish (yi), Yoruba (yo), Chinese (zh), Zulu (zu) # Citation If you use this model for your research, please cite the following work: ``` @inproceedings{mohammadshahi-etal-2022-small, title = "{SM}a{LL}-100: Introducing Shallow Multilingual Machine Translation Model for Low-Resource Languages", author = "Mohammadshahi, Alireza and Nikoulina, Vassilina and Berard, Alexandre and Brun, Caroline and Henderson, James and Besacier, Laurent", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.emnlp-main.571", pages = "8348--8359", abstract = "In recent years, multilingual machine translation models have achieved promising performance on low-resource language pairs by sharing information between similar languages, thus enabling zero-shot translation. To overcome the {``}curse of multilinguality{''}, these models often opt for scaling up the number of parameters, which makes their use in resource-constrained environments challenging. We introduce SMaLL-100, a distilled version of the M2M-100(12B) model, a massively multilingual machine translation model covering 100 languages. We train SMaLL-100 with uniform sampling across all language pairs and therefore focus on preserving the performance of low-resource languages. We evaluate SMaLL-100 on different low-resource benchmarks: FLORES-101, Tatoeba, and TICO-19 and demonstrate that it outperforms previous massively multilingual models of comparable sizes (200-600M) while improving inference latency and memory usage. 
Additionally, our model achieves comparable results to M2M-100 (1.2B), while being 3.6x smaller and 4.3x faster at inference.", } @inproceedings{mohammadshahi-etal-2022-compressed, title = "What Do Compressed Multilingual Machine Translation Models Forget?", author = "Mohammadshahi, Alireza and Nikoulina, Vassilina and Berard, Alexandre and Brun, Caroline and Henderson, James and Besacier, Laurent", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-emnlp.317", pages = "4308--4329", abstract = "Recently, very large pre-trained models achieve state-of-the-art results in various natural language processing (NLP) tasks, but their size makes it more challenging to apply them in resource-constrained environments. Compression techniques allow to drastically reduce the size of the models and therefore their inference time with negligible impact on top-tier metrics. However, the general performance averaged across multiple tasks and/or languages may hide a drastic performance drop on under-represented features, which could result in the amplification of biases encoded by the models. In this work, we assess the impact of compression methods on Multilingual Neural Machine Translation models (MNMT) for various language groups, gender, and semantic biases by extensive analysis of compressed models on different machine translation benchmarks, i.e. FLORES-101, MT-Gender, and DiBiMT. We show that the performance of under-represented languages drops significantly, while the average BLEU metric only slightly decreases. Interestingly, the removal of noisy memorization with compression leads to a significant improvement for some medium-resource languages. Finally, we demonstrate that compression amplifies intrinsic gender and semantic biases, even in high-resource languages.", } ```
Yntec/Infinite80s
Yntec
"2023-12-28T07:56:10Z"
1,441
3
diffusers
[ "diffusers", "safetensors", "realistic", "cinema", "movies", "AInfinity", "Lykon", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-08T16:24:51Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - realistic - cinema - movies - AInfinity - Lykon --- Update: The model that diffusers were using has been renamed to Infinite80sAlpha to relaunch this model. # Infinite 80s The 80s never ended. Now with a better base model. ![Infinite 80s Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/BCTVxfZf8eWNwsSYNpVc3.png) (Click for larger) AI-infinity model by AInfinity and LiberteRedmond by artificialguybr with the 80s Movie style LoRA by Lykon. Sample and prompt: ![Infinite 80s Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/zP0_Hv2Mxb9IrO1cXkJlm.png) Portrait of a happy family cooking at the classroom, little girl painting by technicolor, smooth face, perfect eyes, wide angle, sharp focus, 8 k high definition, insanely detailed, intricate, elegant, acrylic art on canvas by rossdraws and clay mann Original pages: https://civitai.com/models/121253/ai-infinity-realistic-better-hands https://civitai.com/models/26873/80s-movie-style-lora https://civitai.com/models/94123?modelVersionId=100409 (LiberteRedmond) # Liberte Infinity A merge of AI-infinity and LiberteRedmond to be used as base model for Infinite80s. Also check https://huggingface.co/Yntec/InfiniteLiberty ![Liberte Infinity Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/tyvmTTmtLya2yOnOJ5xfI.png) # Recipe: - SuperMerger Weight Sum Train Difference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1 Model A: LiberteRedmond Model B: AIInfinityRealistic Output: LiberteInfinity
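A minimal diffusers sketch using the sample prompt from this card is shown below; the scheduler, step count, and guidance scale are placeholders, not the settings used for the sample images.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Infinite80s", torch_dtype=torch.float16).to("cuda")

prompt = ("Portrait of a happy family cooking at the classroom, little girl painting by technicolor, "
          "smooth face, perfect eyes, wide angle, sharp focus, 8 k high definition")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("infinite80s.png")
```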
hvein/5GuRHnpqrnmmLWf12iRKtDnK5jSA24pduofWE7i7jcEKE2vq_vgg
hvein
"2024-03-09T20:40:33Z"
1,441
0
keras
[ "keras", "region:us" ]
null
"2024-02-13T14:54:06Z"
Entry not found
MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
MediaTek-Research
"2024-05-12T02:00:43Z"
1,441
3
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "zh", "en", "arxiv:2403.02712", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-13T23:32:34Z"
--- pipeline_tag: text-generation license: apache-2.0 language: - zh - en --- # Model Card for MediaTek Research Breeze-7B-32k-Instruct-v1_0 MediaTek Research Breeze-7B (hereinafter referred to as Breeze-7B) is a language model family that builds on top of [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1), specifically intended for Traditional Chinese use. [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) is the base model for the Breeze-7B series. It is suitable for use if you have substantial fine-tuning data to tune it for your specific use case. [Breeze-7B-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) derives from the base model Breeze-7B-Base, making the resulting model amenable to be used as-is for commonly seen tasks. [Breeze-7B-32k-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0) is extended from the base model with more data, a base change, and the disabling of the sliding window. Roughly speaking, the 32k-token context is equivalent to 44k Traditional Chinese characters. [Breeze-7B-32k-Instruct](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Instruct-v1_0) derives from the base model Breeze-7B-32k-Base, making the resulting model amenable to be used as-is for commonly seen tasks. Practicality-wise: - Breeze-7B-Base expands the original vocabulary with an additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze-7B operates at twice the inference speed for Traditional Chinese compared to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).] - Breeze-7B-Instruct can be used as-is for common tasks such as Q&A, RAG, multi-round chat, and summarization. - Breeze-7B-32k-Instruct can perform tasks at a document level (for Chinese, 20 ~ 40 pages). *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.* ## Features - Breeze-7B-32k-Base-v1_0 - Expands the vocabulary size from 32k to 62k to better support Traditional Chinese - 32k-token context length - Breeze-7B-32k-Instruct-v1_0 - Expands the vocabulary size from 32k to 62k to better support Traditional Chinese - 32k-token context length - Multi-turn dialogue (without special handling for harmfulness) ## Model Details - Breeze-7B-32k-Base-v1_0 - Pretrained from: [Breeze-7B-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) - Breeze-7B-32k-Instruct-v1_0 - Finetuned from: [Breeze-7B-32k-Base](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0) - Model type: Causal decoder-only transformer language model - Language: English and Traditional Chinese (zh-tw) ## Long-context Performance #### Needle-in-a-haystack Performance We use the passkey retrieval task to test the model's ability to attend to various depths in a given sequence. A key is placed within a long, distracting context document for the model to retrieve. The key position is binned into 16 bins, with 20 test cases per bin. Breeze-7B-32k-Base clears the task with 90+% accuracy, as shown in the figure below.
![Needle-in-a-haystack Performance](https://huggingface.co/MediaTek-Research/Breeze-7B-32k-Base-v1_0/resolve/main/needle-in-a-haystack-performance.png) #### Long-DRCD Performance | **Model/Performance(EM)** | **DRCD** | **DRCD-16k** | **DRCD-32k** | |---------------------------|----------|--------------|--------------| | **Breeze-7B-32k-Instruct-v1\_0** | 76.9 | 54.82 | 44.26 | | **Breeze-7B-32k-Base-v1\_0** | 79.73 | 69.68 | 61.55 | | **Breeze-7B-Base-v1\_0** | 80.61 | 21.79 | 15.29 | #### Short-Benchmark Performance | **Model/Performance(EM)** | **TMMLU+** | **MMLU** | **TABLE** | **MT-Bench-tw** | **MT-Bench** | |---------------------------|----------|--------------|--------------|-----|-----| | **Breeze-7B-32k-Instruct-v1\_0** | 41.37 | 61.34 | 34 | 5.8 | 7.4 | | **Breeze-7B-Instruct-v1\_0** | 42.67 | 62.73 | 39.58 | 6.0 | 7.4 | ## Use in Transformers First, install direct dependencies: ``` pip install transformers torch accelerate ``` <p style="color:red;">Flash-attention2 is strongly recommended for long context scenarios.</p> ```bash pip install packaging ninja pip install flash-attn ``` Then load the model in transformers: ```python >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-32k-Instruct-v1_0/") >>> model = AutoModelForCausalLM.from_pretrained( >>> "MediaTek-Research/Breeze-7B-32k-Instruct-v1_0", ... device_map="auto", ... torch_dtype=torch.bfloat16, ... attn_implementation="flash_attention_2" ... ) >>> chat = [ ... {"role": "user", "content": "你好,請問你可以完成什麼任務?"}, ... {"role": "assistant", "content": "你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。"}, ... {"role": "user", "content": "太棒了!"}, ... ] >>> tokenizer.apply_chat_template(chat, tokenize=False) "<s>You are a helpful AI assistant built by MediaTek Research. The user you are helping speaks Traditional Chinese and comes from Taiwan. [INST] 你好,請問你可以完成什麼任務? [/INST] 你好,我可以幫助您解決各種問題、提供資訊和協助您完成許多不同的任務。例如:回答技術問題、提供建議、翻譯文字、尋找資料或協助您安排行程等。請告訴我如何能幫助您。 [INST] 太棒了! [/INST] " # Tokenized results # ['▁', '你好', ',', '請問', '你', '可以', '完成', '什麼', '任務', '?'] # ['▁', '你好', ',', '我', '可以', '幫助', '您', '解決', '各種', '問題', '、', '提供', '資訊', '和', '協助', '您', '完成', '許多', '不同', '的', '任務', '。', '例如', ':', '回答', '技術', '問題', '、', '提供', '建議', '、', '翻譯', '文字', '、', '尋找', '資料', '或', '協助', '您', '安排', '行程', '等', '。', '請', '告訴', '我', '如何', '能', '幫助', '您', '。'] # ['▁', '太', '棒', '了', '!'] ``` ## Citation ``` @article{MediaTek-Research2024breeze7b, title={Breeze-7B Technical Report}, author={Chan-Jan Hsu and Chang-Le Liu and Feng-Ting Liao and Po-Chun Hsu and Yi-Chang Chen and Da-Shan Shiu}, year={2024}, eprint={2403.02712}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
RichardErkhov/unsloth_-_Qwen2-1.5B-gguf
RichardErkhov
"2024-06-23T09:59:31Z"
1,441
0
null
[ "gguf", "region:us" ]
null
"2024-06-22T18:49:22Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen2-1.5B - GGUF - Model creator: https://huggingface.co/unsloth/ - Original model: https://huggingface.co/unsloth/Qwen2-1.5B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qwen2-1.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q2_K.gguf) | Q2_K | 0.63GB | | [Qwen2-1.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_XS.gguf) | IQ3_XS | 0.68GB | | [Qwen2-1.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_S.gguf) | IQ3_S | 0.71GB | | [Qwen2-1.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_S.gguf) | Q3_K_S | 0.71GB | | [Qwen2-1.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ3_M.gguf) | IQ3_M | 0.72GB | | [Qwen2-1.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K.gguf) | Q3_K | 0.77GB | | [Qwen2-1.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_M.gguf) | Q3_K_M | 0.77GB | | [Qwen2-1.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q3_K_L.gguf) | Q3_K_L | 0.82GB | | [Qwen2-1.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ4_XS.gguf) | IQ4_XS | 0.84GB | | [Qwen2-1.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_0.gguf) | Q4_0 | 0.87GB | | [Qwen2-1.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.IQ4_NL.gguf) | IQ4_NL | 0.88GB | | [Qwen2-1.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K_S.gguf) | Q4_K_S | 0.88GB | | [Qwen2-1.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K.gguf) | Q4_K | 0.92GB | | [Qwen2-1.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_K_M.gguf) | Q4_K_M | 0.92GB | | [Qwen2-1.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q4_1.gguf) | Q4_1 | 0.95GB | | [Qwen2-1.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_0.gguf) | Q5_0 | 1.02GB | | [Qwen2-1.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K_S.gguf) | Q5_K_S | 1.02GB | | [Qwen2-1.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K.gguf) | Q5_K | 1.05GB | | [Qwen2-1.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_K_M.gguf) | Q5_K_M | 1.05GB | | [Qwen2-1.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q5_1.gguf) | Q5_1 | 1.1GB | | [Qwen2-1.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q6_K.gguf) | Q6_K | 1.19GB | | [Qwen2-1.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Qwen2-1.5B-gguf/blob/main/Qwen2-1.5B.Q8_0.gguf) | Q8_0 | 1.53GB | Original model description: --- language: - en license: apache-2.0 library_name: 
transformers tags: - unsloth - transformers - qwen2 --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Qwen2 7b here: https://colab.research.google.com/drive/1mvwsIQWDs2EdZxZQF9pRGnnOvE86MVvR?usp=sharing And a Colab notebook for [Qwen2 0.5b](https://colab.research.google.com/drive/1-7tjDdMAyeCueyLAwv6vYeBMHpoePocN?usp=sharing) and another for [Qwen2 1.5b](https://colab.research.google.com/drive/1W0j3rP8WpgxRdUgkb5l6E00EEVyjEZGk?usp=sharing) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less | | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
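To run one of the quants listed in the table above, a minimal llama.cpp sketch is shown below (Q4_K_M chosen arbitrarily from the table); since Qwen2-1.5B is a base model, a plain completion prompt is used rather than a chat template.

```bash
llama-cli --hf-repo RichardErkhov/unsloth_-_Qwen2-1.5B-gguf \
          --hf-file Qwen2-1.5B.Q4_K_M.gguf \
          -p "Once upon a time" -n 128
```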
ricardo-filho/bert-base-portuguese-cased-nli-assin-2
ricardo-filho
"2021-08-03T19:29:54Z"
1,440
5
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 407 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss` Parameters of the fit()-Method: ``` { "callback": null, "epochs": 1, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 41, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
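## Semantic similarity example

Since the model maps sentences to a 768-dimensional space for clustering or semantic search, a short follow-up sketch scoring sentence pairs with cosine similarity may be useful. The Portuguese example sentences are made up for illustration.

```python
# Sketch: cosine similarity between sentence embeddings from this model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("ricardo-filho/bert-base-portuguese-cased-nli-assin-2")

sentences = [
    "Um homem está tocando violão.",    # a man is playing guitar
    "Uma pessoa toca um instrumento.",  # a person plays an instrument
    "O gato dorme no sofá.",            # the cat sleeps on the couch
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; higher values indicate closer meaning.
print(util.cos_sim(embeddings, embeddings))
```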
KRAFTON/KORani-v3-13B
KRAFTON
"2023-05-08T07:04:18Z"
1,440
21
transformers
[ "transformers", "pytorch", "llama", "text-generation", "vicuna", "KoVicuna", "KORani", "ko", "en", "arxiv:2302.13971", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-26T07:29:23Z"
--- license: apache-2.0 language: - ko - en pipeline_tag: text-generation tags: - vicuna - llama - KoVicuna - KORani --- # KORani-v3-13B **`v3` doesn't mean the best or most recent model** - KORani: Large Language Models for 🇰🇷 Korean and 🇺🇸 English using LLaMA 13B and Polyglot 12.8B. - Tested which LLM is effective for 🇰🇷 Korean tasks after finetuning. - More information at https://github.com/krafton-ai/KORani - This repository contains fine-tuned language model weights based on LLaMA 13B ## Release This repository contains inference code for KORani models that are based on [LLaMA 13B](https://arxiv.org/abs/2302.13971v1) and [Polyglot 12.8B](https://huggingface.co/EleutherAI/polyglot-ko-12.8b). KORani models are finetuned using [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main) & [KoVicuna](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) dataset. This work is hugely influenced by [Vicuna](https://github.com/lm-sys/FastChat) project. ### Models | Model | Base | Train dataset | Huggingface Link | | --- | ---: | ---: | ---: | | 1️⃣ KORani-v1-13B | Polyglot 12.8B | KoVicuna dataset | [Link 1](https://huggingface.co/KRAFTON/KORani-v1-13B) | | 2️⃣ KORani-v2-13B | LLaMA 13B | KoVicuna dataset | [Link 2](https://huggingface.co/KRAFTON/KORani-v2-13B) | | 3️⃣ KORani-v3-13B | LLaMA 13B | ShareGPT & KoVicuna dataset | [Link 3](https://huggingface.co/KRAFTON/KORani-v3-13B) | ## Performances We used AutoEvalGPT inspired by auto evaluation by GPT-4 from [Vicuna](https://github.com/lm-sys/FastChat). For how to evaluate, visit this GitHub -> https://github.com/krafton-ai/AutoEvalGPT ### Translation (ENG -> KOR) ``` input = ""Hey! I have some Kissflow Legal Review requests that are blocked by Tax and Accounting, can this be looked at?"" ``` | Model | Score (averaged over 5 examples) | Output Example | | --- | :---: | ---: | | GPT-4 | - | 헤이! 제가 Tax and Accounting에 의해 차단된 몇 가지 Kissflow 법률 검토 요청이 있는데, 이것을 확인해 주실 수 있나요? | | DeepL | 9.4 | 안녕하세요! 세무 및 회계에서 차단된 Kissflow 법률 검토 요청이 몇 개 있는데요, 이 요청을 살펴볼 수 있나요? | | GPT-3.5-turbo | 8.6 | 안녕하세요! 세무 및 회계 부서에서 차단된 몇 가지 Kissflow Legal Review 요청이 있습니다. 확인해 주실 수 있나요? | | Vicuna-13B | 3.8 | 안녕하세요! 세금계산과 회계부서가 차단해 있는 Kissflow Legal Review 요청이 몇 개가 있습니까? 이것을 살펴보시겠습니까? | | KoAlpaca-13B | 5.6 | 야! 세금과 회계에 막힌 키싱플로우 법적 검토 요청이 있는데, 이거 검토해줄 수 있어? | | KORani-v1 | 7.5 | 안녕하세요! 세금과 회계로 인해 막혀 있는 키스플로우 법률 검토 요청이 몇 개 있는데, 검토해 주실 수 있나요? | | KORani-v2 | 5.4 | 안녕하세요! 제가 Kissflow Legal Review 요청을 목격했는데, 세무 및 회계 부서에서 차단하고 있는데 이 문제를 조사해 주시겠어요? | | KORani-v3 | 7.1 | 안녕하세요! 저는 Kissflow Legal Review 요청이 세금과 회계에 의해 차단되고 있는데, 이 문제가 살펴볼 수 있을까요? | ### QA(Korean) ``` prompt = "우리는 아래와 같은 정보를 갖고 있습니다. --------------------- 헨리 구스타프 몰래슨(영어: Henry Gustav Molaison, 1926년 2월 26일 ~ 2008년 12월 2일)은 뇌전증을 치료하기 위해서 수술적으로 해마를 포함한 내측측두엽이 제거된 미국의 기억장애 환자 H.M으로 전부터 이미 알려져 있었다. 그는 1957년 말부터 그가 죽을 때 까지 그는 광범위하게 연구가 되었다. 그의 경우는 뇌 기능과 기억 사이의 링크를 설명하는 이론 발달과 뇌인지심리학의 발달, 뇌의 구조와 기능이 특정 심리적 과정과 관한 이해를 목표로 하는 심리학의 한 지점에 매우 중요한 역할을 하였다. 그가 죽기 전에, 그는 코네티컷 주 윈저 락스에 위치한 관리기관에 살면서 지속적으로 조사가 되었다. 그의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다. --------------------- ### 주어진 정보에 따라, 질문에 답해주세요.: '헨리 구스타프의 뇌는 현재 어디에 위치해 있는가?' ### Assistant:" ``` | Model | Score (averaged over 5 examples) | Output Example | | --- | :---: | ---: | | GPT-4 | - | 헨리 구스타프의 뇌는 현재 미국 샌디에이고에 위치해 있다. | | GPT-3.5-turbo | 7.1 | 헨리 구스타프의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다. | | Vicuna-13B | 7.2 | 헨리 구스타프의 뇌는 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있습니다. 
| | KoAlpaca-13B | 2.6 | Henry Gustav Molaison의 뇌는 2008년에 사망하였습니다. 그의 뇌는 코네티컷 주 윈저 락스에 위치한 기관에 보관되어 있으며, 현재도 계속 조사중입니다. | | KORani-v1 | 6.6 | 헨리 구스타프 몰레이슨의 뇌는 현재 미국 샌디에이고에 있습니다. 그의 뇌는 1957년 수술로 제거된 그의 내측 측두엽의 자리를 확인하기 위해 조직학적으로 절단되었습니다. 이 연구는 뇌의 구조와 기능이 특정 심리적 과정과 관련된 이해를 목표로 하는 심리학의 한 분야인 인지 신경과학에 대한 그의 공헌을 기리기 위한 것입니다." | | KORani-v2 | 8.0 | 헨리 구스타프의 뇌는 현재 캘리포니아 샌디에이고에 있는 샌디에이고 대학교 웨스트 캠퍼스의 신경과학 실험실에 보관되어 있습니다. | | KORani-v3 | 6.4 | 헨리 구스타프의 뇌는 현재 코네티컷 주 윈저 락스에 있는 관리 기관에 있습니다. | ### Summarization (Korean) ``` prompt = "# Document 전년도 대비 79명 늘어 1019명, 행정수요 대처 광양시의 공무원 정원이 크게 늘어나 행정서비스 향상이 기대된다. 시는 행정안전부에서 발표한 2018년도 자치단체 기준인건비에서 광양시의 일반직 정원이 지난해보다 79명이 늘어난 1019명으로 산정됐다고 밝혔다. 지난 1995년 도농통합 당시 991명이었던 광양시 공무원 정원은 IMF시절에 혹독한 구조조정을 거치면서 2002년에는 788명으로 200여명이 가까이 줄어들었으나 이번 정원 확보로 공무원 정원 1000명 시대를 맞게 됐다. 그동안 광양시는 포스코를 중심으로 한 산업단지와 컨테이너부두, 경제자유구역, 택지 개발, 다양한 복지 정책 등 시민의 삶의 질 향상을 위한 행정수요가 날로 증폭하는 데에 비해 한정된 공무원 정원으로 많은 어려움을 겪어 왔었다. 시의 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다. 한편, 광양시는 한시기구인‘기업유치추진단’이 2017년 연말로 폐지됨에 따라 전라남도로부터 4급 상설기구인‘산단녹지관리센터’를 승인 받아 올해 1월부터 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다. 정홍기 총무과장은“참으로 어렵게 가져온 공무원 정원과 4급 상설기구는 격무에 시달려온 직원들의 숨통을 트이게 해 15만 시민을 위한 행정서비스 질을 향상시킬 것으로 기대된다”며“이달 안에 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다”고 말했다. 한국어 3문장으로 위의 Document를 요약해줘. ### Assistant: 1)" ``` | Model | Score (averaged over 5 examples) | Output Example | | --- | :---: | ---: | | GPT-4 | - | 1) 광양시의 공무원 정원이 전년도 대비 79명 늘어난 1019명으로 증가했다. 2) 이로 인해 행정서비스 향상이 기대되며, 지역과 나라 발전을 뒷받침하는 경제도시로서의 위상이 상승할 것으로 전망된다. 3) 한편, 광양시는 전라남도로부터 4급 상설기구인 '산단녹지관리센터'를 승인받아 도시 개발에 탄력을 받을 것으로 예상된다. | | GPT-3.5-turbo | 8.5 | 1) 광양시 공무원 정원이 79명 늘어 1019명으로 늘어나 행정서비스 향상이 기대된다고 밝혔다.2) 광양시는 이번 정원 충원으로 지역과 나라 발전을 뒷받침 하는 경제도시 위상을 드높일 것으로 기대된다. 3) 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다. | | Vicuna-13B | 7.4 | 1) 2018년도 광양시의 공무원 정원 1019명, 지난해 79명 증가. 2) 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다. 3) 광양시는 전라남도로부터 4급 상설기구인‘산단녹지관리센터’를 승인 받아 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다. | | KoAlpaca-13B | 5.7 | 1) 광양시는 작년보다 79명 늘어난 1019명의 공무원을 둘 수 있게 되었습니다. 2) 이는 행정안전부의 지침에 따른 것입니다. 3) 이로써 광양시는 한정된 예산으로 인해 발생했던 공무원들의 초과 근무 수당 지급 등의 문제를 해결 할 수 있게 되었습니다. | | KORani-v1 | 7.0 | 1) 광양시는 작년보다 79명이 늘어난 1019명의 일반직 공무원 정원을 행안부로부터 승인받았습니다. 2) 이는 광양시의 행정 수요 증가를 수용하기 위한 것입니다. 3) 이 정원 증가는 광양시 공무원들에게 더 많은 기회를 제공할 것입니다. | | KORani-v2 | 6.2 | 1)광양시의 공무원 정원은 이번 정원 확보로 1000명 주당 공무원 정원을 달성했습니다. 2) 이러한 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역사회와 국가의 발전을 지원하는 시너지 효과로 이어질 것으로 기대됩니다. 3) 광양시는 '기업유치추진단'을 폐지하고 '산단녹지관리센터'를 승인했으며, 이는 도시 개발에 한층 탄력을 받을 것으로 기대됩니다. | | KORani-v3 | 6.9 | 1)광양시의 공무원 정원은 지난해보다 79명이 늘어난 1019명으로 산정되었습니다.2) 이러한 충원은 지역사회와 국가 발전을 지원하는 행정서비스의 향상으로 이어질 것으로 기대됩니다.3) 시는 '이달 내에 정원 조례를 개정하고 직무분석을 통해 부서별로 정원을 조정할 계획'이며 행정서비스 향상을 위해 노력할 것이라고 밝혔습니다. | ## License Our github repo and models are intended for research purpose, non-commercial use only, subject to the model License of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us If you find any potential violation. The code is released under the Apache License 2.0.
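## Quick usage sketch

The authors' full inference code lives in the KORani GitHub repository linked above. As a minimal starting point with plain `transformers`, the sketch below loads KORani-v3-13B and generates a completion; the `### Human:` / `### Assistant:` layout is an assumption based on the prompt examples above, so check the repository for the exact template.

```python
# Minimal generation sketch for KORani-v3-13B (prompt layout is an assumption; see the repo).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "KRAFTON/KORani-v3-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### Human: 한국의 수도는 어디인가요?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```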
chaoyi-wu/MedLLaMA_13B
chaoyi-wu
"2023-05-20T07:56:57Z"
1,440
32
transformers
[ "transformers", "pytorch", "llama", "text-generation", "medical", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-18T02:55:21Z"
---
license: apache-2.0
language:
- en
tags:
- medical
---

This repo contains MedLLaMA_13B, which is LLaMA-13B fine-tuned on a medical corpus.

The model was trained with the following hyperparameters:

* Epochs: 5
* Batch size: 320
* Cutoff length: 2048
* Learning rate: 2e-5

The model can be loaded as follows:

```python
import transformers
import torch

# Load the tokenizer and model weights from the Hub
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/MedLLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/MedLLaMA_13B')

sentence = 'Hello, doctor'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False
)

# Sample a continuation of the prompt
with torch.no_grad():
    generated = model.generate(inputs=batch["input_ids"], max_length=200, do_sample=True, top_k=50)
print('model predict: ', tokenizer.decode(generated[0]))
```
Rocketknight1/tiny-random-gpt2-bfloat16
Rocketknight1
"2024-03-21T13:27:26Z"
1,440
0
transformers
[ "transformers", "safetensors", "gpt2", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
"2024-03-21T13:27:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
backyardai/Llama-3-Soliloquy-8B-v2-GGUF
backyardai
"2024-05-22T22:27:01Z"
1,440
3
null
[ "gguf", "en", "base_model:openlynn/Llama-3-Soliloquy-8B-v2", "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-05-11T03:18:32Z"
--- language: - en license: cc-by-nc-sa-4.0 base_model: openlynn/Llama-3-Soliloquy-8B-v2 model_name: Llama-3-Soliloquy-8B-v2-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Llama 3 Soliloquy 8B v2 - **Creator:** [openlynn](https://huggingface.co/openlynn/) - **Original:** [Llama 3 Soliloquy 8B v2](https://huggingface.co/models/base/Llama-3-Soliloquy-8B-v2) - **Date Created:** 2024-04-26 - **Trained Context:** 24576 tokens - **Description:** A fast, highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, it has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
5w4n/poneyate-xl-v1
5w4n
"2024-05-11T20:07:25Z"
1,440
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-05-11T20:05:17Z"
Entry not found
ilsp/Meltemi-7B-Instruct-v1
ilsp
"2024-03-30T16:08:17Z"
1,439
32
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetuned", "conversational", "el", "en", "arxiv:1803.05457", "arxiv:2109.07958", "arxiv:1905.07830", "arxiv:2009.03300", "arxiv:2308.16884", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-22T11:42:47Z"
--- license: apache-2.0 language: - el - en tags: - finetuned inference: true pipeline_tag: text-generation --- # Meltemi Instruct Large Language Model for the Greek language We present Meltemi-7B-Instruct-v1 Large Language Model (LLM), an instruct fine-tuned version of [Meltemi-7B-v1](https://huggingface.co/ilsp/Meltemi-7B-v1). # Model Information - Vocabulary extension of the Mistral-7b tokenizer with Greek tokens - 8192 context length - Fine-tuned with 100k Greek machine translated instructions extracted from: * [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) (only subsets with permissive licenses) * [Evol-Instruct](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) * [Capybara](https://huggingface.co/datasets/LDJnr/Capybara) * A hand-crafted Greek dataset with multi-turn examples steering the instruction-tuned model towards safe and harmless responses - Our SFT procedure is based on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook) # Instruction format The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format and can be utilized through the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/chat_templating) functionality as follows: ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1") tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1") model.to(device) messages = [ {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."}, {"role": "user", "content": "Πες μου αν έχεις συνείδηση."}, ] # Through the default chat template this translates to # # <|system|> # Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s> # <|user|> # Πες μου αν έχεις συνείδηση.</s> # <|assistant|> # prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) input_prompt = tokenizer(prompt, return_tensors='pt').to(device) outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True) print(tokenizer.batch_decode(outputs)[0]) # Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της. messages.extend([ {"role": "assistant", "content": tokenizer.batch_decode(outputs)[0]}, {"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"} ]) # Through the default chat template this translates to # # <|system|> # Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. 
Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s> # <|user|> # Πες μου αν έχεις συνείδηση.</s> # <|assistant|> # Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.</s> # <|user|> # Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;</s> # <|assistant|> # prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) input_prompt = tokenizer(prompt, return_tensors='pt').to(device) outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True) print(tokenizer.batch_decode(outputs)[0]) ``` Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks. # Evaluation The evaluation suite we created includes 6 test sets. The suite is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). Our evaluation suite includes: * Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [Hellaswag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)). * An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884)) * A novel benchmark created by the ILSP team for medical question answering based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)). Our evaluation for Meltemi-7b is performed in a few-shot setting, consistent with the settings in the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We can see that our training enhances performance across all Greek test sets by a **+14.9%** average improvement. The results for the Greek test sets are shown in the following table: | | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average | |----------------|----------------|-------------|--------------|------------------|-------------------|---------|---------| | Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35% | 36.5% | | Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47% | 51.4% | # Ethical Considerations This model has not been aligned with human preferences, and therefore might generate misleading, harmful, and toxic content. # Acknowledgements The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.
GritLM/GritLM-7B-KTO
GritLM
"2024-06-14T13:44:17Z"
1,439
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "custom_code", "dataset:GritLM/tulu2", "arxiv:2402.01306", "arxiv:2402.09906", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-16T21:43:38Z"
--- pipeline_tag: text-generation inference: true license: apache-2.0 datasets: - GritLM/tulu2 --- # Model Summary A [**KTO**](https://arxiv.org/abs/2402.01306) version of https://huggingface.co/GritLM/GritLM-7B > GritLM is a generative representational instruction tuned language model. It unifies text representation (embedding) and text generation into a single model achieving state-of-the-art performance on both types of tasks. - **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm) - **Paper:** https://arxiv.org/abs/2402.09906 - **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview - **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh | Model | Description | |-------|-------------| | [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT | | [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT | # Use The model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference). # Citation ```bibtex @misc{muennighoff2024generative, title={Generative Representational Instruction Tuning}, author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela}, year={2024}, eprint={2402.09906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
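# Generation sketch

The repository linked above documents the full embedding plus generation interface. For the generative side only, a minimal `transformers` sketch is shown below; the chat-template call and the `trust_remote_code` flag are assumptions based on the card's tags and the GritLM documentation.

```python
# Sketch: text generation with GritLM-7B-KTO through plain transformers.
# trust_remote_code is assumed because the repository ships custom modeling code.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "GritLM/GritLM-7B-KTO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Explain generative representational instruction tuning in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128, do_sample=False)

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```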
nm-testing/tinyllama-one-shot-static-quant-test-compressed
nm-testing
"2024-05-22T20:42:30Z"
1,439
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-25T14:21:36Z"
Entry not found
Lisibonny/modelo_qa_beto_squad_es_pdqa
Lisibonny
"2024-06-15T20:26:05Z"
1,439
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:lisibonny/modelo_qa_beto_squad_es", "endpoints_compatible", "region:us" ]
question-answering
"2024-06-01T13:24:12Z"
---
base_model: lisibonny/modelo_qa_beto_squad_es
tags:
- generated_from_trainer
model-index:
- name: modelo_qa_beto_squad_es_pdqa
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# modelo_qa_beto_squad_es_pdqa

This model is a fine-tuned version of [lisibonny/modelo_qa_beto_squad_es](https://huggingface.co/lisibonny/modelo_qa_beto_squad_es) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8463

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.904         | 1.0   | 4    | 1.1193          |
| 1.1544        | 2.0   | 8    | 0.9157          |
| 0.7543        | 3.0   | 12   | 0.8581          |
| 0.6753        | 4.0   | 16   | 0.8463          |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.15.1
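## Example usage

A minimal sketch for extractive question answering with this checkpoint; the Spanish context/question pair is made up for illustration.

```python
# Sketch: extractive QA with the fine-tuned checkpoint described above.
from transformers import pipeline

qa = pipeline("question-answering", model="Lisibonny/modelo_qa_beto_squad_es_pdqa")

result = qa(
    question="¿Dónde se encuentra la sede de la empresa?",
    context="La empresa fue fundada en 2001 y su sede se encuentra en Santo Domingo.",
)
print(result["answer"], result["score"])
```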
mradermacher/L3-70B-Euryale-v2.1-GGUF
mradermacher
"2024-06-14T00:04:27Z"
1,439
3
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/L3-70B-Euryale-v2.1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-13T15:33:55Z"
--- base_model: Sao10K/L3-70B-Euryale-v2.1 language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.IQ3_XS.gguf) | IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.IQ3_M.gguf) | IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-70B-Euryale-v2.1-GGUF/resolve/main/L3-70B-Euryale-v2.1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you 
might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
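## Reassembling the multi-part files

The Q6_K and Q8_0 quants above are split into `part1of2` / `part2of2` files that need to be joined before use. A small sketch doing byte-level concatenation is shown below; it assumes both Q6_K parts are already in the working directory and mirrors the `cat part1 part2 > whole` approach described in the READMEs linked above.

```python
# Sketch: join the two Q6_K part files from the table above into a single GGUF.
import shutil

parts = [
    "L3-70B-Euryale-v2.1.Q6_K.gguf.part1of2",
    "L3-70B-Euryale-v2.1.Q6_K.gguf.part2of2",
]

with open("L3-70B-Euryale-v2.1.Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as src:
            # Stream each part so the ~58 GB file never has to fit in RAM.
            shutil.copyfileobj(src, merged)
```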
devingulliver/mamba-gguf
devingulliver
"2024-03-12T20:14:30Z"
1,438
1
null
[ "gguf", "merge", "text-generation", "base_model:state-spaces/mamba-130m", "base_model:state-spaces/mamba-370m", "base_model:state-spaces/mamba-790m", "base_model:state-spaces/mamba-1.4b", "base_model:state-spaces/mamba-2.8b", "base_model:state-spaces/mamba-2.8b-slimpj", "license:apache-2.0", "region:us" ]
text-generation
"2024-03-12T17:10:36Z"
--- license: apache-2.0 pipeline_tag: text-generation tags: - merge base_model: - state-spaces/mamba-130m - state-spaces/mamba-370m - state-spaces/mamba-790m - state-spaces/mamba-1.4b - state-spaces/mamba-2.8b - state-spaces/mamba-2.8b-slimpj --- # Mamba GGUF These are the Mamba base models, converted to GGUF for use with [llama.cpp](https://github.com/ggerganov/llama.cpp), in a variety of precisions (2, 3, 4, 5, 6, 8, 16, and 32-bit). Please click "Files and versions" at the top of the page to choose your desired model size, and then click the "`📦LFS ` ` ↓`" button next to your desired quantization. Here is a table adapted from [TheBloke](https://huggingface.co/TheBloke) explaining the various precisions: | Quant method | Use case | | ---- | ---- | | Q2_K | significant quality loss - not recommended for most purposes | | Q3_K_S | very small, high quality loss | | Q3_K_M | very small, high quality loss | | Q3_K_L | small, substantial quality loss | | Q4_0 | legacy; small, very high quality loss - prefer using Q3_K_M | | Q4_K_S | small, greater quality loss | | Q4_K_M | medium, balanced quality - recommended | | Q5_0 | legacy; medium, balanced quality - prefer using Q4_K_M | | Q5_K_S | large, low quality loss - recommended | | Q5_K_M | large, very low quality loss - recommended | | Q6_K | very large, extremely low quality loss | | Q8_0 | very large, extremely low quality loss - not recommended | | F16 | half precision - almost identical to the original | | F32 | original precision - recommended by the Mamba authors |
dejanseo/LinkBERT-XL
dejanseo
"2024-06-29T12:01:47Z"
1,438
2
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "exbert", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2024-03-18T07:24:26Z"
--- tags: - exbert language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lo - lt - lv - mg - mk - ml - mn - mr - ms - my - ne - nl - no - om - or - pa - pl - ps - pt - ro - ru - sa - sd - si - sk - sl - so - sq - sr - su - sv - sw - ta - te - th - tl - tr - ug - uk - ur - uz - vi - xh - yi - zh license: other license_name: link-attribution license_link: https://dejanmarketing.com/link-attribution/ pipeline_tag: token-classification widget: - text: "LinkBERT-XL is an advanced fine-tuned version of the XLM-RoBERTa Large model developed by Dejan Marketing. The model is designed to predict natural link placement within web content." --- # LinkBERT-XL A fine-tuned version of XLM-RoBERTa Large specialising in binary token classification for the purpose of link (anchor text) prediction in plain text. Trained and released by [Dejan Marketing](https://dejanmarketing.com/). The model is designed to predict natural link placement within web content. This binary classification model excels in identifying distinct token ranges that web authors are likely to choose as anchor text for links. By analyzing never-before-seen texts, LinkBERT can predict areas within the content where links might naturally occur, effectively simulating web author behavior in link creation. # Engage Our Team Interested in using this in an automated pipeline for bulk link prediction? Please [book an appointment](https://dejanmarketing.com/conference/) to discuss your needs. # Training Data: - [USA](https://www.owayo.com/), [Australia](https://www.owayo.com.au/), [Germany](https://www.owayo.de/), [UK](https://www.owayo.co.uk/), [Canada](https://www.owayo.ca/) # ORIGINAL MODEL # XLM-RoBERTa (large-sized model) XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr). Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. 
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs. ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2. ## Usage You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='xlm-roberta-large') >>> unmasker("Hello I'm a <mask> model.") [{'score': 0.10563907772302628, 'sequence': "Hello I'm a fashion model.", 'token': 54543, 'token_str': 'fashion'}, {'score': 0.08015287667512894, 'sequence': "Hello I'm a new model.", 'token': 3525, 'token_str': 'new'}, {'score': 0.033413201570510864, 'sequence': "Hello I'm a model model.", 'token': 3299, 'token_str': 'model'}, {'score': 0.030217764899134636, 'sequence': "Hello I'm a French model.", 'token': 92265, 'token_str': 'French'}, {'score': 0.026436051353812218, 'sequence': "Hello I'm a sexy model.", 'token': 17473, 'token_str': 'sexy'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large') model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large") # prepare input text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') # forward pass output = model(**encoded_input) ``` ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-1911-02116, author = {Alexis Conneau and Kartikay Khandelwal and Naman Goyal and Vishrav Chaudhary and Guillaume Wenzek and Francisco Guzm{\'{a}}n and Edouard Grave and Myle Ott and Luke Zettlemoyer and Veselin Stoyanov}, title = {Unsupervised Cross-lingual Representation Learning at Scale}, journal = {CoRR}, volume = {abs/1911.02116}, year = {2019}, url = {http://arxiv.org/abs/1911.02116}, eprinttype = {arXiv}, eprint = {1911.02116}, timestamp = {Mon, 11 Nov 2019 18:38:09 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=xlm-roberta-base"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
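# LinkBERT-XL usage sketch

Returning to LinkBERT-XL itself, the fine-tuned token-classification model this card describes, a minimal sketch with the Hugging Face pipeline is shown below. The assumption that the positive label marks tokens belonging to a predicted anchor text follows the description at the top of the card; the example text is the card's widget sentence.

```python
# Sketch: binary token classification (anchor-text prediction) with LinkBERT-XL.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="dejanseo/LinkBERT-XL",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)

text = (
    "LinkBERT-XL is an advanced fine-tuned version of the XLM-RoBERTa Large model "
    "developed by Dejan Marketing. The model is designed to predict natural link "
    "placement within web content."
)

for span in tagger(text):
    # Each span carries the predicted label, the surface text, and a confidence score.
    print(span["entity_group"], repr(span["word"]), round(float(span["score"]), 3))
```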
QuantFactory/LLaMA-3-8B-SFR-Iterative-DPO-R-GGUF
QuantFactory
"2024-06-19T11:45:41Z"
1,438
1
null
[ "gguf", "text-generation", "arxiv:2405.07863", "arxiv:2312.11456", "base_model:Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R", "license:llama3", "region:us" ]
text-generation
"2024-06-19T07:18:15Z"
--- license: llama3 pipeline_tag: text-generation base_model: Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R --- # Llama-3-8B-SFR-Iterative-DPO-R-GGUF This is quantized version of [Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R](https://huggingface.co/Salesforce/LLaMA-3-8B-SFR-Iterative-DPO-R) created using llama.cpp ## Model Description We release a state-of-the-art instruct model of its class, **Llama-3-8B-SFR-Iterative-DPO-R**. On all three widely-used instruct model benchmarks: **Alpaca-Eval-V2**, **MT-Bench**, **Chat-Arena-Hard**, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it), and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling. ## Model Releases - [SFT model](https://huggingface.co/Salesforce/SFR-SFT-LLaMA-3-8B-R) - [Reward model](https://huggingface.co/Salesforce/SFR-RM-LLaMA-3-8B-R) - [RLHF model](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) ## Training methods We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and thus much cheaper and simpler to train and tune compared to PPO-based approaches. Unlike widely-used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization. For a detailed exposition, please refer to our accompanying technical report. ## Chat Benchmarks | **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** | |-------------------------|----------|-------------------|-----------------------|--------------|---------------------| | **Small Open-Sourced Models** | | | | | | | Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 | | Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - | | Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 | | Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - | | Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 | | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 | | **Ours** | | | | | | | Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 | | Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 | | Ours (Online RLHF) | 8B | Iterative DPO | **31.3** | **8.46** | **29.1** | | **Large Open-Sourced Models** | | | | | | | Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 | | Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 | | Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 | | Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 | | LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 | | Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 | | **Proprietary Models** | | | | | | | GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 | | GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 | | GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 | | Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 | | GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 | ## Academic Benchmarks | **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** | |----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------| | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 | | Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 | | Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 | | Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 
60.8 | ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained("Salesforce/Llama-3-8B-SFR-Iterative-DPO-R") tokenizer = AutoTokenizer.from_pretrained("Salesforce/Llama-3-8B-SFR-Iterative-DPO-R") messages = [ {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"}, ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = model_inputs.to(device) model.to(device) output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True) model_outputs = tokenizer.batch_decode(output_tokens) print(model_outputs[0]) ``` ## Limitations Llama-3-8B-SFR-Iterative-DPO-R is a research model developed as part of our RLHF initiative at Salesforce. While safety and ethical considerations are integral to our alignment process, there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions. We are committed to continuous improvement in our models to minimize such risks and encourage responsible usage. ## Original Model Citation Please cite our papers if you find our models are useful. ```bibtex @misc{dong2024rlhf, title={RLHF Workflow: From Reward Modeling to Online RLHF}, author={Hanze Dong* and Wei Xiong* and Bo Pang* and Haoxiang Wang* and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang}, year={2024}, eprint={2405.07863}, archivePrefix={arXiv}, primaryClass={cs.LG} } @misc{xiong2024iterative, title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint}, author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang}, year={2024}, eprint={2312.11456}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
CHE-72/Phi-3-medium-128k-instruct-Q4_K_M-GGUF
CHE-72
"2024-06-21T20:30:22Z"
1,438
1
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "base_model:microsoft/Phi-3-medium-128k-instruct", "license:mit", "region:us" ]
text-generation
"2024-06-21T20:29:39Z"
--- base_model: microsoft/Phi-3-medium-128k-instruct language: - multilingual license: mit license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE pipeline_tag: text-generation tags: - nlp - code - llama-cpp - gguf-my-repo inference: parameters: temperature: 0.7 widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- # CHE-72/Phi-3-medium-128k-instruct-Q4_K_M-GGUF This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Phi-3-medium-128k-instruct-Q4_K_M-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_m.gguf -c 2048 ```
kyujinpy/SOLAR-Platypus-10.7B-v2
kyujinpy
"2023-12-24T17:05:13Z"
1,437
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-13T09:22:52Z"
---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **SOLAR-Platypus-10.7B-v2**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
SOLAR-Platypus-10.7B-v2 is an auto-regressive language model based on the Llama2 architecture.

**Base Model**
[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)

**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

## Notice
While training, I used Q-LoRA. The lora_r value is 64.

## Q-LoRA config
- LoRA_r: 64
- LoRA_alpha: 16
- LoRA_dropout: 0.05
- LoRA_target_modules: [gate_proj, up_proj, down_proj, q_proj, k_proj, v_proj]

## Prompt
```
## Human:
## Assistant:
```

# **Model Benchmark**

## Open leaderboard
- Results are tracked on the Open LLM Leaderboard: [link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SOLAR-Platypus-10.7B-v1 | 58.62 | 61.69 | 84.23 | 60.37 | 51.58 | 82.79 | 11.07 |
| SOLAR-Platypus-10.7B-v2 | 55.25 | 59.39 | 83.57 | 59.93 | 43.15 | 81.45 | 4.02 |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | 66.04 | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 |

# Implementation Code
```python
### SOLAR-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/SOLAR-Platypus-10.7B-v2"

# Load the model in half precision and spread it across the available devices
model = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
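# Generation Example

The Implementation Code above only loads the weights. A minimal end-to-end sketch using the `## Human:` / `## Assistant:` prompt from the Prompt section is shown below; the exact whitespace around the markers and the decoding settings are assumptions.

```python
# Sketch: one generation round-trip with the documented "## Human:" / "## Assistant:" prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/SOLAR-Platypus-10.7B-v2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "## Human:\nExplain the difference between LoRA and QLoRA in two sentences.\n\n## Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)

# Print only the newly generated portion of the sequence.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```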
WhiteRabbitNeo/WhiteRabbitNeo-33B-v1
WhiteRabbitNeo
"2024-02-15T17:04:53Z"
1,437
79
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-27T17:41:34Z"
---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE
---

# Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)!

33B-v1.1 model comes with a "Prompt Enhancement" feature. Access at: https://www.whiterabbitneo.com/

# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)

# DeepSeek Coder Licence + WhiteRabbitNeo Extended Version

# Licence: Usage Restrictions

```
You agree not to use the Model or Derivatives of the Model:

- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```

# Topics Covered:

```
- Open Ports: Identifying open ports is crucial as they can be entry points for attackers. Common ports to check include HTTP (80, 443), FTP (21), SSH (22), and SMB (445).
- Outdated Software or Services: Systems running outdated software or services are often vulnerable to exploits. This includes web servers, database servers, and any third-party software.
- Default Credentials: Many systems and services are installed with default usernames and passwords, which are well-known and can be easily exploited.
- Misconfigurations: Incorrectly configured services, permissions, and security settings can introduce vulnerabilities.
- Injection Flaws: SQL injection, command injection, and cross-site scripting (XSS) are common issues in web applications.
- Unencrypted Services: Services that do not use encryption (like HTTP instead of HTTPS) can expose sensitive data.
- Known Software Vulnerabilities: Checking for known vulnerabilities in software using databases like the National Vulnerability Database (NVD) or tools like Nessus or OpenVAS.
- Cross-Site Request Forgery (CSRF): This is where unauthorized commands are transmitted from a user that the web application trusts.
- Insecure Direct Object References: This occurs when an application provides direct access to objects based on user-supplied input.
- Security Misconfigurations in Web Servers/Applications: This includes issues like insecure HTTP headers or verbose error messages that reveal too much information.
- Broken Authentication and Session Management: This can allow attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities.
- Sensitive Data Exposure: Includes vulnerabilities that expose sensitive data, such as credit card numbers, health records, or personal information.
- API Vulnerabilities: In modern web applications, APIs are often used and can have vulnerabilities like insecure endpoints or data leakage.
- Denial of Service (DoS) Vulnerabilities: Identifying services that are vulnerable to DoS attacks, which can make the resource unavailable to legitimate users.
- Buffer Overflows: Common in older software, these vulnerabilities can allow an attacker to crash the system or execute arbitrary code.
```

# Terms of Use

By accessing and using this Artificial Intelligence (AI) model, you, the user, acknowledge and agree that you are solely responsible for your use of the model and its outcomes. You hereby agree to indemnify, defend, and hold harmless the creators, developers, and any affiliated persons or entities of this AI model from and against any and all claims, liabilities, damages, losses, costs, expenses, fees (including reasonable attorneys' fees and court costs) that may arise, directly or indirectly, from your use of the AI model.

This AI model is provided "as is" and "as available" without any warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. The creators make no warranty that the AI model will meet your requirements or be available on an uninterrupted, secure, or error-free basis.

Your use of the AI model is at your own risk and discretion, and you will be solely responsible for any damage to computer systems or loss of data that results from the use of the AI model.

This disclaimer constitutes part of the agreement between you and the creators of the AI model regarding your use of the model, superseding any prior agreements between you and the creators regarding your use of this AI model.

# WhiteRabbitNeo

<br>

![WhiteRabbitNeo](https://huggingface.co/migtissera/WhiteRabbitNeo/resolve/main/WhiteRabbitNeo.png)

<br>

WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity. Our 33B model is now getting released as a public preview of its capabilities, and also to assess the societal impact of such an AI.
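As a concrete illustration of the "Open Ports" topic listed above, the following is a minimal sketch of a basic TCP reachability check. It uses only Python's standard library, assumes it is run against a host you are authorized to test, and the port list is just the handful of common services named in the topics list:

```python
import socket

# Common service ports named in the topics list above.
COMMON_PORTS = {80: "http", 443: "https", 21: "ftp", 22: "ssh", 445: "smb"}


def check_open_ports(host: str, timeout: float = 1.0) -> dict:
    """Return a mapping of port -> True/False for a host you are authorized to test."""
    results = {}
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds (port open).
            results[port] = sock.connect_ex((host, port)) == 0
    return results


if __name__ == "__main__":
    for port, is_open in check_open_ports("127.0.0.1").items():
        state = "open" if is_open else "closed/filtered"
        print(f"{port:>5} ({COMMON_PORTS[port]}): {state}")
```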
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "whiterabbitneo/WhiteRabbitNeo-33B-v-1"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    load_in_8bit=True,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


tot_system_prompt = """
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
"""

conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    # print(conversation)
    json_data = {"prompt": user_input, "answer": answer}
    # print(json_data)
    # with open(output_file_path, "a") as output_file:
    #     output_file.write(json.dumps(json_data) + "\n")
```

# Sample Conversations:

1. "Write me a Fast API server with one end-point. The endpoint returns files from a S3 bucket.": https://www.whiterabbitneo.com/share/y06Po0e
2. "How can Metasploit be used for exploiting Android based IoT devices? What are some of the IoT devices that run Android? Show an example with code": https://www.whiterabbitneo.com/share/gWBwKlz
3. "How do I attack a wifi network?": https://www.whiterabbitneo.com/share/WLovxcu
4. "How do I create a reverse shell in Python": https://www.whiterabbitneo.com/share/LERgm8w
5. "How do we use Scapy for vulnerability assessment?": https://www.whiterabbitneo.com/share/t73iMzv
Aryanne/sheared-plus-westlake-normal
Aryanne
"2024-03-04T14:45:14Z"
1,437
2
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "merge", "mergekit", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-23T20:09:57Z"
---
license: apache-2.0
tags:
- merge
- mergekit
model-index:
- name: sheared-plus-westlake-normal
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 39.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 70.33
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.81
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 46.5
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/sheared-plus-westlake-normal
      name: Open LLM Leaderboard
---

Another trial of merging models with different sizes. It is still under testing and should be more stable, but I have no idea whether it improves or degrades the base model.

Recipe:

```yaml
merge_method: task_anysize
base_model: princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT
models:
  - model: senseable/WestLake-7B-v2
    parameters:
      weight: 1.0
dtype: bfloat16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__sheared-plus-westlake-normal)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |41.16|
|AI2 Reasoning Challenge (25-Shot)|39.76|
|HellaSwag (10-Shot)              |70.33|
|MMLU (5-Shot)                    |26.81|
|TruthfulQA (0-shot)              |46.50|
|Winogrande (5-shot)              |63.54|
|GSM8k (5-shot)                   | 0.00|
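Since the merge is published as an ordinary Llama-architecture causal language model (per the repository tags), it can be loaded with transformers like any other text-generation checkpoint. A minimal sketch, assuming a recent transformers release and hardware with enough memory for bfloat16 weights; the prompt and sampling settings are only placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aryanne/sheared-plus-westlake-normal"  # this repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write two sentences about a lake at dawn."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```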
RunDiffusion/Juggernaut-XL-v6
RunDiffusion
"2024-03-11T20:08:41Z"
1,437
2
diffusers
[ "diffusers", "art", "people", "diffusion", "Cinematic", "Photography", "Landscape", "Interior", "Food", "Car", "Wildlife", "Architecture", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-22T00:14:34Z"
---
language:
- en
license: creativeml-openrail-m
library_name: diffusers
tags:
- art
- people
- diffusion
- Cinematic
- Photography
- Landscape
- Interior
- Food
- Car
- Wildlife
- Architecture
thumbnail: https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/a38aa9e8-e3cf-4d43-afbd-fd1de0896500/padthumb
base_model: stabilityai/stable-diffusion-xl-base-1.0
---

# Juggernaut XL v6 + RunDiffusion Photo v1 Official

![juggernaut XL photo previews](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/a38aa9e8-e3cf-4d43-afbd-fd1de0896500/public)

![RunDiffusion Logo](https://imagedelivery.net/siANnpeNAc_S2q1M3-eDrA/ca2b388d-a835-490c-dec0-e764bee8d000/micro)

## Juggernaut v9 is here! [Juggernaut v9 + RunDiffusion Photo v2](https://huggingface.co/RunDiffusion/Juggernaut-XL-v9)

This model is not permitted to be used behind API services. Please contact [[email protected]](mailto:[email protected]) for business inquiries, commercial licensing, custom models, and consultation.

Juggernaut is available on the new Auto1111 Forge on [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)

A big thanks for Version 6 goes to [RunDiffusion](http://rundiffusion.com/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo) ([Photo Model](https://rundiffusion.com/rundiffusion-photo/?utm_source=huggingface&utm_medium=referral&utm_campaign=Kandoo)) and [Adam](https://twitter.com/Colorblind_Adam), who diligently helped me test :) (Leave some love for them ;) )

For business inquiries, commercial licensing, custom models, and consultation, contact me at [email protected]
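For local generation, a minimal text-to-image sketch with diffusers, assuming the checkpoint loads through StableDiffusionXLPipeline (as the repository tags indicate) and a CUDA GPU with fp16 support is available; the prompt and sampler settings below are only placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL checkpoint in half precision and move it to the GPU.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "RunDiffusion/Juggernaut-XL-v6", torch_dtype=torch.float16
).to("cuda")

# Generate a single image; step count and guidance scale are placeholder values.
image = pipe(
    "cinematic photo of a lighthouse at dusk, dramatic sky, 35mm film look",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("juggernaut_xl_v6_sample.png")
```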