modelId: string (length 5–122)
author: string (length 2–42)
last_modified: timestamp[us, tz=UTC]
downloads: int64 (0–738M)
likes: int64 (0–11k)
library_name: string (245 classes)
tags: sequence (length 1–4.05k)
pipeline_tag: string (48 classes)
createdAt: timestamp[us, tz=UTC]
card: string (length 1–901k)
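The rows below follow this schema. As a hedged illustration that is not part of the original dump (the dataset repository ID is an assumption), records with these columns can be loaded and filtered with the `datasets` library:

```python
# Sketch only: the repository ID below is an assumption; substitute the actual dataset.
from datasets import load_dataset

ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Keep popular text-generation models, mirroring the columns described above.
popular = ds.filter(lambda row: row["downloads"] > 1000 and row["pipeline_tag"] == "text-generation")
for row in popular.select(range(min(5, len(popular)))):
    print(row["modelId"], row["downloads"], row["likes"])
```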
ChrisWilson011016/5EP72xd2fvk9gShtQoNKsXdH54pWMotosaF39PojCcfAoZRj_vgg
ChrisWilson011016
2024-03-04T18:49:56Z
1,285
0
keras
[ "keras", "region:us" ]
null
2024-02-24T15:06:18Z
Entry not found
ChrisWilson011016/5FPFcTEy9qQeZViNMAB15NzPMWFYkhDMHxv4dJYKqo73ZNRV_vgg
ChrisWilson011016
2024-03-04T18:53:25Z
1,285
0
keras
[ "keras", "region:us" ]
null
2024-02-24T15:16:07Z
Entry not found
JJhooww/Mistral-7B-v0.2-Base_ptbr
JJhooww
2024-05-21T14:34:50Z
1,285
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "pt", "dataset:nicholasKluge/Pt-Corpus", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-11T15:07:22Z
--- license: apache-2.0 datasets: - nicholasKluge/Pt-Corpus language: - pt --- This is a base model pre-trained on roughly 1B Portuguese tokens, initialized from the official model weights. It is not instruction-tuned, so it needs fine-tuning before it will follow instructions.

| | Mistral Base PTBR | Mistral Base | Improvement |
|------------------------------|-------------------|--------------|-------------|
| assin2_rte | 90.11 | 87.74 | 2.37 |
| assin2_sts | 72.51 | 67.05 | 5.46 |
| bluex | 53.97 | 53.27 | 0.70 |
| enem | 64.94 | 62.42 | 2.52 |
| faquad_nli | 69.04 | 47.63 | 21.41 |
| hatebr_offensive_binary | 79.62 | 77.63 | 1.99 |
| oab_exams | 45.42 | 45.24 | 0.18 |
| portuguese_hate_speech_binary | 58.52 | 55.72 | 2.80 |
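Since the card describes a base (non-instruct) model, a minimal completion-style loading sketch with `transformers` might look like the following; this is not part of the original card, and the dtype/device settings are assumptions to adjust for your hardware:

```python
# Hedged sketch: plain next-token completion with the base model named above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JJhooww/Mistral-7B-v0.2-Base_ptbr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# A base model continues text; it does not follow instructions without fine-tuning.
prompt = "A inteligência artificial é"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```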
duyntnet/MagicPrompt-Stable-Diffusion-imatrix-GGUF
duyntnet
2024-05-03T04:39:32Z
1,285
1
transformers
[ "transformers", "gguf", "imatrix", "MagicPrompt-Stable-Diffusion", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-05-03T04:34:25Z
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - MagicPrompt-Stable-Diffusion --- Quantizations of https://huggingface.co/Gustavosta/MagicPrompt-Stable-Diffusion # From original readme This is a model from the MagicPrompt series of models, which are [GPT-2](https://huggingface.co/gpt2) models intended to generate prompt texts for imaging AIs, in this case: [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion). ## 🖼️ Here's an example: <img src="https://files.catbox.moe/ac3jq7.png"> This model was trained with 150,000 steps and a set of about 80,000 data points filtered and extracted from the image finder for Stable Diffusion: "[Lexica.art](https://lexica.art/)". It was a little difficult to extract the data, since the search engine does not yet have a public API that is not protected by Cloudflare, but if you want to take a look at the original dataset, you can have a look here: [datasets/Gustavosta/Stable-Diffusion-Prompts](https://huggingface.co/datasets/Gustavosta/Stable-Diffusion-Prompts). If you want to test the model with a demo, you can go to: "[spaces/Gustavosta/MagicPrompt-Stable-Diffusion](https://huggingface.co/spaces/Gustavosta/MagicPrompt-Stable-Diffusion)". ## 💻 You can see other MagicPrompt models: - For Dall-E 2: [Gustavosta/MagicPrompt-Dalle](https://huggingface.co/Gustavosta/MagicPrompt-Dalle) - For Midjourney: [Gustavosta/MagicPrompt-Midjourney](https://huggingface.co/Gustavosta/MagicPrompt-Midjourney) **[⚠️ In progress]** - MagicPrompt full: [Gustavosta/MagicPrompt](https://huggingface.co/Gustavosta/MagicPrompt) **[⚠️ In progress]**
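As a usage illustration that is not part of the original card, one of the provided GGUF quantisations could be run with `llama-cpp-python`; the quantisation filename pattern below is an assumption, so check the repository's file list:

```python
# Hedged sketch: generate a Stable Diffusion prompt from a GGUF quantisation.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="duyntnet/MagicPrompt-Stable-Diffusion-imatrix-GGUF",
    filename="*Q4_K_M.gguf",  # assumed filename pattern; pick an actual file from the repo
    n_ctx=512,
)
out = llm("a portrait of a wizard", max_tokens=60, temperature=0.9)
print(out["choices"][0]["text"])
```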
seeweb/SeewebLLM-it
seeweb
2023-08-22T06:46:48Z
1,284
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "it", "dataset:seeweb/Seeweb-it-292-forLLM", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-18T14:14:51Z
--- license: llama2 datasets: - seeweb/Seeweb-it-292-forLLM language: - it --- # Model Card for seeweb/SeewebLLM-it <!-- Provide a quick summary of what the model is/does. --> The model is a fine-tuned version of [LLama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) specialized in Italian. <!-- Provide a longer summary of what this model is. --> <!-- **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] --> - **Backbone Model**: [LLama2](https://github.com/facebookresearch/llama/tree/main) - **Language(s) :** Italian - **Finetuned from model: [LLama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)** - **Contributors**: [Lorenzo Rocchi](https://huggingface.co/itsrocchi) @ [Seeweb](https://www.seeweb.it/) <!-- ### Model Sources [optional] Provide the basic links for the model. - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] --> ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> The model may not produce 100% correct output sentences. ### Training script The following repository contains scripts and instructions used for the finetuning and testing: **[https://github.com/itsrocchi/finetuning-llama2-ita.git](https://github.com/itsrocchi/finetuning-llama2-ita.git)** ### Inference and comparison with Llama 2 Here's a small Python snippet to perform inference: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer tokenizer = AutoTokenizer.from_pretrained("itsrocchi/SeewebLLM-it-ver2") model = AutoModelForCausalLM.from_pretrained( "itsrocchi/SeewebLLM-it-ver2", device_map="auto", torch_dtype=torch.float16, load_in_8bit=True, rope_scaling={"type": "dynamic", "factor": 2} ) # optionally, point the model and tokenizer arguments # to the absolute path of a local copy of the model prompt = "### User:\nDescrivi cos' è l'intelligenza artificiale\n\n### Assistant:\n" # edit the text between "User" and "Assistant" to customize the prompt inputs = tokenizer(prompt, return_tensors="pt").to(model.device) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # max_new_tokens must be a finite integer output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=512) output_text = tokenizer.decode(output[0], skip_special_tokens=True) ``` #### Output finetuned model: ``` L'intelligenza artificiale è un campo di studio che si occupa di creare sistemi che possono imparare, raffinare e prendere decisioni come gli esseri umani. Questi sistemi possono essere utilizzati in molte applicazioni, come la robotica, la visione artificiale, il machine learning e l'elaborazione del linguaggio naturale. ``` #### Output LLama 2: ``` Certainly, I'd be happy to help you understand what artificial intelligence (AI) is! Artificial intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and perception. AI systems use algorithms and machine learning techniques to analyze data, make predictions, and take actions based on that data. There are several types of AI, including: Narrow or weak AI: This type of AI is designed to perform a specific task, such as playing chess or recognizing faces. 
Narrow AI is the most common form of AI and is used in many applications, such as virtual assistants, language translation, and image recognition. General or strong AI: This type of AI is designed to perform any intellectual task that a human can. General AI has the potential to revolutionize many industries, including healthcare, finance, and education. Superintelligence: This type of AI is significantly more intelligent than the best human minds. Superintelligence could potentially solve complex problems that are currently unsolvable, but it also raises concerns about safety and control. Some examples of AI include: Self-driving cars: These vehicles use a combination of sensors and machine learning algorithms to navigate roads and avoid obstacles. Virtual assistants: These AI-powered tools, such as Amazon's Alexa and Google Assistant, can perform tasks such as scheduling appointments, setting reminders, and controlling smart home devices. Language translation: AI systems can translate text and speech in real-time, allowing for more effective communication between people who speak different languages. It's important to note that AI is not a single entity, but rather a rapidly evolving field that involves many different techniques and technologies. As AI continues to advance, it has the potential to transform many aspects of our lives, from the way we work and interact with each other to the way we make decisions and solve problems. ``` #### Conclusion LLama 2 can understand the question and gives the user a very specific and overall a better answer compared to the one given by the fine-tuned model. However the fine-tuned model answers with a sentence written in a perfect italian, which is what we were trying to achieve with this fine-tuning process. ### Training Data and Details The dataset used is [seeweb/Seeweb-it-292-forLLM](https://huggingface.co/datasets/seeweb/Seeweb-it-292-forLLM), a dataset containing approx. 300 italian prompt-answer conversations. The training has been made on RTX A6000, inside [Seeweb's Cloud Server GPU](https://www.seeweb.it/en/products/cloud-server-gpu) ### What next? The model must be improved: a much bigger dataset needs to be created so that the model can learn many more ways to answer.
TheBloke/em_german_7b_v01-GGUF
TheBloke
2023-10-02T00:14:24Z
1,284
8
transformers
[ "transformers", "gguf", "llama", "facebook", "meta", "pytorch", "llama-2", "german", "deutsch", "text-generation", "de", "base_model:jphme/em_german_7b_v01", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-10-02T00:07:50Z
--- base_model: jphme/em_german_7b_v01 inference: false language: - de library_name: transformers license: llama2 model_creator: Jan Philipp Harries model_name: EM German 7B v01 model_type: llama pipeline_tag: text-generation prompt_template: 'Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke tags: - facebook - meta - pytorch - llama - llama-2 - german - deutsch --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # EM German 7B v01 - GGUF - Model creator: [Jan Philipp Harries](https://huggingface.co/jphme) - Original model: [EM German 7B v01](https://huggingface.co/jphme/em_german_7b_v01) <!-- description start --> ## Description This repo contains GGUF format model files for [Jan Philipp Harries's EM German 7B v01](https://huggingface.co/jphme/em_german_7b_v01). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) * [Jan Philipp Harries's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jphme/em_german_7b_v01) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: EmGerman ``` Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [em_german_7b_v01.Q2_K.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [em_german_7b_v01.Q3_K_S.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [em_german_7b_v01.Q3_K_M.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [em_german_7b_v01.Q3_K_L.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [em_german_7b_v01.Q4_0.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [em_german_7b_v01.Q4_K_S.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [em_german_7b_v01.Q4_K_M.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [em_german_7b_v01.Q5_0.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [em_german_7b_v01.Q5_K_S.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [em_german_7b_v01.Q5_K_M.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [em_german_7b_v01.Q6_K.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [em_german_7b_v01.Q8_0.gguf](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF/blob/main/em_german_7b_v01.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/em_german_7b_v01-GGUF and below it, a specific filename to download, such as: em_german_7b_v01.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/em_german_7b_v01-GGUF em_german_7b_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/em_german_7b_v01-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/em_german_7b_v01-GGUF em_german_7b_v01.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m em_german_7b_v01.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Du bist ein hilfreicher Assistent. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. 
Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/em_german_7b_v01-GGUF", model_file="em_german_7b_v01.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: Jan Philipp Harries's EM German 7B v01 ![EM Logo](em_model_logo_web.jpeg) # EM German 7b ([Zur deutschen Version](#deutsch-em-german)) **EM German (v01)** is an experimental llama2-based model family, finetuned on a large dataset of various instructions in German language. The models are optimized for German text, providing proficiency in understanding, generating, and interacting with German language content. **This 7b model has addtionally been pre-trained on >3bn high-quality tokens of German text**. # Links & Demos We will publish further instructions, updates and code-snippets in the project's [Github-Repo](https://github.com/jphme/EM_German). ## Model Links | Base Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon | | [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon | | [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon | | [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc | | [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc | | [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc | ## Demos: You can use some of the models with **free** google Colab instances (e.g. the 7bn model in 8bit or the 13b model with GPTQ): * [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing) * [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing) * Further Examples for LLM apps will follow soon... # Prompt Format This model follows the Vicuna format without linebreaks (but should work with linebreaks as well). The format is as follows: ``` Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT: ``` You can swap the standard system prompt for a better suited one (see below for RAG-tasks). # Results Attached a few example outputs. I also developed a few small German benchmarks (based on the Eleuther LM Evaluation Harness), which I will publish soon. ## Example Output Prompt: ``` Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT: ``` Response EM 7b v01 ``` Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert. Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter Das Buch enthält also insgesamt 150.000 Wörter. 
``` Prompt: ``` Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT: ``` Response EM 7b v01 ``` Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind. 1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern. 2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen. 3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist. 4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben. ``` ## Factual Retrieval / RAG EM Models offer a special format for Factual Retrieval/RAG that is optimized for minimizing hallucinations. We use the same format as Airoboros for this: ### Example Prompt ``` Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT BEGINCONTEXT Url: https://www.jph.me ENDCONTEXT Das Wetter in Düsseldorf wird heute schön und sonnig! ENDINPUT BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT: ``` Model response: ``` Mit den gegebenen Informationen ist diese Frage nicht zu beantworten. 
Note that the model would normally always answer with the correct response to a trivial question like this but is trained to deny an answer if there is no source between the BEGININPUT and ENDINPUT that provides a grounding. If we keep the same Context/Input but ask instead: ``` BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT: ``` We get the correct answer with the source: ``` Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me) ``` # Evaluation I will publish some custom German benchmarks and the results for them soon. # Dataset The dataset is a mixture of multi-turn chats, retrieval instructions and synthetically generated instructions spanning many topics and applications. ## Use with transformers Unchanged from the base model: ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer # Hugging Face model_path model_path = 'jphme/em_german_7b_v01' tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map='auto', ) def ask_model(instruction, system='Du bist ein hilfreicher Assistent.'): prompt=f"{system} USER: {instruction} ASSISTANT:" input_tokens=tokenizer(prompt, return_tensors="pt").to(model.device) output_tokens=model.generate(**input_tokens, max_new_tokens=200)[0] answer=tokenizer.decode(output_tokens, skip_special_tokens=True) return answer print(ask_model("Nenne mir 10 gute Gründe dafür, heute Sport zu machen!")) ``` # Limitations & Biases This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. # Acknowledgements: Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code from which I borrowed many ideas and code snippets. The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/). # Contact If you are interested in customized LLMs for business applications, please get in contact with me via [my website](https://www.jph.me). I am also always happy about suggestions and feedback. *PS: I am also still searching for a Co-Founder*. # Disclaimer: The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model. This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files. ------------- ------------- ------------- # (Deutsch) EM German **EM German (v01)** ist eine experimentelle, Llama2-basierte KI-Modellreihe, die in deutscher Sprache fine-getuned wurde. Die Modelle sind für deutschen Text optimiert und können Inhalte in deutscher Sprache verstehen und generieren. **Dieses 7b-Modell wurde zusätzlich mit >3 Milliarden Token deutscher Texte vortrainiert**. # Links & Demos Im [Github-Repo](https://github.com/jphme/EM_German) werde ich weitere Beispiele und Code-Snippets veröffentlichen. 
## Model Links | Base Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | [Llama2](https://huggingface.co/meta-llama/Llama-2-7b-hf) 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_v01_gguf) | soon | | [Llama2](https://huggingface.co/meta-llama/Llama-2-13b-hf) 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/jphme/em_german_13b_v01_gptq) | soon | soon | | [Llama2](https://huggingface.co/meta-llama/Llama-2-70b-hf) 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gptq) | [Link](https://huggingface.co/jphme/em_german_70b_v01_gguf) | soon | | [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | soon | soon | tbc | | [LeoLm 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc | | [LeoLM 13b](https://huggingface.co/LeoLM/leo-hessianai-7b) | soon | soon | soon | tbc | ## Colab: Einige der Modelle können mit kostenlosen Google Colab-Instanzen verwendet werden (z.B. das 7b-Modell in 8-Bit oder das 13b-Modell mit GPTQ): * [Beispiel-Notebook 13b mit GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing) * [Beispiel-Notebook 7b mit 8-Bit](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing) * Weitere Beispiele für LLM-Apps folgen in Kürze... # Eingabeformat Dieses Modell folgt dem Vicuna-Format ohne Zeilenumbrüche (sollte aber auch mit Zeilenumbrüchen funktionieren). Das Format lautet wie folgt: ``` Du bist ein hilfreicher Assistent. USER: <Anweisung> ASSISTANT: ``` Sie können das Standard-Systemprompt aber auch ändern (siehe unten für ein Beispiel bei Aufgaben mit externen Datenbanken). # Ergebnisse Anbei einige Beispiele für Modell-Outputs. Wir werden demnächst einige kleine deutsche Benchmarks veröffentlichen (basierend auf dem Eleuther LM Evaluation Harness), die wir bei diesen Modellen bereits getstet haben. ## Beispiel-Ausgaben Prompt: ``` Du bist ein hilfreicher Assistent USER: Ein Buch hat 500 Seiten. Jede Seite enthält 300 Wörter. Wie viele Wörter sind im gesamten Buch enthalten? ASSISTANT: ``` Antwort EM 7b v01 ``` Die Gesamtzahl der Wörter in dem Buch kann berechnet werden, indem man die Anzahl der Wörter pro Seite mit der Gesamtzahl der Seiten multipliziert. Gesamtzahl der Wörter = Anzahl der Wörter pro Seite * Gesamtzahl der Seiten = 300 Wörter pro Seite * 500 Seiten = 150.000 Wörter Das Buch enthält also insgesamt 150.000 Wörter. ``` Prompt: ``` Du bist ein hilfreicher Assistent USER: Erläutere die Grundprinzipien der evolutionären Biologie und Genetik, indem du den Prozess der natürlichen Selektion, die Rolle von Mutationen und genetischer Drift sowie das Konzept der adaptiven Radiation erklärst. ASSISTANT: ``` Antwort EM 7b v01: ``` Die Evolutionäre Biologie und Genetik sind zwei intim verwandte Disziplinen, die sich mit dem Studium des Verhaltens und der Entwicklung von Organismen in der Natur befassen. Sie untersuchen, wie Arten sich im Laufe der Zeit entwickeln und wie diese Veränderungen genetisch bedingt sind. 1. Natürliche Selektion: Die natürliche Selektion ist ein grundlegendes Prinzip in der Evolutionäre Biologie. Es besagt, dass Individuen innerhalb einer Spezies unterschiedliche Fähigkeiten oder Merkmale aufweisen können. 
Diese Merkmale beeinflussen ihre Überlebens- und Fortpflanzungsrate. Diejenigen mit Vorteilen haben eine höhere Wahrscheinlichkeit, ihre Gene an zukünftige Generationen weiterzugeben und so die Art im Laufe der Zeit zu verändern. 2. Rolle von Mutationen: Mutationen sind Veränderungen im Genom eines Organismus, die entweder durch zufällige Fehler während der Replikation oder durch Exposition gegenüber Strahlung verursacht werden. Diese Veränderungen können dazu führen, dass ein Individuum neue Eigenschaften aufweist, die es im Kampf ums Überleben vorteilhaft machen könnten. Diese Mutationen werden dann von der natürlichen Selektion beeinflusst und können so zur Entwicklung neuer Arten beitragen. 3. Genetische Drift: Genetische Drift ist ein zufälliger Prozess, der durch die Verteilung von Allelen in einer Population stattfindet. Wenn eine Population kleiner wird oder wenn die Überlebens- und Fortpflanzungsraten zwischen den Individuen variieren, kann dies dazu führen, dass bestimmte Gene stärker oder weniger häufig werden. Dies kann dazu führen, dass die Population im Laufe der Zeit unterschiedliche Merkmale aufweist. 4. Adaptive Radiation: Adaptive Radiation ist ein Prozess, bei dem eine Art mehrere verschiedene Linien auslässt, die sich auf verschiedene Weise an ihre Umgebung anpassen. Dies geschieht normalerweise, wenn die Umwelt sich ändert und die Art neue Fähigkeiten benötigt, um überleben zu können. Die verschiedenen Linien können dann unterschiedliche Eigenschaften entwickeln, die ihnen helfen, in dieser neuen Umgebung zu überleben. ``` ## Sachliche Abfrage / RAG EM-Modelle bieten ein spezielles Format für RAG (also die Verwendung mit Informationen aus zusätzlichen externen Quellen wie Datenbanken oder Websiten), das für die Minimierung von Halluzinationen und Falscinformationen optimiert ist. Wir verwenden dasselbe Format wie Airoboros dafür. ### Beispiel Prompt ``` Du bist ein hilfreicher Assistent. Für die folgende Aufgabe stehen dir zwischen den tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfügung. Metadaten zu den einzelnen Quellen wie Autor, URL o.ä. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINCSTRUCTION zu finden. Beantworte diese wortwörtlich mit einem Zitat aus den Quellen. Sollten diese keine Antwort enthalten, antworte, dass auf Basis der gegebenen Informationen keine Antwort möglich ist! USER: BEGININPUT BEGINCONTEXT Url: https://www.jph.me ENDCONTEXT Das Wetter in Düsseldorf wird heute schön und sonnig! ENDINPUT BEGININSTRUCTION Was ist 1+1? ENDINSTRUCTION ASSISTANT: ``` Modellantwort: ``` Mit den gegebenen Informationen ist diese Frage nicht zu beantworten. ``` Hier würde das Modell normalerweise die (triviale) Antwort (2) geben, ist aber trainiert, keine Antwort zu geben, wenn es keine Quelle zwischen BEGININPUT und ENDINPUT gibt, die eine belegbare Antwort liefert. Wenn wir anstattdessen fragen: ``` BEGININSTRUCTION Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an! ENDINSTRUCTION ASSISTANT: ``` Bekommen wir die korrekte Antwort mit Quellenangabe ``` Das Wetter in Düsseldorf wird heute schön und sonnig! (Quelle: Url: https://www.jph.me) ``` # Evaluation coming soon # Datensatz Der Datensatz ist eine Mischung aus Multi-Turn-Chats, RAG Instructions und synthetisch generierten Finetuning-Daten, die viele Themen und Anwendungen abdecken. 
# Einschränkungen & Vorurteile Dieses Modell kann sachlich falsche Ausgaben produzieren und sollte nur zu Forschungszwecken verwendet werden. # Danksagungen: Vielen Dank an [winglian/caseus](https://huggingface.co/winglian) für seine großartige Arbeit an Axolotl, die ich zur Schulung der EM-Modelle verwendet habe. Ich bin auch [Jon Durbin](https://huggingface.co/jondurbin) für seine Airoboros Modelle und Code-Schnipsel dankbar, von denen ich viele Ideen nutzen konnte. Das 70b-Modell wurde mit Unterstützung des [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/) trainiert. # Kontakt Wenn Sie an customized LLMs für geschäftliche Anwendungen interessiert sind, kontaktieren Sie mich bitte über [meine Website](https://www.jph.me). Ich freue mich auch immer über Anregungen und Feedback zu meinen Modellen. *PS: Ich suche auch immer noch einen Co-Founder für unser Startup, das sich noch im Stealth-Modus befindet.* # Haftungsausschluss: Ich bin nicht verantwortlich für die Handlungen Dritter, die dieses Modell verwenden. Dieses Modell sollte nur für Forschungszwecke verwendet werden. Die ursprüngliche Llama2-Lizenz gilt und wird mit den Modell-Dateien verteilt. <!-- original-model-card end -->
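As an illustration that is not part of the original card, the Airoboros-style RAG prompt layout described above can be assembled programmatically; the helper below is a hedged sketch, not the author's tooling, and joins segments with single spaces in line with the card's "no linebreaks" prompt format:

```python
# Hedged sketch: build the BEGININPUT/BEGININSTRUCTION RAG prompt described above.
def build_rag_prompt(system_prompt, sources, instruction):
    """sources is a list of (metadata, text) pairs, e.g. ("Url: https://www.jph.me", "...")."""
    parts = [system_prompt, " USER: "]
    for metadata, text in sources:
        parts.append(f"BEGININPUT BEGINCONTEXT {metadata} ENDCONTEXT {text} ENDINPUT ")
    parts.append(f"BEGININSTRUCTION {instruction} ENDINSTRUCTION ASSISTANT:")
    return "".join(parts)

prompt = build_rag_prompt(
    "Du bist ein hilfreicher Assistent.",  # swap in the full RAG system prompt from the card
    [("Url: https://www.jph.me", "Das Wetter in Düsseldorf wird heute schön und sonnig!")],
    "Wie wird das Wetter heute in Düsseldorf? Gebe die Quelle mit an!",
)
print(prompt)
```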
oopsung/llama2-7b-exo-test-v1
oopsung
2023-11-29T03:00:51Z
1,284
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-29T02:54:29Z
Entry not found
dbalasub/test-dinesh-t5-qa
dbalasub
2024-05-05T21:16:31Z
1,284
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2024-05-05T19:17:01Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tlphams/gollm-12.8b-instruct-v2.1
tlphams
2023-11-03T04:19:22Z
1,283
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "base_model:EleutherAI/polyglot-ko-12.8b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-01T04:17:11Z
--- license: apache-2.0 base_model: EleutherAI/polyglot-ko-12.8b tags: - generated_from_trainer model-index: - name: gollm-12.8b-instruct-v2.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gollm-12.8b-instruct-v2.1 This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on a custom mixed dataset ## Model description - No-context template ``` 아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. 요청을 적절히 완료하는 답변을 작성하세요. ### 질문: {instruction} ### 답변: ``` - With context template ``` 아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. 요청을 적절히 완료하는 답변을 작성하세요. ### 맥락: {input} ### 질문: {instruction} ### 답변: ``` ## Intended uses & limitations More information needed ## Training and evaluation data - self-introduction (20 samples) - Combined KoAlpaca v1.0 and 1.1- no-context samples only (53k samples) + KoAlpaca v1.0 + KoAlpaca v1.1 - KoCoT (2k samples) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - stop_at_epoch: 4 ### Framework versions - Transformers 4.32.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
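As a usage illustration that is not part of the original card (the line breaks inside the template and the generation settings are assumptions), the no-context template above can be filled in and passed to the model like this:

```python
# Hedged sketch: generate with the no-context prompt template described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tlphams/gollm-12.8b-instruct-v2.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

TEMPLATE = (
    "아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. "
    "요청을 적절히 완료하는 답변을 작성하세요.\n\n### 질문:\n{instruction}\n\n### 답변:\n"
)
inputs = tokenizer(TEMPLATE.format(instruction="한국의 수도는 어디인가요?"), return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```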
wons/mistral-7B-v0.1
wons
2023-11-22T13:29:24Z
1,283
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-22T13:55:54Z
Entry not found
chujiezheng/tulu-2-dpo-70b-ExPO
chujiezheng
2024-05-27T18:14:39Z
1,283
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "arxiv:2404.16792", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-26T14:57:29Z
--- license: other license_name: ai2-impact-license-low-risk license_link: https://allenai.org/impact-license language: - en --- # tulu-2-dpo-70b-ExPO The extrapolated (ExPO) model based on [`allenai/tulu-2-dpo-70b`](https://huggingface.co/allenai/tulu-2-dpo-70b) and [`allenai/tulu-2-70b`](https://huggingface.co/allenai/tulu-2-70b), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper. Specifically, we obtain this model by extrapolating **(alpha = 0.5)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference. ## Evaluation Results Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)): | | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) | | ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** | | `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** | | `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** | | `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** | | `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** | | `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** | | `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** | | `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** | | `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** | | `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** | | `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** | Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)): | | Original | + ExPO | | ------------------------------------ | -------- | -------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** | | `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** | | `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** | | `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** | | `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** | | `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** | | `internlm/internlm2-chat-7b` | 7.72 | **7.80** | | `internlm/internlm2-chat-20b` | 8.13 | **8.26** | | `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** | | `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** | | `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
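The extrapolation itself is simple weight arithmetic: theta_ExPO = theta_DPO + alpha * (theta_DPO - theta_SFT) with alpha = 0.5. The following is a hedged sketch of that idea, not the authors' actual script (which lives in the official GitHub repo), and it assumes enough memory to hold both checkpoints:

```python
# Hedged sketch of ExPO weight extrapolation between the SFT and DPO checkpoints.
import torch
from transformers import AutoModelForCausalLM

alpha = 0.5
sft = AutoModelForCausalLM.from_pretrained("allenai/tulu-2-70b", torch_dtype=torch.bfloat16)
dpo = AutoModelForCausalLM.from_pretrained("allenai/tulu-2-dpo-70b", torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
expo_state = {}
with torch.no_grad():
    for name, dpo_param in dpo.state_dict().items():
        # Move past the DPO checkpoint along the SFT -> DPO direction.
        expo_state[name] = dpo_param + alpha * (dpo_param - sft_state[name])

dpo.load_state_dict(expo_state)
dpo.save_pretrained("tulu-2-dpo-70b-ExPO")
```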
mudler/Minerva-3B-Llama3-Instruct-v0.1
mudler
2024-05-19T13:04:29Z
1,283
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-11T13:30:20Z
--- license: apache-2.0 --- ## Minerva Llama3 Instruct [![local-ai-banner.png](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/bXvNcxQqQ-wNAnISmx3PS.png)](https://localai.io) This is a fine-tune of https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0 on the camoscio dataset in Italian. It is fine-tuned with the Llama3 instruct prompt, so it is suitable for chat. ### Prompt format Note: the EOS is `</s>`
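As a usage illustration not taken from the card, and assuming the repository ships a Llama3-style chat template in its tokenizer config, chat-style inference might look like this:

```python
# Hedged sketch: chat inference via the tokenizer's (assumed) chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mudler/Minerva-3B-Llama3-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Qual è la capitale d'Italia?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(input_ids, max_new_tokens=128, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```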
jtatman/phi3_code_func_lang_alpaca_merged
jtatman
2024-06-05T18:38:21Z
1,283
0
transformers
[ "transformers", "safetensors", "gguf", "phi3", "text-generation", "text-generation-inference", "unsloth", "mistral", "trl", "sft", "conversational", "custom_code", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-02T15:39:35Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl - sft base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** jtatman - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
iarfmoose/t5-base-question-generator
iarfmoose
2022-02-24T08:41:19Z
1,282
52
transformers
[ "transformers", "pytorch", "tf", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
# t5-base-question-generator

## Model description

This model is a sequence-to-sequence question generator which takes an answer and context as an input, and generates a question as an output. It is based on a pretrained `t5-base` model.

## Intended uses & limitations

The model is trained to generate reading comprehension-style questions with answers extracted from a text. The model performs best with full sentence answers, but can also be used with single word or short phrase answers.

#### How to use

The model takes concatenated answers and context as an input sequence, and will generate a full question sentence as an output sequence. The max sequence length is 512 tokens. Inputs should be organised into the following format:

```
<answer> answer text here <context> context text here
```

The input sequence can then be encoded and passed as the `input_ids` argument in the model's `generate()` method. For best results, a large number of questions can be generated, and then filtered using [iarfmoose/bert-base-cased-qa-evaluator](https://huggingface.co/iarfmoose/bert-base-cased-qa-evaluator).

For examples, please see https://github.com/iarfmoose/question_generator.

#### Limitations and bias

The model is limited to generating questions in the same style as those found in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), [CoQA](https://stanfordnlp.github.io/coqa/), and [MSMARCO](https://microsoft.github.io/msmarco/). The generated questions can potentially be leading or reflect biases that are present in the context. If the context is too short or completely absent, or if the context and answer do not match, the generated question is likely to be incoherent.

## Training data

The model was fine-tuned on a dataset made up of several well-known QA datasets ([SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), [CoQA](https://stanfordnlp.github.io/coqa/), and [MSMARCO](https://microsoft.github.io/msmarco/)). The datasets were restructured by concatenating the answer and context fields into the previously-mentioned format. The question field from the datasets was used as the target during training. The full training set was roughly 200,000 examples.

## Training procedure

The model was trained for 20 epochs over the training set with a learning rate of 1e-3. The batch size was only 4 due to GPU memory limitations when training on Google Colab.
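The encoding-and-`generate()` step described above might look roughly like this (the answer/context strings and generation settings are illustrative, not taken from the model card):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "iarfmoose/t5-base-question-generator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Concatenate answer and context in the documented input format.
answer = "Paris"
context = "Paris is the capital and most populous city of France."
input_text = f"<answer> {answer} <context> {context}"

input_ids = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512).input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```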
TheBloke/Yarn-Llama-2-13B-128K-GGUF
TheBloke
2023-09-27T12:46:59Z
1,282
36
transformers
[ "transformers", "gguf", "llama", "dataset:pg19", "arxiv:2309.00071", "base_model:NousResearch/Yarn-Llama-2-13b-128k", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-01T08:16:14Z
--- license: llama2 library_name: transformers datasets: - pg19 metrics: - perplexity model_name: Yarn Llama 2 13B 128K base_model: NousResearch/Yarn-Llama-2-13b-128k inference: false model_creator: NousResearch model_type: llama prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yarn Llama 2 13B 128K - GGUF - Model creator: [NousResearch](https://huggingface.co/NousResearch) - Original model: [Yarn Llama 2 13B 128K](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) <!-- description start --> ## Description This repo contains GGUF format model files for [NousResearch's Yarn Llama 2 13B 128K](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF) * [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [yarn-llama-2-13b-128k.Q2_K.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [yarn-llama-2-13b-128k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [yarn-llama-2-13b-128k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [yarn-llama-2-13b-128k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [yarn-llama-2-13b-128k.Q4_0.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [yarn-llama-2-13b-128k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [yarn-llama-2-13b-128k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [yarn-llama-2-13b-128k.Q5_0.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [yarn-llama-2-13b-128k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [yarn-llama-2-13b-128k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [yarn-llama-2-13b-128k.Q6_K.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [yarn-llama-2-13b-128k.Q8_0.gguf](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GGUF/blob/main/yarn-llama-2-13b-128k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Yarn-Llama-2-13B-128K-GGUF and below it, a specific filename to download, such as: yarn-llama-2-13b-128k.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Yarn-Llama-2-13B-128K-GGUF yarn-llama-2-13b-128k.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Yarn-Llama-2-13B-128K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yarn-Llama-2-13B-128K-GGUF yarn-llama-2-13b-128k.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m yarn-llama-2-13b-128k.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yarn-Llama-2-13B-128K-GGUF", model_file="yarn-llama-2-13b-128k.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NousResearch's Yarn Llama 2 13B 128K # Model Card: Nous-Yarn-Llama-2-13b-128k [Preprint (arXiv)](https://arxiv.org/abs/2309.00071) [GitHub](https://github.com/jquesnelle/yarn) ## Model Description Nous-Yarn-Llama-2-13b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 600 steps. This model is the Flash Attention 2 patched version of the original model: https://huggingface.co/conceptofmind/Yarn-Llama-2-13b-128k Note that this model **requires** the [Flash Attention library](https://pypi.org/project/flash-attn/) in order to function correctly, see the Model Usage section for installation instructions. ## Model Training Starting from the base Llama 2 models, this model was further pretrained on a subset of the PG19 dataset, allowing it to effectively utilize up to 128k tokens of context. ## Collaborators - [bloc97](https://github.com/bloc97): Methods, Paper and evals - [@theemozilla](https://twitter.com/theemozilla): Methods, Paper and evals - [@EnricoShippole](https://twitter.com/EnricoShippole): Model Training - [honglu2875](https://github.com/honglu2875): Paper and evals The authors would like to thank Stability AI, Carper AI, and Eleuther AI for their generous support of significant computing resources that enabled the training of these models and the completion of this research. We would also like to thank Jonathan Tow and Dakota Mahan directly for their help in advising on the use of the Stability AI compute cluster. Additionally, we would like to thank a16z, and PygmalionAI, for providing resources to run evaluations and experiments on the models. ## Usage and Prompt Format Install FA2 and Rotary Extensions: ``` pip install flash-attn --no-build-isolation pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary ``` There are no specific prompt formats as this is a pretrained base model. ## Benchmark Results TODO ## Future Plans We plan to continue training when we have more compute and to improve the dataset and/or instruct tune the models in order to improve the long context performance even further. ## Model Usage The model is available for download on HuggingFace. <!-- original-model-card end -->
Sharathhebbar24/code_gpt2
Sharathhebbar24
2024-03-15T12:04:21Z
1,282
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "dpo", "code", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:mlabonne/CodeLlama-2-20k", "dataset:Intel/orca_dpo_pairs", "dataset:Sharathhebbar24/Evol-Instruct-Code-80k-v1", "dataset:Sharathhebbar24/sql-create-context", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-04T13:41:34Z
--- language: - en license: apache-2.0 tags: - gpt2 - dpo - code datasets: - HuggingFaceH4/ultrachat_200k - mlabonne/CodeLlama-2-20k - Intel/orca_dpo_pairs - Sharathhebbar24/Evol-Instruct-Code-80k-v1 - Sharathhebbar24/sql-create-context pipeline_tag: text-generation model-index: - name: code_gpt2 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 23.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/code_gpt2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 30.99 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/code_gpt2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 25.03 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/code_gpt2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 40.6 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/code_gpt2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 49.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/code_gpt2 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/code_gpt2 name: Open LLM Leaderboard --- This model is a finetuned version of [Sharathhebbar24/code_gpt2_mini_model](https://huggingface.co/Sharathhebbar24/code_gpt2_mini_model) using [Sharathhebbar24/Evol-Instruct-Code-80k-v1](https://huggingface.co/datasets/Sharathhebbar24/Evol-Instruct-Code-80k-v1) ## Model description GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. 
The model is best at what it was trained for, however, which is generating texts from a prompt. ### To use this model ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model_name = "Sharathhebbar24/code_gpt2" >>> model = AutoModelForCausalLM.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> def generate_text(prompt): >>> inputs = tokenizer.encode(prompt, return_tensors='pt') >>> outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id) >>> generated = tokenizer.decode(outputs[0], skip_special_tokens=True) >>> return generated[:generated.rfind(".")+1] >>> prompt = "Can you write a Linear search program in Python" >>> res = generate_text(prompt) >>> res ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__code_gpt2) | Metric |Value| |---------------------------------|----:| |Avg. |28.19| |AI2 Reasoning Challenge (25-Shot)|23.29| |HellaSwag (10-Shot) |30.99| |MMLU (5-Shot) |25.03| |TruthfulQA (0-shot) |40.60| |Winogrande (5-shot) |49.25| |GSM8k (5-shot) | 0.00|
mudler/Asinello-Minerva-3B-v0.1
mudler
2024-05-19T13:07:57Z
1,282
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "llama-factory", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-11T23:23:47Z
--- library_name: transformers tags: - llama-factory license: apache-2.0 --- [![local-ai-banner.png](https://cdn-uploads.huggingface.co/production/uploads/647374aa7ff32a81ac6d35d4/bXvNcxQqQ-wNAnISmx3PS.png)](https://localai.io) ## Model Details This model is a fine-tune of the Minerva https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0 on a custom dataset (Asinello) ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** Ettore Di Giacinto ([email protected]) - **Model type:** Mistral - **Language(s) (NLP):** Italian - **License:** apache-2.0 - **Finetuned from model [optional]:** https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0 ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Paper [optional]:** Soon ;)
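The card stops short of showing how to run the model; a minimal generation sketch (not from the model author) is given below. The prompt is illustrative, and any chat template shipped with the repository should be preferred.

```python
from transformers import pipeline

# Simple text-generation pipeline; sampling settings are illustrative.
pipe = pipeline("text-generation", model="mudler/Asinello-Minerva-3B-v0.1", device_map="auto")
out = pipe("Spiega brevemente cos'è il Colosseo.", max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```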
TheBloke/Vicuna-13B-1.1-GPTQ
TheBloke
2023-06-23T21:05:47Z
1,281
208
transformers
[ "transformers", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-12T22:01:51Z
--- license: other inference: false pipeline_tag: conversational --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Vicuna 13B 1.1 GPTQ 4bit 128g This is a 4-bit GPTQ version of the [Vicuna 13B 1.1 model](https://huggingface.co/lmsys/vicuna-13b-delta-v1.1). It was created by merging the deltas provided in the above repo with the original Llama 13B model, [using the code provided on their Github page](https://github.com/lm-sys/FastChat#vicuna-weights). It was then quantized to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). ## Want to try this in Colab for free? Check out this Google Colab provided by [eucdee](https://huggingface.co/eucdee): [Google Colab for Vicuna 1.1](https://colab.research.google.com/github/eucdee/AI/blob/main/4bit_TextGen_Gdrive.ipynb) ## My Vicuna 1.1 model repositories I have the following Vicuna 1.1 repositories available: **13B models:** * [Unquantized 13B 1.1 model for GPU - HF format](https://huggingface.co/TheBloke/vicuna-13B-1.1-HF) * [GPTQ quantized 4bit 13B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/vicuna-13B-1.1-GGML) **7B models:** * [Unquantized 7B 1.1 model for GPU - HF format](https://huggingface.co/TheBloke/vicuna-7B-1.1-HF) * [GPTQ quantized 4bit 7B 1.1 for GPU - `safetensors` and `pt` formats](https://huggingface.co/TheBloke/vicuna-7B-1.1-GPTQ-4bit-128g) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/vicuna-7B-1.1-GGML) ## How to easily download and use this model in text-generation-webui Open the text-generation-webui UI as normal. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. Click the **Refresh** icon next to **Model** in the top left. 6. In the **Model drop-down**: choose the model you just downloaded, `vicuna-13B-1.1-GPTQ-4bit-128g`. 7. If you see an error in the bottom right, ignore it - it's temporary. 8. Check that the `GPTQ parameters` are correct on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` 9. Click **Save settings for this model** in the top right. 10. Click **Reload the Model** in the top right. 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! ## GIBBERISH OUTPUT If you get gibberish output, it is because you are using the `safetensors` file without updating GPTQ-for-LLaMA. If you use the `safetensors` file you must have the latest version of GPTQ-for-LLaMA inside text-generation-webui. If you don't want to update, or you can't, use the `pt` file instead. Either way, please read the instructions below carefully. ## Provided files Two model files are provided. Ideally use the `safetensors` file. 
Full details below: Details of the files provided: * `vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt` * `pt` format file, created without the `--act-order` flag. * This file may have slightly lower quality, but is included as it can be used without needing to compile the latest GPTQ-for-LLaMa code. * It will therefore work with one-click-installers on Windows, which include the older GPTQ-for-LLaMa code. * Command to create: * `python3 llama.py vicuna-13B-1.1-HF c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors vicuna-13B-1.1-GPTQ-4bit-128g.no-act-order.pt` * `vicuna-13B-1.1-GPTQ-4bit-128g.latest.safetensors` * `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code. * Command to create: * `python3 llama.py vicuna-13B-1.1-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors vicuna-13B-1.1-GPTQ-4bit-128g.safetensors` ## Manual instructions for `text-generation-webui` File `vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobaboogas text-generation-webui](https://github.com/oobabooga/text-generation-webui). [Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)). The other `safetensors` model file was created using `--act-order` to give the maximum possible quantisation quality, but this means it requires that the latest GPTQ-for-LLaMa is used inside the UI. If you want to use the act-order `safetensors` files and need to update the Triton branch of GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI: ``` # Clone text-generation-webui, if you don't already have it git clone https://github.com/oobabooga/text-generation-webui # Make a repositories directory mkdir text-generation-webui/repositories cd text-generation-webui/repositories # Clone the latest GPTQ-for-LLaMa code inside text-generation-webui git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa ``` Then install this model into `text-generation-webui/models` and launch the UI as follows: ``` cd text-generation-webui python server.py --model vicuna-13B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want ``` The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information. If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch: ``` git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda cd GPTQ-for-LLaMa python setup_cuda.py install ``` Then link that into `text-generation-webui/repositories` as described above. Or just use `vicuna-13B-1.1-GPTQ-4bit-128g.compat.no-act-order.pt` as mentioned above, which should work without any upgrades to text-generation-webui. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! <!-- footer end --> # Vicuna Model Card ## Model details **Model type:** Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. It is an auto-regressive language model, based on the transformer architecture. **Model date:** Vicuna was trained between March 2023 and April 2023. **Organizations developing the model:** The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego. **Paper or resources for more information:** https://vicuna.lmsys.org/ **License:** Apache License 2.0 **Where to send questions or comments about the model:** https://github.com/lm-sys/FastChat/issues ## Intended use **Primary intended uses:** The primary use of Vicuna is research on large language models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## Training dataset 70K conversations collected from ShareGPT.com. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details. ## Major updates of weights v1.1 - Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries. - Fix the supervised fine-tuning loss computation for better model quality.
stabilityai/stablecode-completion-alpha-3b-4k
stabilityai
2023-08-08T15:18:07Z
1,281
284
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "code", "dataset:bigcode/starcoderdata", "arxiv:2104.09864", "arxiv:1910.02054", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-07T16:59:19Z
--- datasets: - bigcode/starcoderdata language: - code tags: - causal-lm model-index: - name: stabilityai/stablecode-completion-alpha-3b-4k results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 0.1768 verified: false - name: pass@10 type: pass@10 value: 0.2701 verified: false license: apache-2.0 --- # `StableCode-Completion-Alpha-3B-4K` ## Model Description `StableCode-Completion-Alpha-3B-4K` is a 3 billion parameter decoder-only code completion model pre-trained on diverse set of programming languages that topped the stackoverflow developer survey. ## Usage The model is intended to do single/multiline code completion from a long context window upto 4k tokens. Get started generating code with `StableCode-Completion-Alpha-3B-4k` by using the following code snippet: ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b-4k") model = AutoModelForCausalLM.from_pretrained( "stabilityai/stablecode-completion-alpha-3b-4k", trust_remote_code=True, torch_dtype="auto", ) model.cuda() inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda") tokens = model.generate( **inputs, max_new_tokens=48, temperature=0.2, do_sample=True, ) print(tokenizer.decode(tokens[0], skip_special_tokens=True)) ``` ## Model Details * **Developed by**: [Stability AI](https://stability.ai/) * **Model type**: `StableCode-Completion-Alpha-3B-4k` models are auto-regressive language models based on the transformer decoder architecture. * **Language(s)**: Code * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) * **License**: Model checkpoints are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license. * **Contact**: For questions and comments about the model, please email `[email protected]` ### Model Architecture | Parameters | Hidden Size | Layers | Heads | Sequence Length | |----------------|-------------|--------|-------|-----------------| | 2,796,431,360 | 2560 | 32 | 32 | 4096 | * **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master)) * **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) * **Bias**: LayerNorm bias terms only ## Training `StableCode-Completion-Alpha-3B-4k` is pre-trained at a context length of 4096 for 300 billion tokens on the `bigcode/starcoder-data`. ### Training Dataset The first pre-training stage relies on 300B tokens sourced from various top programming languages occuring in the stackoverflow developer survey present in the `starcoder-data` dataset. ### Training Procedure The model is pre-trained on the dataset mixes mentioned above in mixed-precision BF16), optimized with AdamW, and trained using the [StarCoder](https://huggingface.co/bigcode/starcoder) tokenizer with a vocabulary size of 49k. 
* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)) and rely on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)) ## Use and Limitations ### Intended Use StableCode-Completion-Alpha-3B-4K independently generates new code completions, but we recommend that you use StableCode-Completion-Alpha-3B-4K together with the tool developed by BigCode and HuggingFace [(huggingface/huggingface-vscode: Code completion VSCode extension for OSS models (github.com))](https://github.com/huggingface/huggingface-vscode), to identify and, if necessary, attribute any outputs that match training code. ### Limitations and bias This model is intended to be used responsibly. It is not intended to be used to create unlawful content of any kind, to further any unlawful activity, or to engage in activities with a high risk of physical or economic harm. ## How to cite ```bibtex @misc{StableCodeCompleteAlpha4K, url={[https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k](https://huggingface.co/stabilityai/stablecode-complete-alpha-3b-4k)}, title={Stable Code Complete Alpha}, author={Adithyan, Reshinth and Phung, Duy and Cooper, Nathan and Pinnaparaju, Nikhil and Laforte, Christian} } ```
EleutherAI/llemma_34b
EleutherAI
2024-04-03T19:56:03Z
1,281
84
transformers
[ "transformers", "pytorch", "llama", "text-generation", "math", "reasoning", "en", "dataset:EleutherAI/proof-pile-2", "dataset:open-web-math/open-web-math", "arxiv:2310.10631", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-27T04:50:04Z
--- license: llama2 datasets: - EleutherAI/proof-pile-2 - open-web-math/open-web-math language: - en tags: - math - reasoning --- <img src="llemma.png" width="400"> [ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/) [Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/) **Llemma 34B** is a language model for mathematics. It was initialized with [Code Llama 34B](https://github.com/facebookresearch/codellama) weights, and trained on the [Proof-Pile-2](https://huggingface.co/datasets/EleutherAI/proof-pile-2) for 50B tokens. This model also comes in a 7B parameter version: [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b). ## Evaluations Llemma models are particularly strong at chain-of-thought mathematical reasoning and using computational tools for mathematics, such as Python and formal theorem provers. ### Chain-of-thought Math On chain-of-thought mathematics tasks, Llemma models outperform Llama-2, Code Llama, and when controlled for model size, outperform Minerva. | Model | Size | GSM8k | [OCW](https://openreview.net/forum?id=IFXTZERXdM7) | MMLU-STEM | [SAT](https://huggingface.co/datasets/mcaleste/sat_multiple_choice_math_may_23) | MATH | |------------|------|--------|-------|-----------|-------|-------| | Llama 2 | 7B | 11.8% | 3.7% | 29.9% | 25% | 3.2% | | Code Llama | 7B | 10.5% | 4.4% | 25.1% | 9.4% | 4.5% | | LLEMMA | 7B | **36.4%** | **7.7%** | **37.7%** | **53.1%** | **18.0%** | | Minerva | 8B | 16.2% | **7.7%** | 35.6% | - | 14.1% | |------------|------|--------|-------|-----------|-------|-------| | Code Llama | 34B | 29.6% | 7.0% | 40.5% | 40.6% | 12.2% | | LLEMMA | 34B | **51.5%** | **11.8%** | **49.0%** | **71.9%** | **25.0%** | |------------|------|--------|-------|-----------|-------|-------| | Minerva | 62B | 52.4% | 12.0% | 53.9% | - | 27.6% | | Minerva | 540B | 58.8% | 17.6% | 63.9% | - | 33.6% | Further performance can be extracted by using majority voting: | Model | Size | GSM8k maj@100 | OCW maj@100 | MMLU-STEM maj@16 | SAT maj@16 | MATH maj@256 | |---------|------|-------------|-----------|-----------------|-----------|------------| | LLEMMA | 7B | 54.0% | 14.3% | 49.9% | 78.1% | **33.5** | | Minerva | 8B | 28.4% | 12.5% | 43.4% | - | 25.4% | |---------|------|-------------|-----------|-----------------|-----------|------------| | LLEMMA | 34B | 69.3% | 18.4% | 59.7% | 81.3% | **43.1%** | |---------|------|-------------|-----------|-----------------|-----------|------------| | Minerva | 62B | 68.5% | 23.5% | 63.5% | - | 43.4% | | Minerva | 540B | 78.5% | 30.8% | 75.0% | - | 50.3% | ### Tool Use and Theorem Proving In addition to chain-of-thought reasoning, Llemma has strong capabilities in computational mathematics tasks. For tool use and formal theorem proving evaluations, see [our paper](http://arxiv.org/abs/2310.10631). 
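For readers who want to try the model directly, here is a minimal chain-of-thought style usage sketch (not from the model card; the prompt and generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/llemma_34b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Plain completion prompt; Llemma is a base model, so no chat template is required.
prompt = "Problem: Compute the derivative of f(x) = x^3 - 2x.\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```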
### Citation ``` @misc{azerbayev2023llemma, title={Llemma: An Open Language Model For Mathematics}, author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck}, year={2023}, eprint={2310.10631}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
giux78/zefiro-7b-beta-ITA-v0.1
giux78
2024-01-12T20:16:57Z
1,281
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "it", "dataset:giux78/100k-sft-ready-ultrafeedback-ita", "arxiv:2310.16944", "arxiv:2312.09993", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-09T21:07:32Z
--- license: apache-2.0 datasets: - giux78/100k-sft-ready-ultrafeedback-ita language: - it pipeline_tag: text-generation --- <img src="https://hoodie-creator.s3.eu-west-1.amazonaws.com/15be78c6-original.png" alt="llamantino53" border="0" width="400px"> # Model Card for zefiro-7b-beta-ITA-v0.1 *Last Update: 11/01/2024*<br> <!-- Provide a quick summary of what the model is/does. --> Zefiro is a SFT fine tuned model for the Italian language based on [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) . To create a set of open source models and datasets suited for italian language is the aim of the project and this is the first experiment. The model can be used as base model for more specific conversationl tasks for Italian language ## Model Details Zefiro is a porting of the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) model to the italian language using the wonderful recipes from [alignment-handbook](https://huggingface.co/alignment-handbook) . It has also taken ispiration and insights from the [Llamantino](https://huggingface.co/swap-uniba/LLaMAntino-2-chat-7b-hf-UltraChat-ITA) model developed by Università di Bari. For the implementation we combined different approaches from the two models mentioned but also from the wondeful communtity of open source. ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily Italian - **License:** Apache 2 - **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) - **Developed by:** [giux78](https://alessandroercolani.webflow.io/) - **Funded by:** [Business Operating System](https://www.businessos.xyz) ## Intended uses & limitations The model was initially fine-tuned on a filtered and preprocessed version of [UltraChat-ITA](https://huggingface.co/datasets/giux78/100k-sft-ready-ultrafeedback-ita) that is a filtered version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. Here's how you can run the model using Transformers from 🤗 : ```python # Install transformers from source - only needed for versions <= v4.34 # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "giux78/zefiro-7b-beta-ITA-v0.1" model = AutoModelForCausalLM.from_pretrained(model_id) model.to('cuda') tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left") sys_prompt = "Sei un assistente disponibile, rispettoso e onesto. " \ "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \ "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \ "Assicurati che le tue risposte siano socialmente imparziali e positive. " \ "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \ "Se non conosci la risposta a una domanda, non condividere informazioni false." 
messages = [{ 'content' : sys_prompt, 'role' : 'assistant'}, {'content' : 'Crea una lista su cosa mangiare a pranzo ogni giorno della settimana a pranzo e cena', 'role' : 'user'}] def generate_text(sys_prompt, user_prompt): messages = [{ 'content' : sys_prompt, 'role' : 'assistant'}, {'content' : user_prompt, 'role' : 'user'}] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda") generated_ids = model.generate(**model_inputs, max_new_tokens=1024) return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] generate_text(sys_prompt, 'cosa ne pensi della politica italiana?') ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Zefiro-7b-beta-ITA-v0.1 has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> We used [UltraChat-ITA](https://huggingface.co/datasets/giux78/100k-sft-ready-ultrafeedback-ita) as training data that is a filtered version of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat). For translating the dataset we combined different tools and API we are also evaluating the best approach for translating many more datasets. We have seen that the translation phase is critical and can introduce incorrect syntax and semantics. #### Summary Zefiro-7b-beta-ITA-v0.1 is finetuned version of mistral-7b using the zephyr approach for the italian language. ## Citation ``` @misc{tunstall2023zephyr, title={Zephyr: Direct Distillation of LM Alignment}, author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf}, year={2023}, eprint={2310.16944}, archivePrefix={arXiv}, primaryClass={cs.LG} } @misc{basile2023llamantino, title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language}, author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro}, year={2023}, eprint={2312.09993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Model Card Authors [giux78](https://huggingface.co/giux78) ## Model Card Contact **[email protected]
BioMistral/BioMistral-DARE-NS
BioMistral
2024-04-23T10:54:03Z
1,281
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:BioMistral/BioMistral-7B-DARE", "base_model:Kukedlc/NeuralSynthesis-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-23T10:45:21Z
--- license: apache-2.0 base_model: - BioMistral/BioMistral-7B-DARE - Kukedlc/NeuralSynthesis-7B-v0.1 library_name: transformers tags: - mergekit - merge --- # BioMistral-NS This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1) as a base. ### Models Merged The following models were included in the merge: * [BioMistral/BioMistral-7B-DARE](https://huggingface.co/BioMistral/BioMistral-7B-DARE) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Kukedlc/NeuralSynthesis-7B-v0.1 parameters: density: 0.53 weight: 0.4 - model: BioMistral/BioMistral-7B-DARE parameters: density: 0.53 weight: 0.3 merge_method: dare_ties tokenizer_source: union base_model: Kukedlc/NeuralSynthesis-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ```
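The YAML above documents how the merge was produced; to try the resulting model, a standard transformers loading sketch (illustrative, not from the model card) is sufficient:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BioMistral/BioMistral-DARE-NS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative biomedical completion prompt.
prompt = "Question: What is the function of hemoglobin?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```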
Easy-Systems/easy-ko-Llama3-8b-Instruct-v1
Easy-Systems
2024-05-16T05:52:53Z
1,281
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Llama3", "meta", "Easy Systems", "conversational", "en", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T04:15:32Z
--- language: - en - ko license: cc-by-nc-sa-4.0 library_name: transformers tags: - transformers - Llama3 - meta - Easy Systems pipeline_tag: text-generation --- <span style="font-size: 20px;"><strong>Easy-Systems/easy-ko-Llama3-8b-Instruct-v1</strong></span> <p align="left"> <img src="https://cdn-uploads.huggingface.co/production/uploads/663d60cea71261a1bd5dbaae/z4ixHwlQUIfx49RJjdXhg.png"> <span style="font-size: 10px;">DALL-E로 생성한 이미지입니다.</span> </p> * (주)이지시스템의 첫번째 LLM 모델인 easy-ko-Llama3-8b-Instruct-v1은 영어 기반 모델인 meta-llama/Meta-Llama-3-8B-Instruct를 베이스로 하여 한국어 파인튜닝 된 모델입니다. * LLM 모델은 추후 지속적으로 업데이트 될 예정 입니다. # Data - AI hub (https://www.aihub.or.kr/) 데이터를 다양한 Task (QA, Summary, Translate 등)로 가공하여 파인튜닝에 사용. - 사내 자체 가공한 데이터를 활용하여 파인튜닝에 사용. # How to use ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "Easy-Systems/easy-ko-Llama3-8b-Instruct-v1" model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_id, add_special_tokens=True) prompt="리눅스 프로세스를 강제로 종료하는 방법은?" messages = [ {"role": "system", "content": "당신은 친절한 AI chatbot 입니다. 요청에 대해서 step-by-step 으로 간결하게 한국어(Korean)로 답변해주세요."}, {"role": "user", "content": f"\n\n### 명령어: {prompt}\n\n### 응답:"} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=1024, eos_token_id=terminators, pad_token_id=tokenizer.eos_token_id, do_sample=True, temperature=0.2, repetition_penalty = 1.3, top_p=0.9, top_k=10, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True).strip()) ``` # Example Output ``` 리눅스의 경우, `kill` 또는 `pkill` 명령을 사용하여 특정 프로세스를 강제로 종료할 수 있습니다. 1단계: ps -ef | grep <프로세스_이름>`으로 현재 실행 중인 모든 프로세스가 표시됩니다. 2단계: kill <프로세스_ID>`를 입력하면 해당 프로세스가 즉시 종료됩니다. 또는 `-9`(SIGKILL 신호)를 지정하여 프로세스를 강제로 종료하도록 할 수도 있으며, 이는 운영 체제에서 정상적으로 종료하기 전에 마지막 기회를 주지 않고 바로 죽게 됩니다: 3단계: kill -9 <프로세스_ID>`를 입력합니다. 참고로, 시스템의 안정을 위해 필요한 파일이나 서비스가 있는 경우에는 직접 삭제하지 말아야 하며, 적절한 권한과 지시에 따라 처리해야 합니다. 또한 일부 프로그램들은 강제종료 시 데이터 손실 등의 문제가 발생할 가능성이 있으므로 미리 저장된 작업 내용 등을 확인하고 종료하시기 바랍니다. ``` # License - Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC-BY-NC-SA-4.0) - 상업적 사용 시, 아래의 연락처로 문의해주시기 바랍니다. # Contact - 상업적 사용 또는 기타 문의 사항에 대하여 연락하시려면 다음 이메일로 연락 주십시오. - 강현구: [email protected]
IDEA-CCNL/Wenzhong-GPT2-110M
IDEA-CCNL
2023-05-25T09:48:34Z
1,280
26
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generate", "zh", "arxiv:2209.02970", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-05-23T03:15:36Z
--- language: - zh inference: parameters: temperature: 0.7 top_p: 0.6 repetition_penalty: 1.1 max_new_tokens: 128 num_return_sequences: 3 do_sample: true license: apache-2.0 tags: - generate - gpt2 widget: - 北京是中国的 - 西湖的景色 --- # Wenzhong-GPT2-110M - Main Page:[Fengshenbang](https://fengshenbang-lm.com/) - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) ## 简介 Brief Introduction 善于处理NLG任务,中文版的GPT2-Small。 Focused on handling NLG tasks, Chinese GPT2-Small. ## 模型分类 Model Taxonomy | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | | :----: | :----: | :----: | :----: | :----: | :----: | | 通用 General | 自然语言生成 NLG | 闻仲 Wenzhong | GPT2 | 110M | 中文 Chinese | ## 模型信息 Model Information 类似于Wenzhong2.0-GPT2-3.5B-chinese,我们实现了一个small版本的12层的Wenzhong-GPT2-110M,并且在悟道(300G版本)上面进行预训练。 Similar to Wenzhong2.0-GPT2-3.5B-chinese, we implement a small size Wenzhong-GPT2-110M with 12 layers, which is pre-trained on Wudao Corpus (300G version). ## 使用 Usage ### 加载模型 Loading Models ```python from transformers import GPT2Tokenizer,GPT2LMHeadModel hf_model_path = 'IDEA-CCNL/Wenzhong-GPT2-110M' tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path) model = GPT2LMHeadModel.from_pretrained(hf_model_path) ``` ### 使用示例 Usage Examples ```python question = "北京是中国的" inputs = tokenizer(question,return_tensors='pt') generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, max_length=150, # max_new_tokens=80, do_sample=True, top_p = 0.6, # num_beams=5, eos_token_id=50256, pad_token_id=0, num_return_sequences = 5) for idx,sentence in enumerate(generation_output.sequences): print('next sentence %d:\n'%idx, tokenizer.decode(sentence).split('<|endoftext|>')[0]) print('*'*40) ``` ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970): If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970): ```text @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ``` 也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/): You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/): ```text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}}, } ```
timm/mobilevit_xxs.cvnets_in1k
timm
2023-04-24T22:23:35Z
1,280
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.02178", "license:other", "region:us" ]
image-classification
2023-04-24T22:23:25Z
--- tags: - image-classification - timm library_name: timm license: other datasets: - imagenet-1k --- # Model card for mobilevit_xxs.cvnets_in1k A MobileViT image classification model. Trained on ImageNet-1k by paper authors. See license details at https://github.com/apple/ml-cvnets/blob/main/LICENSE ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 1.3 - GMACs: 0.4 - Activations (M): 8.3 - Image size: 256 x 256 - **Papers:** - MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer: https://arxiv.org/abs/2110.02178 - **Original:** https://github.com/apple/ml-cvnets - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('mobilevit_xxs.cvnets_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilevit_xxs.cvnets_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 128, 128]) # torch.Size([1, 24, 64, 64]) # torch.Size([1, 48, 32, 32]) # torch.Size([1, 64, 16, 16]) # torch.Size([1, 320, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'mobilevit_xxs.cvnets_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 320, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @inproceedings{mehta2022mobilevit, title={MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author={Sachin Mehta and Mohammad Rastegari}, booktitle={International Conference on Learning Representations}, year={2022} } ```
porkorbeef/Llama-2-13b-12_153950
porkorbeef
2023-08-21T05:57:17Z
1,280
0
transformers
[ "transformers", "pytorch", "llama", "feature-extraction", "license:cc-by-nc-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
2023-08-21T02:39:13Z
--- license: cc-by-nc-4.0 ---
filipealmeida/mpt-30b-instruct-GGUF
filipealmeida
2023-10-05T05:34:36Z
1,280
0
null
[ "gguf", "Composer", "MosaicML", "llm-foundry", "arxiv:2205.14135", "arxiv:2108.12409", "license:cc-by-sa-3.0", "region:us" ]
null
2023-10-05T00:59:09Z
--- license: cc-by-sa-3.0 datasets: - competition_math - conceptofmind/cot_submix_original/cot_gsm8k - knkarthick/dialogsum - mosaicml/dolly_hhrlhf - duorc - tau/scrolls/qasper - emozilla/quality - scrolls/summ_screen_fd - spider tags: - Composer - MosaicML - llm-foundry inference: false --- # MPT-30B-Instruct <span style="color:red">This is not working yet with the official version of llama.cpp</span> This is the GGUF version of MPT-30B-Instruct based on [jploski's fork of llama.cpp](https://github.com/jploski/llama.cpp) Right now it only works with that branch, but hopefully, this will be integrated with the official [llama.cpp](https://github.com/ggerganov/llama.cpp) I needed this for some prototyping so I'll also maintain a fork of the python bindings while this isn't all merged in here: [llama-cpp-python-mpt](https://github.com/filipe-m-almeida/llama-cpp-python-mpt) ----- MPT-30B-Instruct is a model for short-form instruction following. It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider). * License: _CC-By-SA-3.0_ This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture. ## Model Date June 22, 2023 ## Model License CC-By-SA-3.0 ## Documentation * [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! ### Example Question/Instruction **Bespokenizer46** > I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform. > Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important. > End the email with a friendly inquiry about Phyllis's family. **MPT-30B-Instruct**: > Phyllis - > I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in. > LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy. > They also provide tools to easily connect to and use the model in your daily workflow. > I think you'd really enjoy speaking with their founder, we can set up a call if you're interested. > Also, I know it's been a tough year for your family, how are things? > Best, > Your Friend ## How to Use Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. 
This is because we use a custom model architecture that is not yet part of the `transformers` package. It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-30b-instruct', trust_remote_code=True ) ``` To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-30b-instruct' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). ```python from transformers import pipeline with torch.autocast('cuda', dtype=torch.bfloat16): inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda') outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # or using the HF pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ### Formatting This model was trained on data formatted as follows: ```python def format_prompt(instruction): template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n" return template.format(instruction=instruction) example = "Tell me a funny joke.\nDon't make it too funny though." fmt_ex = format_prompt(instruction=example) ``` In the above example, `fmt_ex` is ready to be tokenized and sent through the model. ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways: * It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) * It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings * It does not use biases | Hyperparameter | Value | |----------------|-------| |n_parameters | 29.95B | |n_layers | 48 | | n_heads | 64 | | d_model | 7168 | | vocab size | 50432 | | sequence length | 8192 | ## Data Mix The model was trained on the following data mix: | Data Source | Number of Tokens in Source | Proportion | |-------------|----------------------------|------------| | competition_math | 1.6 M | 3.66% | | cot_gsm8k | 3.36 M | 7.67% | | dialogsum | 0.1 M | 0.23% | | dolly_hhrlhf | 5.89 M | 13.43% | | duorc | 7.8 M | 17.80% | | qasper | 8.72 M | 19.90% | | quality | 11.29 M | 25.78% | | scrolls/summ_screen_fd | 4.97 M | 11.33% | | spider | 0.089 M | 0.20% | ## PreTraining Data For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b). The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ### Training Configuration This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer. ## Limitations and Biases _The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_ MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-30B-Instruct was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## Acknowledgements This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team ## MosaicML Platform If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b). ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Citation Please cite this model using the following format: ``` @online{MosaicML2023Introducing, author = {MosaicML NLP Team}, title = {Introducing MPT-30B: Raising the bar for open-source foundation models}, year = {2023}, url = {www.mosaicml.com/blog/mpt-30b}, note = {Accessed: 2023-06-22}, urldate = {2023-06-22} } ```
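For the GGUF files this repo actually ships, a rough loading sketch follows. It assumes the llama-cpp-python-mpt fork mentioned at the top of the card keeps upstream llama-cpp-python's `Llama` API, and the filename is a placeholder for whichever quantised file the repo contains; the prompt string simply follows the card's own `format_prompt` template.

```python
# Hedged sketch only: loading the MPT-30B-Instruct GGUF through the forked python bindings.
# Requires the llama-cpp-python-mpt fork noted above; the filename is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./mpt-30b-instruct.Q4_K_M.gguf", n_ctx=8192)  # placeholder filename

# Prompt format copied from the card's format_prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "###Instruction\nTell me a funny joke.\n\n### Response\n"
)
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```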
sronger/ko-llm-llama-2-7b-chat2
sronger
2023-11-29T07:49:35Z
1,280
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-29T07:24:09Z
Entry not found
TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF
TheBloke
2023-12-22T20:14:53Z
1,280
34
transformers
[ "transformers", "gguf", "mistral", "code", "text-generation-inference", "text-generation", "en", "dataset:glaiveai/glaive-code-assistant-v2", "dataset:TokenBender/code_instructions_122k_alpaca_style", "base_model:beowolx/CodeNinja-1.0-OpenChat-7B", "license:mit", "region:us" ]
text-generation
2023-12-22T20:10:18Z
--- base_model: beowolx/CodeNinja-1.0-OpenChat-7B datasets: - glaiveai/glaive-code-assistant-v2 - TokenBender/code_instructions_122k_alpaca_style inference: false language: - en license: mit metrics: - code_eval model_creator: beowulf model_name: CodeNinja 1.0 Openchat 7B model_type: mistral pipeline_tag: text-generation prompt_template: 'GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: ' quantized_by: TheBloke tags: - code - text-generation-inference --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeNinja 1.0 Openchat 7B - GGUF - Model creator: [beowulf](https://huggingface.co/beowolx) - Original model: [CodeNinja 1.0 Openchat 7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) <!-- description start --> ## Description This repo contains GGUF format model files for [beowulf's CodeNinja 1.0 Openchat 7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF) * [beowulf's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: OpenChat-Correct ``` GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codeninja-1.0-openchat-7b.Q2_K.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [codeninja-1.0-openchat-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [codeninja-1.0-openchat-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [codeninja-1.0-openchat-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [codeninja-1.0-openchat-7b.Q4_0.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codeninja-1.0-openchat-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [codeninja-1.0-openchat-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [codeninja-1.0-openchat-7b.Q5_0.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codeninja-1.0-openchat-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [codeninja-1.0-openchat-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [codeninja-1.0-openchat-7b.Q6_K.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [codeninja-1.0-openchat-7b.Q8_0.gguf](https://huggingface.co/TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF/blob/main/codeninja-1.0-openchat-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF and below it, a specific filename to download, such as: codeninja-1.0-openchat-7b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF codeninja-1.0-openchat-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeNinja-1.0-OpenChat-7B-GGUF codeninja-1.0-openchat-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m codeninja-1.0-openchat-7b.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./codeninja-1.0-openchat-7b.Q4_K_M.gguf", # Download the model file first n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["<|end_of_turn|>"], # Stop at this model's end-of-turn token (see the prompt template above) echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./codeninja-1.0-openchat-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: beowulf's CodeNinja 1.0 Openchat 7B <p align="center"> <img width="700px" alt="DeepSeek Coder" src="https://cdn-uploads.huggingface.co/production/uploads/64b566ab04fa6584c03b5247/5COagfF6EwrV4utZJ-ClI.png"> </p> <hr> # CodeNinja: Your Advanced Coding Assistant ## Overview CodeNinja is an enhanced version of the renowned model [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210). It represents a breakthrough in coding assistance, having been fine-tuned through Supervised Fine Tuning on two expansive datasets, encompassing over 400,000 coding instructions. Designed to be an indispensable tool for coders, CodeNinja aims to integrate seamlessly into your daily coding routine. Discover the quantized versions at: [beowolx/CodeNinja-1.0-OpenChat-7B-GGUF](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF). ### Key Features - **Expansive Training Database**: CodeNinja has been refined with datasets from [glaiveai/glaive-code-assistant-v2](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v2) and [TokenBender/code_instructions_122k_alpaca_style](https://huggingface.co/datasets/TokenBender/code_instructions_122k_alpaca_style), incorporating around 400,000 coding instructions across various languages including Python, C, C++, Rust, Java, JavaScript, and more. - **Flexibility and Scalability**: Available in a 7B model size, CodeNinja is adaptable for local runtime environments. 
- **Exceptional Performance**: Achieves top-tier results among publicly accessible coding models, particularly notable on benchmarks like HumanEval. - **Advanced Code Completion**: With a substantial context window size of 8192, it supports comprehensive project-level code completion. ## Prompt Format CodeNinja maintains the same prompt structure as OpenChat 3.5. Effective utilization requires adherence to this format: ``` GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant: ``` 🚨 Important: Ensure the use of `<|end_of_turn|>` as the end-of-generation token. **Adhering to this format is crucial for optimal results.** ## Usage Instructions ### Using LM Studio The simplest way to engage with CodeNinja is via the [quantized versions](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B-GGUF) on [LM Studio](https://lmstudio.ai/). Ensure you select the "OpenChat" preset, which incorporates the necessary prompt format. The preset is also available in this [gist](https://gist.github.com/beowolx/b219466681c02ff67baf8f313a3ad817). ### Using the Transformers Library ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Initialize the model model_path = "beowolx/CodeNinja-1.0-OpenChat-7B" model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto") # Load the OpenChat tokenizer tokenizer = AutoTokenizer.from_pretrained("openchat/openchat-3.5-1210", use_fast=True) def generate_one_completion(prompt: str): messages = [ {"role": "user", "content": prompt}, {"role": "assistant", "content": ""} # Model response placeholder ] # Generate token IDs using the chat template input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True) # Produce completion generate_ids = model.generate( torch.tensor([input_ids]).to("cuda"), max_length=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id ) # Process the completion completion = tokenizer.decode(generate_ids[0], skip_special_tokens=True) completion = completion.split("\n\n\n")[0].strip() return completion ``` ## License CodeNinja is licensed under the MIT License, with model usage subject to the Model License. ## Contact For queries or support, please open an issue in the repository. <!-- original-model-card end -->
tushar310/Hippy-AAI-7B
tushar310
2024-03-14T15:17:46Z
1,280
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "EmbeddedLLM/Mistral-7B-Merge-14-v0.1", "liminerity/M7-7b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-14T15:14:48Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - EmbeddedLLM/Mistral-7B-Merge-14-v0.1 - liminerity/M7-7b --- # Hippy-AAI-7B Hippy-AAI-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1) * [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b) ## 🧩 Configuration ```yaml slices: - sources: - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1 layer_range: [0, 32] - model: liminerity/M7-7b layer_range: [0, 32] merge_method: slerp base_model: liminerity/M7-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
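The card ends at the merge configuration. As a rough usage sketch under the assumption that the merged model behaves like a standard Mistral-7B causal LM in transformers (the generation settings are illustrative, not from the card):

```python
# Hedged usage sketch for the SLERP merge described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tushar310/Hippy-AAI-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer(
    "Explain in one paragraph what a SLERP model merge does.", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```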
meetkai/functionary-small-v2.5-GGUF
meetkai
2024-06-05T00:53:18Z
1,280
4
null
[ "gguf", "license:mit", "region:us" ]
null
2024-05-17T02:09:19Z
--- license: mit ---
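The card carries only the license. Since the repo ships GGUF weights, a generic llama-cpp-python loading sketch follows: the filename is a placeholder, and Functionary's function-calling workflow needs the model's own chat template and tool handling, which this plain-completion call does not cover.

```python
# Illustrative only: generic GGUF loading; not Functionary-specific tool-calling usage.
from llama_cpp import Llama

llm = Llama(model_path="./functionary-small-v2.5.Q4_0.gguf", n_ctx=4096)  # placeholder filename
print(llm("Hello, how can you help me?", max_tokens=64)["choices"][0]["text"])
```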
KBlueLeaf/Kohaku-XL-Epsilon-rev3
KBlueLeaf
2024-06-09T15:57:18Z
1,280
19
diffusers
[ "diffusers", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-09T15:51:54Z
--- license: other license_name: fair-ai-public-license-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ ---
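The card states only the license; the record's tags mark this as a diffusers StableDiffusionXLPipeline checkpoint, so here is a minimal loading sketch under that assumption (prompt and step count are illustrative):

```python
# Hedged sketch: loading the checkpoint with the standard SDXL pipeline named in the tags.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "KBlueLeaf/Kohaku-XL-Epsilon-rev3", torch_dtype=torch.float16
).to("cuda")

image = pipe("a detailed illustration of a lighthouse at dusk", num_inference_steps=28).images[0]
image.save("kohaku_sample.png")
```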
Sharathhebbar24/chat_gpt2_dpo
Sharathhebbar24
2024-05-09T14:29:48Z
1,279
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "dpo", "trl", "en", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:Intel/orca_dpo_pairs", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-24T06:13:17Z
--- language: - en license: apache-2.0 tags: - gpt2 - dpo - trl datasets: - HuggingFaceH4/ultrachat_200k - Intel/orca_dpo_pairs pipeline_tag: text-generation model-index: - name: chat_gpt2_dpo results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 23.98 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 31.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 24.95 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.26 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 49.96 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Sharathhebbar24/chat_gpt2_dpo name: Open LLM Leaderboard --- This model is a finetuned version of ```Sharathhebbar24/chat_gpt2``` using ```Intel/orca_dpo_pairs``` on DPO ## Model description GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifting one token (word or piece of word) to the right. The model uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was trained for, however, which is generating texts from a prompt. 
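The card names the method (DPO via trl on Intel/orca_dpo_pairs) but not the training side. The sketch below shows how such a pass is typically wired up; it is not the author's actual script, and the column mapping and all hyperparameters are assumptions.

```python
# Hedged illustration of a DPO pass with trl's DPOTrainer (not the author's script).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Sharathhebbar24/chat_gpt2"  # the SFT checkpoint named in the card
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Map Intel/orca_dpo_pairs into the prompt/chosen/rejected columns DPOTrainer expects.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")
ds = ds.map(lambda r: {"prompt": r["question"], "chosen": r["chosen"], "rejected": r["rejected"]})

args = TrainingArguments(
    output_dir="chat_gpt2_dpo",
    per_device_train_batch_size=2,   # assumed value
    num_train_epochs=1,              # assumed value
    remove_unused_columns=False,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,      # trl builds the frozen reference copy itself
    beta=0.1,            # assumed preference-strength value
    args=args,
    train_dataset=ds,
    tokenizer=tokenizer,
    max_length=512,
    max_prompt_length=256,
)
trainer.train()
```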
### To use this model ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> model_name = "Sharathhebbar24/chat_gpt2_dpo" >>> model = AutoModelForCausalLM.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> def generate_text(prompt): >>> inputs = tokenizer.encode(prompt, return_tensors='pt') >>> outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id) >>> generated = tokenizer.decode(outputs[0], skip_special_tokens=True) >>> return generated[:generated.rfind(".")+1] >>> prompt = """ >>> user: what are you? >>> assistant: I am a Chatbot intended to give a python program >>> user: hmm, can you write a python program to print Hii Heloo >>> assistant: Sure Here is a python code.\n print("Hii Heloo") >>> user: Can you write a Linear search program in python >>> """ >>> res = generate_text(prompt) >>> res ``` # Benchmark / Evaluation | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8k | | ------- | -------- | -------- | ------- | -------- | -------- | ------- | -------- | | Sharathhebbar24/chat_gpt2_dpo | 28.56 | 23.98 | 31.22 | 24.95 | 41.26 | 49.96 | 0 | ```python { "all": { "acc": 0.24915779048270345, "acc_stderr": 0.030509906389610868, "acc_norm": 0.25041231816215265, "acc_norm_stderr": 0.03132600249114931, "mc1": 0.2521419828641371, "mc1_stderr": 0.015201522246299965, "mc2": 0.41257163824244014, "mc2_stderr": 0.015127188811834062 }, "harness|arc:challenge|25": { "acc": 0.18686006825938567, "acc_stderr": 0.011391015649694391, "acc_norm": 0.23976109215017063, "acc_norm_stderr": 0.012476304127453954 }, "harness|hellaswag|10": { "acc": 0.28978291177056364, "acc_stderr": 0.004527343651130803, "acc_norm": 0.3121888070105557, "acc_norm_stderr": 0.0046243936909668975 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.22, "acc_stderr": 0.04163331998932268, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932268 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.3037037037037037, "acc_stderr": 0.039725528847851375, "acc_norm": 0.3037037037037037, "acc_norm_stderr": 0.039725528847851375 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.26, "acc_stderr": 0.0440844002276808, "acc_norm": 0.26, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.23018867924528302, "acc_stderr": 0.025907897122408173, "acc_norm": 0.23018867924528302, "acc_norm_stderr": 0.025907897122408173 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2569444444444444, "acc_stderr": 0.03653946969442099, "acc_norm": 0.2569444444444444, "acc_norm_stderr": 0.03653946969442099 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.19, "acc_stderr": 0.039427724440366234, "acc_norm": 0.19, "acc_norm_stderr": 0.039427724440366234 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.24, "acc_stderr": 0.04292346959909283, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.29, "acc_stderr": 0.04560480215720684, "acc_norm": 0.29, "acc_norm_stderr": 0.04560480215720684 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.2543352601156069, "acc_stderr": 0.0332055644308557, "acc_norm": 0.2543352601156069, "acc_norm_stderr": 0.0332055644308557 }, "harness|hendrycksTest-college_physics|5": { "acc": 
0.21568627450980393, "acc_stderr": 0.04092563958237654, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.04092563958237654 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.26382978723404255, "acc_stderr": 0.028809989854102973, "acc_norm": 0.26382978723404255, "acc_norm_stderr": 0.028809989854102973 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.24561403508771928, "acc_stderr": 0.04049339297748142, "acc_norm": 0.24561403508771928, "acc_norm_stderr": 0.04049339297748142 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2413793103448276, "acc_stderr": 0.03565998174135302, "acc_norm": 0.2413793103448276, "acc_norm_stderr": 0.03565998174135302 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24074074074074073, "acc_stderr": 0.02201908001221789, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.02201908001221789 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.1349206349206349, "acc_stderr": 0.030557101589417515, "acc_norm": 0.1349206349206349, "acc_norm_stderr": 0.030557101589417515 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.16, "acc_stderr": 0.03684529491774708, "acc_norm": 0.16, "acc_norm_stderr": 0.03684529491774708 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.1774193548387097, "acc_stderr": 0.02173254068932927, "acc_norm": 0.1774193548387097, "acc_norm_stderr": 0.02173254068932927 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.24630541871921183, "acc_stderr": 0.030315099285617736, "acc_norm": 0.24630541871921183, "acc_norm_stderr": 0.030315099285617736 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.28, "acc_stderr": 0.04512608598542126, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542126 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03225078108306289, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.3282828282828283, "acc_stderr": 0.03345678422756776, "acc_norm": 0.3282828282828283, "acc_norm_stderr": 0.03345678422756776 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.37305699481865284, "acc_stderr": 0.03490205592048573, "acc_norm": 0.37305699481865284, "acc_norm_stderr": 0.03490205592048573 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.26666666666666666, "acc_stderr": 0.02242127361292371, "acc_norm": 0.26666666666666666, "acc_norm_stderr": 0.02242127361292371 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.21481481481481482, "acc_stderr": 0.025040443877000683, "acc_norm": 0.21481481481481482, "acc_norm_stderr": 0.025040443877000683 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.22268907563025211, "acc_stderr": 0.027025433498882364, "acc_norm": 0.22268907563025211, "acc_norm_stderr": 0.027025433498882364 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.23178807947019867, "acc_stderr": 0.034454062719870546, "acc_norm": 0.23178807947019867, "acc_norm_stderr": 0.034454062719870546 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3302752293577982, "acc_stderr": 0.02016446633634298, "acc_norm": 0.3302752293577982, "acc_norm_stderr": 0.02016446633634298 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.19444444444444445, "acc_stderr": 
0.026991454502036733, "acc_norm": 0.19444444444444445, "acc_norm_stderr": 0.026991454502036733 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.25, "acc_stderr": 0.03039153369274154, "acc_norm": 0.25, "acc_norm_stderr": 0.03039153369274154 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.26582278481012656, "acc_stderr": 0.028756799629658342, "acc_norm": 0.26582278481012656, "acc_norm_stderr": 0.028756799629658342 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.17937219730941703, "acc_stderr": 0.0257498195691928, "acc_norm": 0.17937219730941703, "acc_norm_stderr": 0.0257498195691928 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2366412213740458, "acc_stderr": 0.037276735755969174, "acc_norm": 0.2366412213740458, "acc_norm_stderr": 0.037276735755969174 }, "harness|hendrycksTest-international_law|5": { "acc": 0.35537190082644626, "acc_stderr": 0.04369236326573981, "acc_norm": 0.35537190082644626, "acc_norm_stderr": 0.04369236326573981 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25925925925925924, "acc_stderr": 0.042365112580946336, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.2822085889570552, "acc_stderr": 0.03536117886664742, "acc_norm": 0.2822085889570552, "acc_norm_stderr": 0.03536117886664742 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.32142857142857145, "acc_stderr": 0.04432804055291519, "acc_norm": 0.32142857142857145, "acc_norm_stderr": 0.04432804055291519 }, "harness|hendrycksTest-management|5": { "acc": 0.1941747572815534, "acc_stderr": 0.03916667762822585, "acc_norm": 0.1941747572815534, "acc_norm_stderr": 0.03916667762822585 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2905982905982906, "acc_stderr": 0.02974504857267404, "acc_norm": 0.2905982905982906, "acc_norm_stderr": 0.02974504857267404 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.23627075351213284, "acc_stderr": 0.015190473717037497, "acc_norm": 0.23627075351213284, "acc_norm_stderr": 0.015190473717037497 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.24566473988439305, "acc_stderr": 0.02317629820399201, "acc_norm": 0.24566473988439305, "acc_norm_stderr": 0.02317629820399201 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24581005586592178, "acc_stderr": 0.014400296429225587, "acc_norm": 0.24581005586592178, "acc_norm_stderr": 0.014400296429225587 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.25163398692810457, "acc_stderr": 0.024848018263875195, "acc_norm": 0.25163398692810457, "acc_norm_stderr": 0.024848018263875195 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.18006430868167203, "acc_stderr": 0.021823422857744953, "acc_norm": 0.18006430868167203, "acc_norm_stderr": 0.021823422857744953 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.25617283950617287, "acc_stderr": 0.024288533637726095, "acc_norm": 0.25617283950617287, "acc_norm_stderr": 0.024288533637726095 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2801418439716312, "acc_stderr": 0.02678917235114023, "acc_norm": 0.2801418439716312, "acc_norm_stderr": 0.02678917235114023 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.24837027379400262, "acc_stderr": 0.011035212598034503, "acc_norm": 0.24837027379400262, "acc_norm_stderr": 0.011035212598034503 }, 
"harness|hendrycksTest-professional_medicine|5": { "acc": 0.3125, "acc_stderr": 0.02815637344037142, "acc_norm": 0.3125, "acc_norm_stderr": 0.02815637344037142 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.25, "acc_stderr": 0.01751781884501444, "acc_norm": 0.25, "acc_norm_stderr": 0.01751781884501444 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.18181818181818182, "acc_stderr": 0.03694284335337801, "acc_norm": 0.18181818181818182, "acc_norm_stderr": 0.03694284335337801 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.31020408163265306, "acc_stderr": 0.029613459872484378, "acc_norm": 0.31020408163265306, "acc_norm_stderr": 0.029613459872484378 }, "harness|hendrycksTest-sociology|5": { "acc": 0.24875621890547264, "acc_stderr": 0.030567675938916707, "acc_norm": 0.24875621890547264, "acc_norm_stderr": 0.030567675938916707 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-virology|5": { "acc": 0.19879518072289157, "acc_stderr": 0.03106939026078942, "acc_norm": 0.19879518072289157, "acc_norm_stderr": 0.03106939026078942 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.29239766081871343, "acc_stderr": 0.034886477134579215, "acc_norm": 0.29239766081871343, "acc_norm_stderr": 0.034886477134579215 }, "harness|truthfulqa:mc|0": { "mc1": 0.2521419828641371, "mc1_stderr": 0.015201522246299965, "mc2": 0.41257163824244014, "mc2_stderr": 0.015127188811834062 }, "harness|winogrande|5": { "acc": 0.4996053670086819, "acc_stderr": 0.014052481306049512 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sharathhebbar24__chat_gpt2_dpo) | Metric |Value| |---------------------------------|----:| |Avg. |28.56| |AI2 Reasoning Challenge (25-Shot)|23.98| |HellaSwag (10-Shot) |31.22| |MMLU (5-Shot) |24.95| |TruthfulQA (0-shot) |41.26| |Winogrande (5-shot) |49.96| |GSM8k (5-shot) | 0.00|
Isaak-Carter/J.O.S.I.E.3-Beta11-7B-slerp
Isaak-Carter
2024-05-15T10:10:21Z
1,279
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "cognitivecomputations/dolphin-2.8-experiment26-7b", "argilla/CapybaraHermes-2.5-Mistral-7B", "conversational", "en", "de", "es", "fr", "ja", "zh", "base_model:cognitivecomputations/dolphin-2.8-experiment26-7b", "base_model:argilla/CapybaraHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-05T08:39:11Z
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - cognitivecomputations/dolphin-2.8-experiment26-7b - argilla/CapybaraHermes-2.5-Mistral-7B base_model: - cognitivecomputations/dolphin-2.8-experiment26-7b - argilla/CapybaraHermes-2.5-Mistral-7B language: - en - de - es - fr - ja - zh --- # J.O.S.I.E.3-Beta11-7B-slerp J.O.S.I.E.3-Beta11-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [cognitivecomputations/dolphin-2.8-experiment26-7b](https://huggingface.co/cognitivecomputations/dolphin-2.8-experiment26-7b) * [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B) [-- GGUF Quants --](https://huggingface.co/Isaak-Carter/J.O.S.I.E.3-Beta11-7B-slerp-gguf) #### Run in ollama: ```sh ollama run goekdenizguelmez/j.o.s.i.e.v3-beta11 ``` Only Quant 4-k-m for now! This model will bee further Finetuned on my custom J.O.S.I.E.v3.13 Dataset, in the ChatML prompt Format. ```text <|im_start|>system You are JOSIE, a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|im_end|> <|im_start|>user {{ .Prompt }}<|im_end|> <|im_start|>assistant {{ .Response }}<|im_end|> ``` ## 🧩 Configuration ```yaml slices: - sources: - model: cognitivecomputations/dolphin-2.8-experiment26-7b layer_range: [0, 32] - model: argilla/CapybaraHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: argilla/CapybaraHermes-2.5-Mistral-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## Evaluation ```json { "all": { "acc": 0.6403971587643947, "acc_stderr": 0.03228725576276735, "acc_norm": 0.6413927640714372, "acc_norm_stderr": 0.03294011331780708, "mc1": 0.39167686658506734, "mc1_stderr": 0.017087795881769622, "mc2": 0.5576866593959974, "mc2_stderr": 0.01554622060467735 }, "harness|arc:challenge|25": { "acc": 0.6186006825938567, "acc_stderr": 0.014194389086685244, "acc_norm": 0.6450511945392492, "acc_norm_stderr": 0.013983036904094087 }, "harness|hellaswag|10": { "acc": 0.6738697470623382, "acc_stderr": 0.004678375103797962, "acc_norm": 0.8499302927703645, "acc_norm_stderr": 0.003564098420387764 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5777777777777777, "acc_stderr": 0.04266763404099582, "acc_norm": 0.5777777777777777, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6907894736842105, "acc_stderr": 0.037610708698674805, "acc_norm": 0.6907894736842105, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.63, "acc_stderr": 0.04852365870939099, "acc_norm": 0.63, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 
0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145634, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145634 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.048786087144669955, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.048786087144669955 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.548936170212766, "acc_stderr": 0.032529096196131965, "acc_norm": 0.548936170212766, "acc_norm_stderr": 0.032529096196131965 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4649122807017544, "acc_stderr": 0.046920083813689104, "acc_norm": 0.4649122807017544, "acc_norm_stderr": 0.046920083813689104 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5172413793103449, "acc_stderr": 0.04164188720169375, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.04164188720169375 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41534391534391535, "acc_stderr": 0.025379524910778398, "acc_norm": 0.41534391534391535, "acc_norm_stderr": 0.025379524910778398 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7741935483870968, "acc_stderr": 0.023785577884181012, "acc_norm": 0.7741935483870968, "acc_norm_stderr": 0.023785577884181012 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4729064039408867, "acc_stderr": 0.03512819077876106, "acc_norm": 0.4729064039408867, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621505, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.793939393939394, "acc_stderr": 0.0315841532404771, "acc_norm": 0.793939393939394, "acc_norm_stderr": 0.0315841532404771 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8181818181818182, "acc_stderr": 0.027479603010538804, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.027479603010538804 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9067357512953368, "acc_stderr": 0.02098685459328972, "acc_norm": 0.9067357512953368, "acc_norm_stderr": 0.02098685459328972 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.658974358974359, "acc_stderr": 0.024035489676335082, "acc_norm": 0.658974358974359, "acc_norm_stderr": 0.024035489676335082 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32592592592592595, "acc_stderr": 0.028578348365473072, "acc_norm": 0.32592592592592595, "acc_norm_stderr": 0.028578348365473072 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.680672268907563, "acc_stderr": 0.030283995525884396, "acc_norm": 0.680672268907563, "acc_norm_stderr": 0.030283995525884396 }, 
"harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8348623853211009, "acc_stderr": 0.015919557829976037, "acc_norm": 0.8348623853211009, "acc_norm_stderr": 0.015919557829976037 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5138888888888888, "acc_stderr": 0.03408655867977749, "acc_norm": 0.5138888888888888, "acc_norm_stderr": 0.03408655867977749 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.028626547912437413, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.028626547912437413 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8143459915611815, "acc_stderr": 0.02531049537694486, "acc_norm": 0.8143459915611815, "acc_norm_stderr": 0.02531049537694486 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6995515695067265, "acc_stderr": 0.030769352008229146, "acc_norm": 0.6995515695067265, "acc_norm_stderr": 0.030769352008229146 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7938931297709924, "acc_stderr": 0.03547771004159465, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.03547771004159465 }, "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.0395783547198098, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.0395783547198098 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7668711656441718, "acc_stderr": 0.0332201579577674, "acc_norm": 0.7668711656441718, "acc_norm_stderr": 0.0332201579577674 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5178571428571429, "acc_stderr": 0.047427623612430116, "acc_norm": 0.5178571428571429, "acc_norm_stderr": 0.047427623612430116 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8675213675213675, "acc_stderr": 0.022209309073165616, "acc_norm": 0.8675213675213675, "acc_norm_stderr": 0.022209309073165616 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8007662835249042, "acc_stderr": 0.014283378044296418, "acc_norm": 0.8007662835249042, "acc_norm_stderr": 0.014283378044296418 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7225433526011561, "acc_stderr": 0.024105712607754307, "acc_norm": 0.7225433526011561, "acc_norm_stderr": 0.024105712607754307 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.30726256983240224, "acc_stderr": 0.01543015884646961, "acc_norm": 0.30726256983240224, "acc_norm_stderr": 0.01543015884646961 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7320261437908496, "acc_stderr": 0.025360603796242557, "acc_norm": 0.7320261437908496, "acc_norm_stderr": 0.025360603796242557 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7138263665594855, "acc_stderr": 0.025670259242188933, "acc_norm": 0.7138263665594855, "acc_norm_stderr": 0.025670259242188933 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7283950617283951, "acc_stderr": 0.024748624490537368, "acc_norm": 
0.7283950617283951, "acc_norm_stderr": 0.024748624490537368 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48936170212765956, "acc_stderr": 0.029820747191422473, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.029820747191422473 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4706649282920469, "acc_stderr": 0.01274823839736555, "acc_norm": 0.4706649282920469, "acc_norm_stderr": 0.01274823839736555 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6764705882352942, "acc_stderr": 0.028418208619406762, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.028418208619406762 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6764705882352942, "acc_stderr": 0.018926082916083376, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.018926082916083376 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.02812342933514278, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.02812342933514278 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.02619392354445412, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.02619392354445412 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.83, "acc_stderr": 0.0377525168068637, "acc_norm": 0.83, "acc_norm_stderr": 0.0377525168068637 }, "harness|hendrycksTest-virology|5": { "acc": 0.5421686746987951, "acc_stderr": 0.0387862677100236, "acc_norm": 0.5421686746987951, "acc_norm_stderr": 0.0387862677100236 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 }, "harness|truthfulqa:mc|0": { "mc1": 0.39167686658506734, "mc1_stderr": 0.017087795881769622, "mc2": 0.5576866593959974, "mc2_stderr": 0.01554622060467735 }, "harness|winogrande|5": { "acc": 0.7884767166535123, "acc_stderr": 0.011477747684223188 }, "harness|gsm8k|5": { "acc": 0.6360879454131918, "acc_stderr": 0.013252539227966195 } } ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "Isaak-Carter/J.O.S.I.E.3-Beta11-7B-slerp" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ludocomito/Minerva-MoE-3x3B
ludocomito
2024-05-13T18:00:08Z
1,279
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "sapienzanlp/Minerva-3B-base-v1.0", "DeepMount00/Minerva-3B-base-RAG", "FairMind/Minerva-3B-Instruct-v1.0", "base_model:sapienzanlp/Minerva-3B-base-v1.0", "base_model:DeepMount00/Minerva-3B-base-RAG", "base_model:FairMind/Minerva-3B-Instruct-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T17:46:02Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - sapienzanlp/Minerva-3B-base-v1.0 - DeepMount00/Minerva-3B-base-RAG - FairMind/Minerva-3B-Instruct-v1.0 base_model: - sapienzanlp/Minerva-3B-base-v1.0 - DeepMount00/Minerva-3B-base-RAG - FairMind/Minerva-3B-Instruct-v1.0 --- # Minerva-MoE-3x3B Minerva-MoE-3x3B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [sapienzanlp/Minerva-3B-base-v1.0](https://huggingface.co/sapienzanlp/Minerva-3B-base-v1.0) * [DeepMount00/Minerva-3B-base-RAG](https://huggingface.co/DeepMount00/Minerva-3B-base-RAG) * [FairMind/Minerva-3B-Instruct-v1.0](https://huggingface.co/FairMind/Minerva-3B-Instruct-v1.0) ## 🧩 Configuration ```yaml base_model: sapienzanlp/Minerva-3B-base-v1.0 experts: - source_model: sapienzanlp/Minerva-3B-base-v1.0 positive_prompts: - "ciao" - "chat" - "parlare" - source_model: DeepMount00/Minerva-3B-base-RAG positive_prompts: - "rispondi a domande" - "cosa è" - "chi è" - "dove è" - "come si" - "spiegami" - "definisci" - source_model: FairMind/Minerva-3B-Instruct-v1.0 positive_prompts: - "istruzione" - "input" - "risposta" - "scrivi" - "sequenza" - "istruzioni" dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "ludocomito/Minerva-MoE-3x3B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
4stack/Geom-ViT-L-14-336-CLIP
4stack
2024-06-27T04:09:45Z
1,279
0
open_clip
[ "open_clip", "safetensors", "clip", "zero-shot-image-classification", "license:mit", "region:us" ]
zero-shot-image-classification
2024-06-24T00:14:29Z
--- tags: - clip library_name: open_clip pipeline_tag: zero-shot-image-classification license: mit --- # Model card for Geom-ViT-L-14-336-CLIP
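A minimal zero-shot classification sketch with open_clip (the library this repo is tagged with) is shown below. The `hf-hub:` identifier and the candidate class prompts are assumptions inferred from the repo name and pipeline tag, not instructions from the card itself.

```python
# Hypothetical open_clip usage; the hf-hub identifier is inferred from the repo name.
import torch
import open_clip
from PIL import Image

repo = "hf-hub:4stack/Geom-ViT-L-14-336-CLIP"  # assumed Hub identifier
model, _, preprocess = open_clip.create_model_and_transforms(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # 1 x 3 x 336 x 336
text = tokenizer(["a diagram", "a photo of a cat", "a photo of a dog"])  # illustrative labels

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # cosine similarity -> class probabilities
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate label
```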
Helsinki-NLP/opus-mt-bn-en
Helsinki-NLP
2023-08-16T11:26:30Z
1,278
5
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "bn", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- language: - bn - en tags: - translation license: apache-2.0 --- ### ben-eng * source group: Bengali * target group: English * OPUS readme: [ben-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md) * model: transformer-align * source language(s): ben * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ben.eng | 49.7 | 0.641 | ### System Info: - hf_name: ben-eng - source_languages: ben - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ben-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['bn', 'en'] - src_constituents: {'ben'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ben-eng/opus-2020-06-17.test.txt - src_alpha3: ben - tgt_alpha3: eng - short_pair: bn-en - chrF2_score: 0.6409999999999999 - bleu: 49.7 - brevity_penalty: 0.976 - ref_len: 13978.0 - src_name: Bengali - tgt_name: English - train_date: 2020-06-17 - src_alpha2: bn - tgt_alpha2: en - prefer_old: False - long_pair: ben-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
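A short translation sketch with the transformers Marian classes; the Bengali example sentence is illustrative only.

```python
# Minimal Bengali -> English translation sketch using transformers.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-bn-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["আমি বই পড়তে ভালোবাসি।"]  # illustrative input ("I love reading books.")
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```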
ckiplab/albert-base-chinese-ws
ckiplab
2022-05-10T03:28:09Z
1,278
2
transformers
[ "transformers", "pytorch", "albert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - albert - zh license: gpl-3.0 --- # CKIP ALBERT Base Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/albert-base-chinese-ws') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
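Beyond loading the weights, a rough word-segmentation inference sketch is shown below. The exact boundary-label semantics are an assumption; the ckip-transformers package linked above wraps this decoding step and is the recommended interface.

```python
# Rough inference sketch: run the checkpoint as a token-classification pipeline.
# The B/I word-boundary label semantics are an assumption; prefer the ckip-transformers
# package linked above, which decodes the tags into word lists for you.
from transformers import BertTokenizerFast, AutoModelForTokenClassification, pipeline

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = AutoModelForTokenClassification.from_pretrained("ckiplab/albert-base-chinese-ws")

ws = pipeline("token-classification", model=model, tokenizer=tokenizer)
print(ws("傅達仁今將執行安樂死"))  # each character receives a word-boundary tag
```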
timm/tf_efficientnet_b2.ns_jft_in1k
timm
2023-04-27T21:18:15Z
1,278
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1911.04252", "license:apache-2.0", "region:us" ]
image-classification
2022-12-13T00:02:26Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_b2.ns_jft_in1k A EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 9.1 - GMACs: 1.0 - Activations (M): 13.8 - Image size: 260 x 260 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_b2.ns_jft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b2.ns_jft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 16, 130, 130]) # torch.Size([1, 24, 65, 65]) # torch.Size([1, 48, 33, 33]) # torch.Size([1, 120, 17, 17]) # torch.Size([1, 352, 9, 9]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b2.ns_jft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1408, 9, 9) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model 
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{Xie2019SelfTrainingWN, title={Self-Training With Noisy Student Improves ImageNet Classification}, author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le}, journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={10684-10695} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Doctor-Shotgun/TinyLlama-1.1B-32k
Doctor-Shotgun
2024-02-02T21:25:35Z
1,278
28
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama 2", "en", "dataset:togethercomputer/RedPajama-Data-1T-Sample", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-29T05:19:34Z
--- license: apache-2.0 datasets: - togethercomputer/RedPajama-Data-1T-Sample language: - en tags: - llama - llama 2 --- # TinyLlama-1.1B-32k 32k context finetune of TinyLlama-1.1B using increased rope theta (rope frequency base) meant to serve as a long-context speculative decoding model. Created using [TinyLlama-1.1B](https://huggingface.co/TinyLlama/tinyLlama-intermediate-checkpoints-after-1T-token) and further pretraining at 32768 context length on [togethercomputer/RedPajama-Data-1T-Sample](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample). Of note, the base checkpoint used was from commit "final model" fad4f1a5cd0563ac41349b8fec2e6e51156568a0 which was subsequently reverted, and not the current main branch 3T checkpoint of TinyLlama-1.1B. [EXL2 Quants by turboderp](https://huggingface.co/turboderp/TinyLlama-1B-32k-exl2) The quantized model fits alongside a 4.25bpw 70B model at 32k sequence length on a single A6000 and provides noticeable speed-up with speculative decoding. ### Wikitext (wikitext-2-raw-v1_train) Perplexity (64 rows) as evaluated via [exllamav2](https://github.com/turboderp/exllamav2): | Model | 2048 | 4096 | 8192 | 16384 | 32768 | | ---------------------- | ---------- | ---------- | ---------- | ---------- | ---------- | | TinyLlama-1.1B | **8.5633** | 208.3586 | 863.7507 | 1600.5021 | 6981.9021 | | **TinyLlama-1.1B-32k** | 8.6548 | **7.8339** | **7.4904** | **7.3674** | **7.1338** | ### Evaluation on HumanEval by [turboderp](https://huggingface.co/turboderp): | Model | Pass@1 | Pass@10 | | -------------------------------------- | --------------- | ----------- | | TinyLlama-1.1B | **0.0841** | **0.1524** | | TinyLlama-1.1B (NTK alpha=7.7) | 0.0598 | 0.1098 | | TinyLlama-1.1B-32k-ckpt-554 | 0.0732 | 0.1402 | | **TinyLlama-1.1B-32k** | 0.0829 | **0.1524** |
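Since the card positions this as a draft model for speculative decoding, a sketch of that setup via transformers assisted generation follows. The large target model named here is only a placeholder; the draft must share the target's tokenizer and vocabulary (Llama-style, as here).

```python
# Hypothetical speculative-decoding setup: TinyLlama-1.1B-32k as the draft ("assistant")
# model for a larger Llama-family target via transformers assisted generation.
# The target model name is a placeholder -- substitute the model you actually serve.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "meta-llama/Llama-2-70b-hf"          # placeholder target model
draft_name = "Doctor-Shotgun/TinyLlama-1.1B-32k"

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name, torch_dtype=torch.float16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_name, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Summarize the following report:", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```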
occiglot/occiglot-7b-it-en-instruct
occiglot
2024-03-12T20:24:14Z
1,278
5
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "it", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-05T10:15:33Z
--- license: apache-2.0 language: - en - it pipeline_tag: text-generation --- ![image/png](https://huggingface.co/datasets/malteos/images/resolve/main/occiglot.medium.png) # Occiglot-7B-it-en-Instruct > A [polyglot](https://en.wikipedia.org/wiki/Multilingualism#In_individuals) language model for the [Occident](https://en.wikipedia.org/wiki/Occident). > **Occiglot-7B-it-en-Instruct** is the instruct version of [occiglot-7b-it-en](https://huggingface.co/occiglot/occiglot-7b-it-en), a generative language model with 7B parameters supporting English and Italian, trained by the [Occiglot Research Collective](https://occiglot.github.io/occiglot/). It was trained on 160M tokens of additional multilingual and code instructions. Note that the model was not safety aligned and might generate problematic outputs. This is the first release of an ongoing open research project for multilingual language models. If you want to train a model for your own language or are working on evaluations, please contact us or join our [Discord server](https://discord.gg/wUpvYs4XvM). **We are open for collaborations!** ### Model details - **Instruction tuned from:** [occiglot-7b-it-en](https://huggingface.co/occiglot/occiglot-7b-it-en) - **Model type:** Causal decoder-only transformer language model - **Languages:** English, Italian, and code. - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html) - **Compute resources:** [DFKI cluster](https://www.dfki.de/en/web) - **Contributors:** Manuel Brack, Patrick Schramowski, Pedro Ortiz, Malte Ostendorff, Fabio Barth, Georg Rehm, Kristian Kersting - **Research labs:** [Occiglot](https://occiglot.github.io/occiglot/) with support from [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) and [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology) - **Contact:** [Discord](https://discord.gg/wUpvYs4XvM) ### How to use The model was trained using the chatml instruction template. You can use the transformers chat template feature for interaction. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import AutoTokenizer, MistralForCausalLM, set_seed >>> tokenizer = AutoTokenizer.from_pretrained("occiglot/occiglot-7b-it-en-instruct") >>> model = MistralForCausalLM.from_pretrained('occiglot/occiglot-7b-it-en-instruct') # You may want to use bfloat16 and/or move to GPU here >>> set_seed(42) >>> messages = [ >>> {"role": "system", 'content': 'You are a helpful assistant. Please give short and concise answers.'}, >>> {"role": "user", "content": "chi è il primo ministro italiano?"}, >>> ] >>> tokenized_chat = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_dict=False, return_tensors='pt',) >>> set_seed(42) >>> outputs = model.generate(tokenized_chat.to('cuda'), max_new_tokens=200,) >>> tokenizer.decode(outputs[0][len(tokenized_chat[0]):]) 'Il primo ministro italiano è attualmente Giorgia Meloni, presidente di Fratelli d'Italia, un partito politico di estrema destra.' ``` ## Dataset The training data was split evenly amongst the languages based on the total number of tokens. We would like to thank Disco Research and Björn Plüster for making their dataset available to us.
**English and Code** - [Open-Hermes-2B](https://huggingface.co/datasets/teknium/OpenHermes-2.5) **Italian** - [Quora-IT-Baize](https://huggingface.co/datasets/andreabac3/Quora-Italian-Fauno-Baize) - [Stackoverflow-IT-Vaize](https://huggingface.co/datasets/andreabac3/StackOverflow-Italian-Fauno-Baize) - [Camoscio](https://huggingface.co/datasets/teelinsan/camoscio_cleaned) - [OASST-2](https://huggingface.co/datasets/OpenAssistant/oasst2) (Italian subset) - [Aya-Dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) (Italian subset) ## Training settings - Full instruction fine-tuning on 8xH100. - 0.6 - 4 training epochs (depending on dataset sampling). - Framework: [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) - Precision: bf16 - Optimizer: AdamW - Global batch size: 128 (with 8192 context length) - Cosine Annealing with Warmup ## Tokenizer Tokenizer is unchanged from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). ## Evaluation Preliminary evaluation results can be found below. Please note that the non-English results are based on partially machine-translated datasets and English prompts ([Belebele](https://huggingface.co/datasets/facebook/belebele) and [Okapi framework](https://github.com/nlp-uoregon/Okapi)) and thus should be interpreted with caution, e.g., biased towards English model performance. Currently, we are working on more suitable benchmarks for Spanish, French, German, and Italian. <details> <summary>Evaluation results</summary> ### English | | arc_challenge | belebele | hellaswag | mmlu | truthfulqa | avg | |:-------------------------------------|----------------:|-----------:|------------:|---------:|-------------:|---------:| | occiglot/occiglot-7b-eu5 | 0.530717 | 0.726667 | 0.789882 | 0.531904 | 0.403678 | 0.59657 | | occiglot/occiglot-7b-eu5-instruct | 0.558874 | 0.746667 | 0.799841 | 0.535109 | 0.449034 | 0.617905 | | occiglot/occiglot-7b-it-en | 0.580205 | 0.774444 | 0.804222 | 0.578977 | 0.412786 | 0.630127 | | occiglot/occiglot-7b-it-en-instruct | 0.609215 | 0.82 | 0.809301 | 0.578835 | 0.479562 | 0.659383 | | galatolo/cerbero-7b | 0.613481 | 0.827778 | 0.810396 | 0.600484 | 0.480911 | 0.66661 | | mistralai/Mistral-7B-v0.1 | 0.612628 | 0.844444 | 0.834097 | 0.624555 | 0.426201 | 0.668385 | | mistralai/Mistral-7B-Instruct-v0.2 | 0.637372 | 0.824444 | 0.846345 | 0.59201 | 0.668116 | 0.713657 | ### Italian | | arc_challenge_it | belebele_it | hellaswag_it | mmlu_it | truthfulqa_it | avg | |:-------------------------------------|-------------------:|--------------:|---------------:|----------:|----------------:|---------:| | occiglot/occiglot-7b-eu5 | 0.501283 | 0.652222 | 0.700533 | 0 | 0.252874 | 0.421382 | | occiglot/occiglot-7b-eu5-instruct | 0.516681 | 0.661111 | 0.71326 | 0 | 0.295019 | 0.437214 | | occiglot/occiglot-7b-it-en | 0.536356 | 0.684444 | 0.694768 | 0 | 0.247765 | 0.432667 | | occiglot/occiglot-7b-it-en-instruct | 0.545766 | 0.717778 | 0.713804 | 0 | 0.303959 | 0.456261 | | galatolo/cerbero-7b | 0.522669 | 0.717778 | 0.631567 | 0 | 0.302682 | 0.434939 | | mistralai/Mistral-7B-v0.1 | 0.502139 | 0.734444 | 0.630371 | 0 | 0.264368 | 0.426264 | | mistralai/Mistral-7B-Instruct-v0.2 | 0.519247 | 0.703333 | 0.6394 | 0 | 0.349936 | 0.442383 | </details> ## Acknowledgements The pre-trained model training was supported by a compute grant at the [42 supercomputer](https://hessian.ai/) which is a central component in the development of [hessian AI](https://hessian.ai/), the [AI Innovation 
Lab](https://hessian.ai/infrastructure/ai-innovationlab/) (funded by the [Hessian Ministry of Higher Education, Research and the Art (HMWK)](https://wissenschaft.hessen.de) & the [Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)](https://innen.hessen.de)) and the [AI Service Centers](https://hessian.ai/infrastructure/ai-service-centre/) (funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html)). The curation of the training data is partially funded by the [German Federal Ministry for Economic Affairs and Climate Action (BMWK)](https://www.bmwk.de/Navigation/EN/Home/home.html) through the project [OpenGPT-X](https://opengpt-x.de/en/) (project no. 68GX21007D). ## License [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html) ## See also - https://huggingface.co/collections/occiglot/occiglot-eu5-7b-v01-65dbed502a6348b052695e01 - https://huggingface.co/NikolayKozloff/occiglot-7b-it-en-GGUF
FreedomIntelligence/Apollo-0.5B
FreedomIntelligence
2024-04-26T11:10:55Z
1,278
3
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2403.03640", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-06T13:04:41Z
--- license: apache-2.0 --- # Multilingual Medicine: Model, Dataset, Benchmark, Code Covering English, Chinese, French, Hindi, Spanish, Hindi, Arabic So far <p align="center"> 👨🏻‍💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> •📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <br> <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English </p> ![Apollo](assets/apollo_medium_final.png) ## 🌈 Update * **[2024.04.25]** [MedJamba](https://huggingface.co/FreedomIntelligence/Apollo-MedJamba) released, train and evaluation code refer to [repo](https://github.com/FreedomIntelligence/MedJamba). * **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released. * **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> is published!🎉 * **[2024.01.23]** Apollo repo is published!🎉 ## Results 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-34B" target="_blank">Apollo-34B</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-72B" target="_blank">Apollo-72B</a> 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MedJamba" target="_blank">MedJamba</a> 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🤗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a> ![Apollo](assets/result.png) ## Usage Format User:{query}\nAssistant:{response}<|endoftext|> ## Dataset & Evaluation - Dataset 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> <details><summary>Click to expand</summary> ![Apollo](assets/dataset.png) - [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip) - [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train) - Pretrain: - data item: - json_name: {data_source}_{language}_{data_type}.json - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki - language: en(English), zh(chinese), es(spanish), fr(french), hi(Hindi) - data_type: qa(generated qa from text) - data_type==text: list of string ``` [ "string1", "string2", ... ] ``` - data_type==qa: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... 
] ``` - SFT: - json_name: {data_source}_{language}.json - data_type: code, general, math, medicalExam, medicalPatient - data item: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... ] ``` </details> - Evaluation 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <details><summary>Click to expand</summary> - EN: - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test) - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper. - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - ZH: - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test) - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper - Randomly sample 2,000 multiple-choice questions with single answer. - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu) - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper - Randomly sample 2,000 multiple-choice questions - ES: [Head_qa](https://huggingface.co/datasets/head_qa) - FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine </details> ## Results reproduction <details><summary>Click to expand</summary> **Waiting for Update** </details> ## Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{wang2024apollo, title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People}, author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang}, year={2024}, eprint={2403.03640}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
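The "Usage Format" section above only states the prompt template; a minimal generation sketch that applies it with transformers follows (the sampling settings are illustrative, not tuned recommendations).

```python
# Minimal sketch applying the card's "User:{query}\nAssistant:" prompt format.
# Sampling settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "FreedomIntelligence/Apollo-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

query = "What are common symptoms of iron-deficiency anemia?"
prompt = f"User:{query}\nAssistant:"  # format stated in the card

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```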
fangyuan/tqa_extractive_compressor
fangyuan
2024-03-08T18:00:37Z
1,278
0
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2024-03-08T17:04:04Z
Entry not found
RachidAR/Llama-3-8B-saiga-suzume-ties-Q6_K-GGUF-OLD
RachidAR
2024-04-26T07:50:55Z
1,277
2
null
[ "gguf", "merge", "mergekit", "lazymergekit", "IlyaGusev/saiga_llama3_8b", "lightblue/suzume-llama-3-8B-multilingual", "llama-cpp", "gguf-my-repo", "text-generation", "ru", "en", "base_model:IlyaGusev/saiga_llama3_8b", "base_model:lightblue/suzume-llama-3-8B-multilingual", "license:llama3", "region:us" ]
text-generation
2024-04-26T07:50:38Z
--- language: - ru - en license: llama3 tags: - merge - mergekit - lazymergekit - IlyaGusev/saiga_llama3_8b - lightblue/suzume-llama-3-8B-multilingual - llama-cpp - gguf-my-repo base_model: - IlyaGusev/saiga_llama3_8b - lightblue/suzume-llama-3-8B-multilingual pipeline_tag: text-generation --- # RachidAR/Llama-3-8B-saiga-suzume-ties-Q6_K-GGUF This model was converted to GGUF format from [`d0rj/Llama-3-8B-saiga-suzume-ties`](https://huggingface.co/d0rj/Llama-3-8B-saiga-suzume-ties) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/d0rj/Llama-3-8B-saiga-suzume-ties) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo RachidAR/Llama-3-8B-saiga-suzume-ties-Q6_K-GGUF --model llama-3-8b-saiga-suzume-ties.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo RachidAR/Llama-3-8B-saiga-suzume-ties-Q6_K-GGUF --model llama-3-8b-saiga-suzume-ties.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-3-8b-saiga-suzume-ties.Q6_K.gguf -n 128 ```
AlekseyElygin/mistral-7b-instruct-v0.3-bnb-8ep-GGUF
AlekseyElygin
2024-06-28T06:52:20Z
1,277
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-27T09:47:06Z
--- base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf --- # Uploaded model - **Developed by:** AlekseyElygin - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Bakanayatsu/Pony-Diffusion-V6-XL-for-Anime
Bakanayatsu
2024-03-04T15:27:30Z
1,276
4
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-03-04T10:26:19Z
The same model as the original [Pony Diffusion for Anime](https://civitai.com/models/315596/pony-diffusion-for-anime), converted to the diffusers format.
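A minimal text-to-image sketch with diffusers is shown below, assuming the repo loads as a standard SDXL pipeline (as its StableDiffusionXLPipeline tag indicates); the prompt and sampler settings are illustrative.

```python
# Minimal SDXL text-to-image sketch; prompt and settings are illustrative only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Bakanayatsu/Pony-Diffusion-V6-XL-for-Anime",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="score_9, score_8_up, 1girl, anime style, detailed background",  # Pony-style score tags
    negative_prompt="low quality, blurry",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("pony_anime_sample.png")
```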
OEvortex/MediKAI
OEvortex
2024-03-08T15:25:05Z
1,276
2
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-08T07:19:40Z
--- license: other --- # MediKAI - Your Healthcare Companion 🏥💬 Welcome to mediKAI, the latest healthcare-focused model by HelpingAI, designed to provide personalized assistance and support in medical-related queries. ## Overview mediKAI is a 14-billion-parameter model that specializes in healthcare-related topics and medical assistance. Whether you have questions about symptoms, treatments, medications, or general health and wellness, mediKAI is here to help. ## Languages Supported - English - French - Hindi - Spanish - Arabic
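A basic transformers loading sketch follows; the card does not document a chat template, so a plain prompt is used, and the prompt itself is illustrative.

```python
# Basic generation sketch; no chat template is documented in the card, so a plain
# prompt is used here. Prompt and sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OEvortex/MediKAI"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

prompt = "Question: What lifestyle changes help manage mild hypertension?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```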
fatgong/5C8RFeGG3JtVDu9WUzQJjJFDuB3YfE2d4SVqi3Qj9vaXqvKm_vgg
fatgong
2024-03-20T14:12:33Z
1,276
0
keras
[ "keras", "region:us" ]
null
2024-03-09T14:08:03Z
Entry not found
segment-any-text/sat-3l
segment-any-text
2024-06-26T08:26:52Z
1,276
1
transformers
[ "transformers", "pytorch", "xlm-token", "token-classification", "multilingual", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "pa", "pl", "ps", "pt", "ro", "ru", "si", "sk", "sl", "sq", "sr", "sv", "ta", "te", "tg", "th", "tr", "uk", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "arxiv:2406.16678", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-06-16T09:15:05Z
--- license: mit language: - multilingual - am - ar - az - be - bg - bn - ca - ceb - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lt - lv - mg - mk - ml - mn - mr - ms - mt - my - ne - nl - no - pa - pl - ps - pt - ro - ru - si - sk - sl - sq - sr - sv - ta - te - tg - th - tr - uk - ur - uz - vi - xh - yi - yo - zh - zu library: - wtpsplit --- # sat-3l Model for [`wtpsplit`](https://github.com/segment-any-text/wtpsplit). State-of-the-art sentence segmentation with 3 Transformer layers. For details, see our [`Segment any Text` paper](https://arxiv.org/abs/2406.16678).
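A minimal segmentation sketch using the wtpsplit library referenced above, assuming the `SaT` interface of current wtpsplit releases:

```python
# Minimal sentence segmentation with wtpsplit's SaT interface (assumed from current releases).
from wtpsplit import SaT

sat = SaT("sat-3l")                # pulls this checkpoint from the Hub
# sat.half().to("cuda")            # optional: GPU + fp16 for speed

print(sat.split("This is a test This is another test."))  # -> two segments

# Batched segmentation over several documents
for segments in sat.split(["Document one here", "Another short document"]):
    print(segments)
```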
zero-one-01/mistral-7b-orpo
zero-one-01
2024-06-27T03:50:41Z
1,276
0
peft
[ "peft", "pytorch", "safetensors", "gguf", "mistral", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "region:us" ]
null
2024-06-27T03:39:16Z
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
vdo/stable-video-diffusion-img2vid-xt-1-1
vdo
2024-02-05T19:32:48Z
1,275
11
diffusers
[ "diffusers", "safetensors", "image-to-video", "license:other", "diffusers:StableVideoDiffusionPipeline", "region:us" ]
image-to-video
2024-02-05T19:27:03Z
--- pipeline_tag: image-to-video license: other license_name: stable-video-diffusion-1-1-nc-community license_link: LICENSE extra_gated_prompt: >- STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT Dated: February 2, 2024 By clicking “I Accept” below or by using or distributing any portion or element of the Models, Software, Software Products or Derivative Works, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to use the Software Products or Derivative Works through this License, and you must immediately cease using the Software Products or Derivative Works. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to Stability AI that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the Software Products or Derivative Works on behalf of your employer or other entity. "Agreement" means this Stable Non-Commercial Research Community License Agreement. “AUP” means the Stability AI Acceptable Use Policy available at https://stability.ai/use-policy, as may be updated from time to time. "Derivative Work(s)” means (a) any derivative work of the Software Products as recognized by U.S. copyright laws and (b) any modifications to a Model, and any other model created which is based on or derived from the Model or the Model’s output. For clarity, Derivative Works do not include the output of any Model. “Documentation” means any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Model(s)" means, collectively, Stability AI’s proprietary models and algorithms, including machine-learning models, trained model weights and other elements of the foregoing, made available under this Agreement. “Non-Commercial Uses” means exercising any of the rights granted herein for the purpose of research or non-commercial purposes. Non-Commercial Uses does not include any production use of the Software Products or any Derivative Works. "Stability AI" or "we" means Stability AI Ltd. and its affiliates. "Software" means Stability AI’s proprietary software made available under this Agreement. “Software Products” means the Models, Software and Documentation, individually or in any combination. 1. License Rights and Redistribution. a. Subject to your compliance with this Agreement, the AUP (which is hereby incorporated herein by reference), and the Documentation, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI’s intellectual property or other rights owned or controlled by Stability AI embodied in the Software Products to use, reproduce, distribute, and create Derivative Works of, the Software Products, in each case for Non-Commercial Uses only. b. 
You may not use the Software Products or Derivative Works to enable third parties to use the Software Products or Derivative Works as part of your hosted service or via your APIs, whether you are adding substantial additional functionality thereto or not. Merely distributing the Software Products or Derivative Works for download online without offering any related service (ex. by distributing the Models on HuggingFace) is not a violation of this subsection. If you wish to use the Software Products or any Derivative Works for commercial or production use or you wish to make the Software Products or any Derivative Works available to third parties via your hosted service or your APIs, contact Stability AI at https://stability.ai/contact. c. If you distribute or make the Software Products, or any Derivative Works thereof, available to a third party, the Software Products, Derivative Works, or any portion thereof, respectively, will remain subject to this Agreement and you must (i) provide a copy of this Agreement to such third party, and (ii) retain the following attribution notice within a "Notice" text file distributed as a part of such copies: "This Stability AI Model is licensed under the Stability AI Non-Commercial Research Community License, Copyright (c) Stability AI Ltd. All Rights Reserved.” If you create a Derivative Work of a Software Product, you may add your own attribution notices to the Notice file included with the Software Product, provided that you clearly indicate which attributions apply to the Software Product and you must state in the NOTICE file that you changed the Software Product and how it was modified. 2. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE SOFTWARE PRODUCTS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE SOFTWARE PRODUCTS, DERIVATIVE WORKS OR ANY OUTPUT OR RESULTS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE SOFTWARE PRODUCTS, DERIVATIVE WORKS AND ANY OUTPUT AND RESULTS. 3. Limitation of Liability. IN NO EVENT WILL STABILITY AI OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY DIRECT, INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF STABILITY AI OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 4. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Software Products or Derivative Works, neither Stability AI nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Software Products or Derivative Works. b. Subject to Stability AI’s ownership of the Software Products and Derivative Works made by or for Stability AI, with respect to any Derivative Works that are made by you, as between you and Stability AI, you are and will be the owner of such Derivative Works c. 
If you institute litigation or other proceedings against Stability AI (including a cross-claim or counterclaim in a lawsuit) alleging that the Software Products, Derivative Works or associated outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Stability AI from and against any claim by any third party arising out of or related to your use or distribution of the Software Products or Derivative Works in violation of this Agreement. 5. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Software Products and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Stability AI may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of any Software Products or Derivative Works. Sections 2-4 shall survive the termination of this Agreement. 6. Governing Law. This Agreement will be governed by and construed in accordance with the laws of the United States and the State of California without regard to choice of law principles. extra_gated_description: Stable Video Diffusion 1.1 License Agreement extra_gated_button_content: Submit extra_gated_fields: Name: text Company Name (if applicable): text Email: text Other Comments: text By clicking here, you accept the License agreement, and will use the Software Products and Derivative Works for non-commercial or research purposes only: checkbox By clicking here, you agree to sharing with Stability AI the information contained within this form and that Stability AI can contact you for the purposes of marketing our products and services: checkbox --- # Stable Video Diffusion 1.1 Image-to-Video Model Card <!-- Provide a quick summary of what the model is/does. --> ![row01](svd11.webp) Stable Video Diffusion (SVD) 1.1 Image-to-Video is a diffusion model that takes in a still image as a conditioning frame, and generates a video from it. ## Model Details ### Model Description (SVD 1.1) Image-to-Video is a latent diffusion model trained to generate short video clips from an image conditioning. This model was trained to generate 25 frames at resolution 1024x576 given a context frame of the same size, finetuned from [SVD Image-to-Video [25 frames]](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt). Fine tuning was performed with fixed conditioning at 6FPS and Motion Bucket Id 127 to improve the consistency of outputs without the need to adjust hyper parameters. These conditions are still adjustable and have not been removed. Performance outside of the fixed conditioning settings may vary compared to SVD 1.0. - **Developed by:** Stability AI - **Funded by:** Stability AI - **Model type:** Generative image-to-video model - **Finetuned from model:** SVD Image-to-Video [25 frames] ### Model Sources For research purposes, we recommend our `generative-models` Github repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference). 
- **Repository:** https://github.com/Stability-AI/generative-models - **Paper:** https://stability.ai/research/stable-video-diffusion-scaling-latent-video-diffusion-models-to-large-datasets ## Uses ### Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Research on generative models. - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. Excluded uses are described below. ### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. The model should not be used in any way that violates Stability AI's [Acceptable Use Policy](https://stability.ai/use-policy). ## Limitations and Bias ### Limitations - The generated videos are rather short (<= 4sec), and the model does not achieve perfect photorealism. - The model may generate videos without motion, or very slow camera pans. - The model cannot be controlled through text. - The model cannot render legible text. - Faces and people in general may not be generated properly. - The autoencoding part of the model is lossy. ### Recommendations The model is intended for research purposes only. ## How to Get Started with the Model Check out https://github.com/Stability-AI/generative-models
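Beyond the reference `generative-models` repository, the sketch below assumes the checkpoint can also be loaded through the `diffusers` `StableVideoDiffusionPipeline` (as the repository tags suggest). The repository id and the input image path are illustrative assumptions, not guarantees.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Assumed repository id for SVD 1.1 image-to-video (25 frames).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Conditioning frame; the model was finetuned on 1024x576 inputs.
image = load_image("input_frame.png").resize((1024, 576))

# fps=6 and motion_bucket_id=127 match the fixed finetuning conditions described above.
frames = pipe(image, fps=6, motion_bucket_id=127, decode_chunk_size=8).frames[0]
export_to_video(frames, "generated.mp4", fps=6)
```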
John6666/fluffy-tart-xl-v14-sdxl
John6666
2024-06-26T12:47:52Z
1,275
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "kemono", "furry", "chibi", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-26T12:41:51Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony - kemono - furry - chibi --- Original model is [here](https://civitai.com/models/537240/fluffy-tart-xl?modelVersionId=597239).
John6666/mala-anime-mix-nsfw-pony-xl-v5-sdxl
John6666
2024-06-29T00:47:45Z
1,275
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-29T00:41:09Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/442163/mala-anime-mix-nsfw-ponyxl?modelVersionId=604755).
Helsinki-NLP/opus-mt-fr-de
Helsinki-NLP
2023-08-16T11:36:16Z
1,274
0
transformers
[ "transformers", "pytorch", "tf", "rust", "marian", "text2text-generation", "translation", "fr", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-fr-de * source languages: fr * target languages: de * OPUS readme: [fr-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-de/opus-2020-01-09.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | euelections_dev2019.transformer-align.fr | 26.4 | 0.571 | | newssyscomb2009.fr.de | 22.1 | 0.524 | | news-test2008.fr.de | 22.1 | 0.524 | | newstest2009.fr.de | 21.6 | 0.520 | | newstest2010.fr.de | 22.6 | 0.527 | | newstest2011.fr.de | 21.5 | 0.518 | | newstest2012.fr.de | 22.4 | 0.516 | | newstest2013.fr.de | 24.2 | 0.532 | | newstest2019-frde.fr.de | 27.9 | 0.595 | | Tatoeba.fr.de | 49.1 | 0.676 |
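## Example usage

The card above documents training data and benchmarks only; a minimal usage sketch with the standard `transformers` MarianMT classes (the usual interface for OPUS-MT checkpoints) is shown below. The example sentence is illustrative.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fr-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate French to German.
batch = tokenizer(["Le chat dort sur le canapé."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```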
digiplay/BreakDro_i1464
digiplay
2024-04-06T06:59:40Z
1,274
7
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-06-22T14:33:44Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/28828?modelVersionId=53149 Original Author's DEMO image : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b9f0ead3-c7aa-4863-af4d-e2a755cd7500/width=2304/00042-3140695667.jpeg)
elyza/ELYZA-japanese-CodeLlama-7b-instruct
elyza
2023-11-17T05:01:00Z
1,274
17
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ja", "en", "arxiv:2308.12950", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-07T12:04:07Z
--- license: llama2 language: - ja - en --- ## ELYZA-japanese-CodeLlama-7b ![ELYZA-Japanese-CodeLlama](./key_visual.png) ### Model Description **ELYZA-japanese-CodeLlama-7b** は、 [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)をベースとして日本語能力を拡張するために追加事前学習を行ったモデルです。 詳細は [Blog記事](https://zenn.dev/elyza/articles/fcbf103e0a05b1) を参照してください。 ### Usage ````python import torch from transformers import AutoModelForCausalLM, AutoTokenizer B_INST, E_INST = "[INST]", "[/INST]" B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n" DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。" text = "エラトステネスの篩についてサンプルコードを示し、解説してください。" model_name = "elyza/ELYZA-japanese-CodeLlama-7b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto") if torch.cuda.is_available(): model = model.to("cuda") prompt = "{bos_token}{b_inst} {system}{prompt} {e_inst} ".format( bos_token=tokenizer.bos_token, b_inst=B_INST, system=f"{B_SYS}{DEFAULT_SYSTEM_PROMPT}{E_SYS}", prompt=text, e_inst=E_INST, ) with torch.no_grad(): token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") output_ids = model.generate( token_ids.to(model.device), max_new_tokens=768, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1) :], skip_special_tokens=True) print(output) """ エラトステネスの篩は、素数を探すアルゴリズムの一つです。 以下にそのサンプルコードを示します。 ```python def eratosthenes_sieve(n): sieve = [True] * (n + 1) sieve[0] = sieve[1] = False for i in range(2, int(n ** 0.5) + 1): if sieve[i]: for j in range(i * i, n + 1, i): sieve[j] = False return [i for i in range(n + 1) if sieve[i]] ``` このコードは、エラトステネスの篩を用いて、n以下のすべての素数を求める関数です。 エラトステネスの篩は、以下のようなアルゴリズムで動作します。 1. 2以外のすべての数を素数として扱う 2. 2以外の数のうち、2の倍数をすべて除外する 3. 3以外の数のうち、3の倍数をすべて除外する 4. 5以外の数のうち、5の倍数をすべて除外する 5. 7以外の数のうち、7の倍数をすべて除外する 6. … このアルゴリズムでは、2の倍数、3の倍数、5の倍数、7の倍数…というように、素数の倍数を除外していきます。 このアルゴリズムは、素数の倍数は必ず素数の倍数の倍数となるという性質を利用しているため、非常に効率的です。 """ ```` ### ELYZA-japanese-CodeLlama-7b Models | Model Name | Vocab Size | #Params | |:---------------------------------------------|:----------:|:-------:| |[elyza/ELYZA-japanese-CodeLlama-7b](https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b)| 32016 | 6.27B | |[elyza/ELYZA-japanese-CodeLlama-7b-instruct](https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b-instruct)| 32016 | 6.27B | ### Developers 以下アルファベット順 - [Akira Sasaki](https://huggingface.co/akirasasaki) - [Masato Hirakawa](https://huggingface.co/m-hirakawa) - [Shintaro Horie](https://huggingface.co/e-mon) - [Tomoaki Nakamura](https://huggingface.co/tyoyo) ### Licence Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. 
### How to Cite ```tex @misc{elyzacodellama2023, title={ELYZA-japanese-CodeLlama-7b}, url={https://huggingface.co/elyza/ELYZA-japanese-CodeLlama-7b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura}, year={2023}, } ``` ### Citations ```tex @misc{rozière2023code, title={Code Llama: Open Foundation Models for Code}, author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve}, year={2023}, eprint={2308.12950}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
tastypear/CausalLM-7B-DPO-alpha-GGUF
tastypear
2023-11-26T06:57:52Z
1,274
21
transformers
[ "transformers", "gguf", "llama", "llama2", "qwen", "text-generation", "en", "zh", "dataset:JosephusCheung/GuanacoDataset", "dataset:Open-Orca/OpenOrca", "dataset:stingning/ultrachat", "dataset:meta-math/MetaMathQA", "dataset:liuhaotian/LLaVA-Instruct-150K", "dataset:jondurbin/airoboros-3.1", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "dataset:RyokoAI/ShareGPT52K", "dataset:RyokoAI/Fandom23K", "dataset:milashkaarshif/MoeGirlPedia_wikitext_raw_archive", "dataset:wikipedia", "dataset:wiki_lingua", "dataset:fnlp/moss-003-sft-data", "dataset:garage-bAInd/Open-Platypus", "dataset:LDJnr/Puffin", "dataset:openbmb/llava_zh", "dataset:BAAI/COIG", "dataset:TigerResearch/tigerbot-zhihu-zh-10k", "dataset:liwu/MNBVC", "dataset:teknium/openhermes", "base_model:CausalLM/7B-DPO-alpha", "license:wtfpl", "text-generation-inference", "region:us" ]
text-generation
2023-11-19T15:36:16Z
--- base_model: CausalLM/7B-DPO-alpha datasets: - JosephusCheung/GuanacoDataset - Open-Orca/OpenOrca - stingning/ultrachat - meta-math/MetaMathQA - liuhaotian/LLaVA-Instruct-150K - jondurbin/airoboros-3.1 - WizardLM/WizardLM_evol_instruct_V2_196k - RyokoAI/ShareGPT52K - RyokoAI/Fandom23K - milashkaarshif/MoeGirlPedia_wikitext_raw_archive - wikipedia - wiki_lingua - fnlp/moss-003-sft-data - garage-bAInd/Open-Platypus - LDJnr/Puffin - openbmb/llava_zh - BAAI/COIG - TigerResearch/tigerbot-zhihu-zh-10k - liwu/MNBVC - teknium/openhermes inference: false language: - en - zh license: wtfpl model_creator: CausalLM model_name: CausalLM 7B-DPO-alpha model_type: llama pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: tastypear tags: - llama - llama2 - qwen --- <!-- header start --> I made a quantized version of this model by referring to TheBloke's publishing format and based on the recommendation of TheBloke/CausalLM-7B-GGUF. 我参考 TheBloke 的发布格式,并根据 TheBloke/CausalLM-7B-GGUF 的推荐,制作了这个模型的量化版本。 --- <!-- header end --> <!-- markdownlint-disable MD041 --> # CausalLM 7B-DPO-alpha - GGUF - Model creator: [CausalLM](https://huggingface.co/CausalLM) - Original model: [CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha) <!-- description start --> ## Description This repo contains GGUF format model files for [CausalLM's CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `wtfpl`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [CausalLM's CausalLM 7B-DPO-alpha](https://huggingface.co/CausalLM/7B-DPO-alpha). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | | ---- | ---- | ---- | ---- | | [causallm_7b.Q4_K_M.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q4_K_M.gguf) | Q4_K_M | 4 | 4.77 GB| | [causallm_7b.Q5_K_S.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q5_K_S.gguf) | Q5_K_S | 5 | 5.40 GB| | [causallm_7b.Q5_K_M.gguf](https://huggingface.co/tastypear/CausalLM-7B-DPO-alpha-GGUF/blob/main/causallm_7b-dpo-alpha.Q5_K_M.gguf) | Q5_K_M | 5 | 5.53 GB| <!-- README_GGUF.md-provided-files end --> <!-- footer start --> <!-- original-model-card start --> # Original model card: CausalLM's CausalLM 7B-DPO-alpha For details, please refer to the version without DPO training: [CausalLM/7B](https://huggingface.co/CausalLM/7B). | Model | MT-Bench | | ------------------------- | ------------ | | GPT-4 | 8.99 | | GPT-3.5-Turbo | 7.94 | | | | | Zephyr-7b-β (Overfitting) | 7.34 | | Zephyr-7b-α | 6.88 | | | | | **CausalLM/14B-DPO-α** | **7.618868** | | **CausalLM/7B-DPO-α** | **7.038125** | It should be noted that this is not a version that continues training on CausalLM/14B & 7B, but rather an optimized version that has undergone DPO training concurrently on a previous training branch, and some detailed parameters may have changed. You will still need to download the full model. 
The beta branch will soon be released, employing some aggressive approaches that might be detrimental in certain tasks, in order to achieve better alignment with human preferences, aiming to meet or exceed the GPT-3.5 benchmarks. Stay tuned. Disclaimer: Please note that the model was trained on unfiltered internet data. Since we do not have the capacity to vet all of it, there may be a substantial amount of objectionable content, pornography, violence, and offensive language present that we are unable to remove. Therefore, you will still need to complete your own checks on the model's safety and filter keywords in the output. Due to computational resource constraints, we are presently unable to implement RLHF for the model's ethics and safety, nor training on SFT samples that refuse to answer certain questions for restrictive fine-tuning. 更多详情,请参见未经DPO训练的版本:[CausalLM/14B](https://huggingface.co/CausalLM/14B) 需要注意的是,这并不是在 CausalLM/14B & 7B 上继续训练的版本,而是在之前的训练分支上同时进行了 DPO 训练的优化版本,一些细节参数可能发生了变化。 您仍然需要下载完整模型。 很快将会发布beta分支,采用了一些可能不利于某些任务的激进方法,以实现更好地符合人类偏好以接近和超过GPT-3.5基准。敬请期待。 免责声明:请注意,模型是在未经过滤的互联网数据上进行训练的。由于我们无法审核所有数据,可能会出现大量不良内容、色情、暴力和冒犯性语言,我们无法删除这些内容。因此,您仍然需要对模型的安全性进行自己的检查,并对输出中的关键词进行过滤。由于计算资源的限制,我们目前无法为模型的伦理和安全实施RLHF,也无法对拒绝回答某些问题的SFT样本进行训练以进行限制性微调。 <!-- original-model-card end -->
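## Example: running a provided GGUF file (sketch)

As a concrete starting point with llama-cpp-python (one of the compatible libraries listed earlier), here is a minimal sketch. The file name is taken from the Provided files table above; the context size, GPU-offload setting and example question are illustrative choices, not recommendations from the model author.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="causallm_7b-dpo-alpha.Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,        # illustrative context window
    n_gpu_layers=-1,   # offload all layers if a GPU build of llama.cpp is installed
)

# ChatML prompt, matching the prompt template documented above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of Japan?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```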
state-spaces/mamba-1.4b
state-spaces
2023-12-03T23:44:27Z
1,274
18
transformers
[ "transformers", "pytorch", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-03T23:20:30Z
--- license: apache-2.0 ---
Lewdiculous/llama3-8B-aifeifei-1.1-GGUF-IQ-Imatrix
Lewdiculous
2024-06-06T20:58:17Z
1,274
1
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-06-06T20:10:29Z
--- license: apache-2.0 --- [Model request #41](https://huggingface.co/Lewdiculous/Model-Requests/discussions/41). This model has a narrow use case in mind. Read the original description in their page. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/KYU-ez0vwP-6caL0lCEYJ.png)
setu4993/smaller-LaBSE
setu4993
2023-10-19T06:24:02Z
1,273
12
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "sentence_embedding", "multilingual", "google", "sentence-similarity", "labse", "ar", "de", "en", "es", "fr", "it", "ja", "ko", "nl", "pl", "pt", "ru", "th", "tr", "zh", "dataset:CommonCrawl", "dataset:Wikipedia", "arxiv:2010.05609", "arxiv:2007.01852", "license:apache-2.0", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2022-03-02T23:29:05Z
--- pipeline_tag: sentence-similarity language: - ar - de - en - es - fr - it - ja - ko - nl - pl - pt - ru - th - tr - zh tags: - bert - sentence_embedding - multilingual - google - sentence-similarity - labse license: apache-2.0 datasets: - CommonCrawl - Wikipedia --- # LaBSE ## Model description Smaller Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model distilled from the [original LaBSE model](https://huggingface.co/setu4993/LaBSE) to 15 languages (from the original 109 languages) using the techniques described in the paper ['Load What You Need: Smaller Versions of Multilingual BERT'](https://arxiv.org/abs/2010.05609) by [Ukjae Jeong](https://github.com/jeongukjae/). - Model: [HuggingFace's model hub](https://huggingface.co/setu4993/smaller-LaBSE). - Original model: [TensorFlow Hub](https://tfhub.dev/jeongukjae/smaller_LaBSE_15lang/1). - Distillation source: [GitHub](https://github.com/jeongukjae/smaller-labse). - Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt). ## Usage Using the model: ```python import torch from transformers import BertModel, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("setu4993/smaller-LaBSE") model = BertModel.from_pretrained("setu4993/smaller-LaBSE") model = model.eval() english_sentences = [ "dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog.", ] english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True) with torch.no_grad(): english_outputs = model(**english_inputs) ``` To get the sentence embeddings, use the pooler output: ```python english_embeddings = english_outputs.pooler_output ``` Output for other languages: ```python italian_sentences = [ "cane", "I cuccioli sono carini.", "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.", ] japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"] italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True) japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True) with torch.no_grad(): italian_outputs = model(**italian_inputs) japanese_outputs = model(**japanese_inputs) italian_embeddings = italian_outputs.pooler_output japanese_embeddings = japanese_outputs.pooler_output ``` For similarity between sentences, an L2-norm is recommended before calculating the similarity: ```python import torch.nn.functional as F def similarity(embeddings_1, embeddings_2): normalized_embeddings_1 = F.normalize(embeddings_1, p=2) normalized_embeddings_2 = F.normalize(embeddings_2, p=2) return torch.matmul( normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1) ) print(similarity(english_embeddings, italian_embeddings)) print(similarity(english_embeddings, japanese_embeddings)) print(similarity(italian_embeddings, japanese_embeddings)) ``` ## Details Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2007.01852). ### BibTeX entry and citation info ```bibtex @misc{feng2020languageagnostic, title={Language-agnostic BERT Sentence Embedding}, author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang}, year={2020}, eprint={2007.01852}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
lunarfish/furrydiffusion
lunarfish
2023-05-16T09:40:51Z
1,273
55
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "furry", "anything-v3.0", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-26T23:17:54Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - furry - anything-v3.0 --- ![images](https://cdn.discordapp.com/attachments/1050047774315532300/1057079481581445230/grid-0005.png) FurryDiffusion is a model made to generate furry art; it is still very much in beta and will keep improving! To use it, make sure to include `furry` in your prompt; to generate a specific breed, add the breed name. Example Prompts: ``` Positive: highres, furry, fox, orange fur, blue eyes Negative: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, blurry ``` Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) **NOTE**: It's better to run it in Google Colab since you can use Google's powerful GPUs for free. Go ahead and try it now!
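If you prefer to run it locally instead of in Colab, a minimal `diffusers` sketch (assuming the checkpoint loads with the standard `StableDiffusionPipeline`, as the repository tags indicate) looks like this:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lunarfish/furrydiffusion", torch_dtype=torch.float16
)
pipe.to("cuda")

prompt = "highres, furry, fox, orange fur, blue eyes"
negative = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, blurry"
image = pipe(prompt, negative_prompt=negative, num_inference_steps=30).images[0]
image.save("fox.png")
```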
TheTravellingEngineer/bloom-1b1-RLHF
TheTravellingEngineer
2023-08-03T10:16:07Z
1,273
0
transformers
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2023-08-03T10:03:08Z
The base model is bigscience/bloom-1b1. It was fine-tuned using RLHF; the dataset and the model prompt are similar to those of the original model. This repo contains the merged fp16 model. **Legal Disclaimer: This model is bound by the usage restrictions of the original BLOOM model and comes with no warranty or guarantees of any kind.** --- - license: - bloom <br> - datasets: - timdettmers/openassistant-guanaco <br> - language: - en <br> - reference: https://github.com/hiyouga/LLaMA-Efficient-Tuning/tree/main ---
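A minimal loading sketch with standard `transformers` classes is shown below. The repository name comes from this model page; the "### Human / ### Assistant" prompt format is an assumption based on the openassistant-guanaco dataset listed above, and the question is purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TheTravellingEngineer/bloom-1b1-RLHF"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed Guanaco-style prompt; adjust if the original prompt format differs.
prompt = "### Human: What is the capital of France?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```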
LykosAI/GPT-Prompt-Expansion-Fooocus-v2
LykosAI
2023-11-25T23:46:03Z
1,273
4
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-25T23:33:23Z
--- license: agpl-3.0 --- GPT2 Prompt Expansion model from [lllyasviel/Fooocus](https://github.com/lllyasviel/Fooocus) Third-party [license terms](https://github.com/lllyasviel/Fooocus/blob/main/LICENSE) ## Disclaimer All trademarks, logos, and brand names are the property of their respective owners. All company, product and service names used in this website and licensed applications are for identification purposes only. Use of these names, trademarks, and brands does not imply endorsement.
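## Usage (sketch)

Since this is a GPT-2 checkpoint published for text generation, a minimal sketch for expanding a short prompt with the `transformers` pipeline is shown below; the seed prompt and sampling settings are illustrative and do not reproduce Fooocus's exact expansion logic.

```python
from transformers import pipeline

expander = pipeline("text-generation", model="LykosAI/GPT-Prompt-Expansion-Fooocus-v2")

seed_prompt = "a cozy cabin in the woods"
result = expander(seed_prompt, max_new_tokens=64, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```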
PipableAI/pip-sql-1.3b
PipableAI
2024-03-27T06:02:04Z
1,273
75
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "sql", "code", "text2sql", "instruction_tuned", "basemodel", "jax", "text-generation-inference", "conversational", "en", "dataset:PipableAI/pip-txt-to-sql-spider-bird-dataset", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-02-14T18:58:05Z
--- license: apache-2.0 datasets: - PipableAI/pip-txt-to-sql-spider-bird-dataset language: - en metrics: - accuracy tags: - sql - code - text2sql - instruction_tuned - basemodel - jax - pytorch - text-generation-inference library_name: transformers pipeline_tag: text-generation widget: - text: >- <schema>CREATE TABLE system(JobID: String,GID: String, UID: String, Start:Time(yyyy/mm/dd), End: Time,ElapsedRaw: Time, CPUTimeRAW: Time,NCPUS: Number,NNodes: Number, NodeList: List, State:String, Timelimit: Time);</schema><question>Get UID and job id for Jobs that started on Jan 20 , 2023 ended on feb 14 2023 and has job id 20</question><sql> example_title: example --- # pipSQL-1.3b [pipableAi](https://www.linkedin.com/company/pipable.ai/about/) [colab_notebook](https://colab.research.google.com/drive/1insSxvc3jjAXe0zmdIjmbG3ttb5mpRgQ?usp=sharing) ## What have we built? A 1.3 bn SQL model that outperforms most SQL expert models and chatgpt on popular benchmarks. This is a distilled model built on the deepseek base model. Please refer to https://huggingface.co/PipableAI/pip-library-etl-1.3b for our state of the art model. ## How we built it? We used softmax cross entropy and a modified form of policy grad along with Q loss, optimized in an EM set up. Loss behaviour in the set up mentioned above - ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658d8095a2a6a6e0da8bb8a6/I80Ru1r4thoYrLagIWALa.png) ## Benchmarking : For benchmarking purposes we are using Semantic Evaluation for Text-to-SQL with Distilled Test Suites, an officially accepted evaluation framework for Spider, SParC, and CoSQL which was proposed by a research team of Yale and Berkeley. The benchmark contains 2200 test data points Here is the link to run the evaluation: [Test Suite SQL Eval](https://github.com/taoyds/test-suite-sql-eval) |model|easy|medium|hard|extra| |-----|----|------|----|-----| |sqlcoder-7b-2|72.0|58.0|40.6|37.3| |pipSQL-1.3b|78.5|57.5|42.1|28.3| |pipSQL-7b|63.0|40.0|30.2|25.0| |sqlcoder-7b|60.6|48.2|28.3|20.4| |gpt-3.5|58.8|44.7|31.0|28.4| We have also benchmarked it on defog eval. It contains 200 test data points handpicked by defog team. Here is the link to it: [Defog SQL-Eval](https://github.com/defog-ai/sql-eval) These are the results - ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d32c6b921678fdc9de3302/fFeLSEYBNpQk_JWjFsF5M.png) ## License The model is open source under apache 2.0. 
## Usage ### Installation ```bash pip install transformers ``` ### Prompt ```python prompt = f"""<schema>{schema}</schema> <question>{question}</question> <sql>""" ``` Here `schema` and `question` are Python strings holding your DDL and the natural-language question. ### PyTorch ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained("PipableAI/pip-sql-1.3b") tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-sql-1.3b") inputs = tokenizer(prompt, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True).split('<sql>')[1].split('</sql>')[0]) ``` ### Flax ```python from transformers import FlaxAutoModelForCausalLM, AutoTokenizer device = "cuda" model = FlaxAutoModelForCausalLM.from_pretrained("PipableAI/pip-sql-1.3b",from_pt=True) tokenizer = AutoTokenizer.from_pretrained("PipableAI/pip-sql-1.3b") inputs = tokenizer(prompt, return_tensors="jax") outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True).split('<sql>')[1].split('</sql>')[0]) ``` ## Examples ### Schema ```sql CREATE TABLE Products ( product_id number, parent_product_id number, product_name text, product_price number, product_color text, product_size text, product_description text); CREATE TABLE Customers ( customer_id number, gender_code text, customer_first_name text, customer_middle_initial text, customer_last_name text, email_address text, login_name text, login_password text, phone_number text, address_line_1 text, town_city text, county text, country text); CREATE TABLE Customer_Payment_Methods ( customer_id number, payment_method_code text); CREATE TABLE Invoices ( invoice_number number, invoice_status_code text, invoice_date time); CREATE TABLE Orders ( order_id number, customer_id number, order_status_code text, date_order_placed time); CREATE TABLE Order_Items ( order_item_id number, product_id number, order_id number, order_item_status_code text); CREATE TABLE Shipments ( shipment_id number, order_id number, invoice_number number, shipment_tracking_number text, shipment_date time); CREATE TABLE Shipment_Items ( shipment_id number, order_item_id number); ``` ### Questions What are the email address, town and county of the customers who are of the least common gender? ```sql SELECT email_address , town_city , county FROM customers GROUP BY gender_code ORDER BY count(*) ASC LIMIT 1 ``` What are the product price and the product size of the products whose price is above average? ```sql SELECT product_price , product_size FROM products WHERE product_price > (SELECT avg(product_price) FROM products) ``` Which customers did not make any orders? List the first name, middle initial and last name. ```sql SELECT T1.customer_first_name , T1.customer_middle_initial , T1.customer_last_name FROM Customers AS T1 WHERE T1.customer_id NOT IN (SELECT T2.customer_id FROM Orders AS T2) ``` ### Team Avi Kothari, Pratham Gupta, Ritvik Aryan Kalra, Rohan Bhatial, Soham Acharya
mii-llm/maestrale-chat-v0.2-alpha-sft
mii-llm
2024-01-25T23:38:33Z
1,272
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "sft", "it", "chatml", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-24T21:00:06Z
--- tags: - sft - it - mistral - chatml model-index: - name: maestrale-chat-v0.2-alpha results: [] license: cc-by-nc-4.0 language: - it prompt_template: >- <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/8eqbpHp.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p> </div> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Maestrale chat alpha ༄ By @efederici and @mferraretto ## Model description - **Language Model**: Mistral-7b for the Italian language, continued pre-training for Italian on a curated large-scale high-quality corpus. - **Fine-Tuning**: SFT performed on ~270k Italian convs/instructions for one epoch. This model uses ChatML prompt format: ``` <|im_start|>system Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Usage: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GenerationConfig, TextStreamer ) import torch torch.backends.cuda.matmul.allow_tf32 = True tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.2-alpha") model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.2-alpha", load_in_8bit=True, device_map="auto") gen = GenerationConfig( do_sample=True, temperature=0.7, repetition_penalty=1.2, top_k=50, top_p=0.95, max_new_tokens=500, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>") ) messages = [ {"role": "system", "content": "Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività."}, {"role": "user", "content": "{prompt}"} ] with torch.no_grad(), torch.backends.cuda.sdp_kernel( enable_flash=True, enable_math=False, enable_mem_efficient=False ): temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(temp, return_tensors="pt").to("cuda") streamer = TextStreamer(tokenizer, skip_prompt=True) _ = model.generate( **inputs, streamer=streamer, generation_config=gen ) ``` ## Intended uses & limitations It's an alpha version, it's not `aligned`. We are working on alignment data and evals.
ConvexAI/Pelican-9b-v0.1
ConvexAI
2024-03-04T16:33:20Z
1,272
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:flemmingmiguel/MBX-7B", "base_model:flemmingmiguel/MBX-7B-v3", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-02T12:44:05Z
--- license: apache-2.0 tags: - mergekit - merge base_model: - flemmingmiguel/MBX-7B - flemmingmiguel/MBX-7B-v3 model-index: - name: Pelican-9b-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 47.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 66.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 50.61 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ⚠️**Warning** ⚠️ Model is broken and outputs only broken german. Possibly obsessed with Fußball. ⚽ ### Merge Method This model was merged using the passthrough merge method and only speaks german, somewhat obsessed with football. ### Models Merged The following models were included in the merge: * [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: flemmingmiguel/MBX-7B-v3 layer_range: [0, 32] - sources: - model: flemmingmiguel/MBX-7B layer_range: [20, 32] merge_method: passthrough dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__Pelican-9b-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |50.38| |AI2 Reasoning Challenge (25-Shot)|47.95| |HellaSwag (10-Shot) |66.22| |MMLU (5-Shot) |62.85| |TruthfulQA (0-shot) |50.61| |Winogrande (5-shot) |74.66| |GSM8k (5-shot) | 0.00|
M4-ai/NeuralReyna-Mini-1.8B-v0.3
M4-ai
2024-04-11T00:52:45Z
1,272
10
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:argilla/OpenHermes2.5-dpo-binarized-alpha", "dataset:Locutusque/Hercules-v3.0", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-18T09:36:18Z
--- language: - en license: apache-2.0 datasets: - argilla/OpenHermes2.5-dpo-binarized-alpha - Locutusque/Hercules-v3.0 inference: parameters: do_sample: true temperature: 0.8 top_p: 0.95 top_k: 40 min_new_tokens: 2 max_new_tokens: 250 repetition_penalty: 1.1 model-index: - name: NeuralReyna-Mini-1.8B-v0.3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 35.58 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 61.13 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 44.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 60.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 6.75 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.3 name: Open LLM Leaderboard --- # NeuralReyna-Mini-1.8B-v0.3 ![Reyna image](https://th.bing.com/th/id/OIG3.8IBxuT77hh6Y_r1DZ6WK?dpr=2.6&pid=ImgDetMain) # Description Taken aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tuned it using DPO using the argilla/OpenHermes2.5-dpo-binarized-alpha. This model has capabilities in coding, math, science, roleplay, and function calling. This model was trained on OpenAI's ChatML prompt format. # Quants HQQ - https://huggingface.co/twoxfh/NeuralReyna-Mini-hqq-1.8B-v0.3 # Evaluation Coming soon # Contributions Thanks to @aloobun and @Locutusque for their contributions to this model. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__NeuralReyna-Mini-1.8B-v0.3) | Metric |Value| |---------------------------------|----:| |Avg. |41.77| |AI2 Reasoning Challenge (25-Shot)|35.58| |HellaSwag (10-Shot) |61.13| |MMLU (5-Shot) |44.22| |TruthfulQA (0-shot) |41.99| |Winogrande (5-shot) |60.93| |GSM8k (5-shot) | 6.75|
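# Usage sketch

Since the model was trained with the ChatML prompt format, a minimal sketch using `transformers` is shown below, assuming the repository tokenizer ships a matching chat template; the system and user messages are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/NeuralReyna-Mini-1.8B-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
]
# ChatML formatting is delegated to the tokenizer's chat template.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=250, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```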
FinancialSupport/hellfire-2b
FinancialSupport
2024-05-27T12:27:51Z
1,272
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-2b-it-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-03-21T07:28:23Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - trl base_model: unsloth/gemma-2b-it-bnb-4bit --- ## SQuAD-it Evaluation The Stanford Question Answering Dataset (SQuAD) in Italian (SQuAD-it) is used to evaluate the model's reading comprehension and question-answering capabilities. The following table presents the F1 score and Exact Match (EM) metrics: | Model | F1 Score | Exact Match (EM) | |----------------------------------------------|--------------|----------------------| | **FinancialSupport/hellfire-2b** | **44.06%** | **26.27%** | ## How to Use How to use hellfire-2b ```python import os from unsloth import FastLanguageModel import torch os.environ['TOKENIZERS_PARALLELISM'] = 'TRUE' model, tokenizer = FastLanguageModel.from_pretrained( model_name = "FinancialSupport/hellfire-2b", max_seq_length = 10000, dtype = torch.bfloat16, load_in_4bit = True, ) FastLanguageModel.for_inference(model) # Enable native 2x faster inference alpaca_prompt = """ Di seguito ti verrà fornito un contesto e poi una domanda. il tuo compito è quello di rispondere alla domanda basandoti esclusivamente sul contesto ### Contesto: {} ### Domanda: {} ### Risposta: {} """ inputs = tokenizer( [ alpaca_prompt.format( "La torre degli Asinelli è una delle cosiddette due torri di Bologna, simbolo della città, situate in piazza di porta Ravegnana, all'incrocio tra le antiche strade San Donato (ora via Zamboni), San Vitale, Maggiore e Castiglione. Eretta, secondo la tradizione, fra il 1109 e il 1119 dal nobile Gherardo Asinelli, la torre è alta 97,20 metri, pende verso ovest per 2,23 metri e presenta all'interno una scalinata composta da 498 gradini. Ancora non si può dire con certezza quando e da chi fu costruita la torre degli Asinelli. Si presume che la torre debba il proprio nome a Gherardo Asinelli, il nobile cavaliere di fazione ghibellina al quale se ne attribuisce la costruzione, iniziata secondo una consolidata tradizione l'11 ottobre 1109 e terminata dieci anni dopo, nel 1119.", # instruction "Quale è alta la torre degli Asinelli?", # input "", # output - leave this blank for generation! ) ], return_tensors = "pt").to("cuda") outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = False) print(tokenizer.batch_decode(outputs)) ``` # Uploaded model - **Developed by:** FinancialSupport - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-it-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
predibase/Meta-Llama-3-8B-dequantized
predibase
2024-05-03T16:35:40Z
1,272
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-02T21:52:03Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - llama base_model: meta-llama/Meta-Llama-3-8B ---
skumar9/Llama-medx_v3.2
skumar9
2024-05-04T19:43:55Z
1,272
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-04T19:37:17Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ludocomito/Minerva-MoE-2x3B
ludocomito
2024-05-14T12:05:44Z
1,272
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "DeepMount00/Minerva-3B-base-RAG", "FairMind/Minerva-3B-Instruct-v1.0", "base_model:DeepMount00/Minerva-3B-base-RAG", "base_model:FairMind/Minerva-3B-Instruct-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T17:22:01Z
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - DeepMount00/Minerva-3B-base-RAG - FairMind/Minerva-3B-Instruct-v1.0 base_model: - DeepMount00/Minerva-3B-base-RAG - FairMind/Minerva-3B-Instruct-v1.0 --- # Minerva-MoE-3x3B Minerva-MoE-3x3B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [DeepMount00/Minerva-3B-base-RAG](https://huggingface.co/DeepMount00/Minerva-3B-base-RAG) * [FairMind/Minerva-3B-Instruct-v1.0](https://huggingface.co/FairMind/Minerva-3B-Instruct-v1.0) ## Evaluation arc_it acc_norm: 31.91 hellaswag_it acc_norm: 52.20 mmmlu_it: 25.72 ## 🧩 Configuration ```yaml base_model: sapienzanlp/Minerva-3B-base-v1.0 experts: - source_model: DeepMount00/Minerva-3B-base-RAG positive_prompts: - "rispondi a domande" - "cosa è" - "chi è" - "dove è" - "come si" - "spiegami" - "definisci" - source_model: FairMind/Minerva-3B-Instruct-v1.0 positive_prompts: - "istruzione" - "input" - "risposta" - "scrivi" - "sequenza" - "istruzioni" dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "ludocomito/Minerva-MoE-3x3B" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
pyannote/speech-separation-ami-1.0
pyannote
2024-06-11T14:40:27Z
1,272
13
pyannote-audio
[ "pyannote-audio", "pyannote", "pyannote-audio-pipeline", "audio", "voice", "speech", "speaker", "speaker-diarization", "speaker-separation", "speech-separation", "arxiv:2403.02288", "license:mit", "region:us" ]
null
2024-05-28T08:17:37Z
--- tags: - pyannote - pyannote-audio - pyannote-audio-pipeline - audio - voice - speech - speaker - speaker-diarization - speaker-separation - speech-separation license: mit extra_gated_prompt: "The collected information will help acquire a better knowledge of the pyannote.audio userbase and help its maintainers improve it further. Though this pipeline uses the MIT license and will always remain open-source, we will occasionally email you about premium pipelines and paid services around pyannote." extra_gated_fields: Company/university: text Website: text --- Using this open-source pipeline in production? Consider switching to [pyannoteAI](https://www.pyannote.ai) for better and faster options. # 🎹 PixIT / joint speaker diarization and speech separation This pipeline ingests mono audio sampled at 16kHz and outputs speaker diarization as an [`Annotation`](http://pyannote.github.io/pyannote-core/structure.html#annotation) instance and speech separation as a [`SlidingWindowFeature`](http://pyannote.github.io/pyannote-core/reference.html#pyannote.core.SlidingWindowFeature). Audio files sampled at a different rate are resampled to 16kHz automatically upon loading. ![Pipeline](pipeline.png) It has been trained by [Joonas Kalda](https://www.linkedin.com/in/joonas-kalda-996499133) with [pyannote.audio](https://github.com/pyannote/pyannote-audio) `3.3.0` using the [AMI](https://groups.inf.ed.ac.uk/ami/corpus/) dataset (single distant microphone, SDM). This [paper](https://arxiv.org/abs/2403.02288) and its [companion repository](https://github.com/joonaskalda/PixIT) describe the approach in more detail. ## Requirements 1. Install [`pyannote.audio`](https://github.com/pyannote/pyannote-audio) `3.3.0` with `pip install pyannote.audio[separation]==3.3.0` 2. Accept [`pyannote/separation-ami-1.0`](https://hf.co/pyannote/separation-ami-1.0) user conditions 3. Accept [`pyannote/speech-separation-ami-1.0`](https://hf.co/pyannote/speech-separation-ami-1.0) user conditions 4. Create access token at [`hf.co/settings/tokens`](https://hf.co/settings/tokens). ## Usage ```python # instantiate the pipeline from pyannote.audio import Pipeline pipeline = Pipeline.from_pretrained( "pyannote/speech-separation-ami-1.0", use_auth_token="HUGGINGFACE_ACCESS_TOKEN_GOES_HERE") # run the pipeline on an audio file diarization, sources = pipeline("audio.wav") # dump the diarization output to disk using RTTM format with open("audio.rttm", "w") as rttm: diarization.write_rttm(rttm) # dump sources to disk as SPEAKER_XX.wav files import scipy.io.wavfile for s, speaker in enumerate(diarization.labels()): scipy.io.wavfile.write(f'{speaker}.wav', 16000, sources.data[:,s]) ``` ### Processing on GPU `pyannote.audio` pipelines run on CPU by default. 
You can send them to GPU with the following lines: ```python import torch pipeline.to(torch.device("cuda")) ``` ### Processing from memory Pre-loading audio files in memory may result in faster processing: ```python import torchaudio waveform, sample_rate = torchaudio.load("audio.wav") diarization = pipeline({"waveform": waveform, "sample_rate": sample_rate}) ``` ### Monitoring progress Hooks are available to monitor the progress of the pipeline: ```python from pyannote.audio.pipelines.utils.hook import ProgressHook with ProgressHook() as hook: diarization = pipeline("audio.wav", hook=hook) ``` ## Citations ```bibtex @inproceedings{Kalda24, author={Joonas Kalda and Clément Pagés and Ricard Marxer and Tanel Alumäe and Hervé Bredin}, title={{PixIT: Joint Training of Speaker Diarization and Speech Separation from Real-world Multi-speaker Recordings}}, year=2024, booktitle={Proc. Odyssey 2024}, } ``` ```bibtex @inproceedings{Bredin23, author={Hervé Bredin}, title={{pyannote.audio 2.1 speaker diarization pipeline: principle, benchmark, and recipe}}, year=2023, booktitle={Proc. INTERSPEECH 2023}, } ```
sdadas/polish-roberta-large-v2
sdadas
2024-04-23T15:32:52Z
1,271
5
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "pl", "license:lgpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: lgpl-3.0 language: - pl ---
wavymulder/portraitplus
wavymulder
2023-05-05T21:59:07Z
1,271
292
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-23T16:04:26Z
--- language: - en thumbnail: "https://huggingface.co/wavymulder/portraitplus/resolve/main/imgs/page1.jpg" license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - safetensors - diffusers inference: true --- **Portrait+** ![Header](https://huggingface.co/wavymulder/portraitplus/resolve/main/imgs/page1.jpg) [*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/portraitplus/resolve/main/portrait%2B1.0.ckpt) - this is a dreambooth model trained on a diverse set of close to medium range portraits of people. Use `portrait+ style` in your prompt (I recommend at the start) The goal was to create a model with a consistent portrait composition and consistent eyes. See the batch example below for the consistency of the model's eyes. This model can do several styles, so you'll want to guide it along depending on your goals. Note below in the document that prompting celebrities works a bit differently than prompting generic characters, since real people have a more photoreal presence in the base 1.5 model. Also note that fantasy concepts, like cyberpunk people or wizards, will require more rigid prompting for photoreal styles than something common like a person in a park. Portrait+ works best at a 1:1 aspect ratio, though I've had success with tall aspect ratios as well. Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images above.](https://huggingface.co/wavymulder/portraitplus/resolve/main/parameters_for_samples.txt) We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run portraitplus: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/wavymulder/portraitplus) ![Modification example](https://huggingface.co/wavymulder/portraitplus/resolve/main/imgs/page2.jpg) ![Batch example](https://huggingface.co/wavymulder/portraitplus/resolve/main/imgs/batchgrid.jpg)
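For 🧨 diffusers users, a minimal sketch along the following lines should work (the repository id and the `portrait+ style` trigger come from this card; the step count and guidance scale are illustrative assumptions, not the author's recommended settings):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the diffusers weights from this repository (assumed available, per the model tags)
pipe = StableDiffusionPipeline.from_pretrained(
    "wavymulder/portraitplus", torch_dtype=torch.float16
).to("cuda")

# 'portrait+ style' is the trigger phrase recommended at the start of the prompt
prompt = "portrait+ style, a close-up portrait of an astronaut, natural lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("portrait.png")
```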
timm/hrnet_w64.ms_in1k
timm
2023-04-24T21:35:57Z
1,271
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1908.07919", "license:mit", "region:us" ]
image-classification
2023-04-24T21:33:54Z
--- tags: - image-classification - timm library_name: timm license: mit datasets: - imagenet-1k --- # Model card for hrnet_w64.ms_in1k A HRNet image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 128.1 - GMACs: 29.0 - Activations (M): 35.1 - Image size: 224 x 224 - **Papers:** - Deep High-Resolution Representation Learning for Visual Recognition: https://arxiv.org/abs/1908.07919 - **Original:** https://github.com/HRNet/HRNet-Image-Classification - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('hrnet_w64.ms_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'hrnet_w64.ms_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 128, 56, 56]) # torch.Size([1, 256, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'hrnet_w64.ms_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{WangSCJDZLMTWLX19, title={Deep High-Resolution Representation Learning for Visual Recognition}, author={Jingdong Wang and Ke Sun and Tianheng Cheng and Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao}, journal={TPAMI}, year={2019} } ```
osiria/distilbert-base-italian-cased
osiria
2023-06-09T18:06:51Z
1,271
1
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "fill-mask", "it", "arxiv:1910.01108", "arxiv:2010.05609", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-01T16:04:32Z
--- license: apache-2.0 language: - it --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DistilBERT</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>DistilBERT</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using the multilingual <b>DistilBERT</b> ([distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset). The resulting model has 67M parameters, a vocabulary of 30.785 tokens, and a size of ~270 MB. <h3>Quick usage</h3> ```python from transformers import DistilBertTokenizerFast, DistilBertModel tokenizer = DistilBertTokenizerFast.from_pretrained("osiria/distilbert-base-italian-cased") model = DistilBertModel.from_pretrained("osiria/distilbert-base-italian-cased") ``` <h3>References</h3> [1] https://arxiv.org/abs/1910.01108 [2] https://arxiv.org/abs/2010.05609 <h3>License</h3> The model is released under the <b>Apache-2.0</b> license
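As a hedged follow-up to the quick-usage snippet, masked-token prediction can be sketched with the `fill-mask` pipeline (the example sentence is an arbitrary assumption, not from this card):

```python
from transformers import pipeline

# fill-mask pipeline over this checkpoint; [MASK] is the BERT-style mask token
fill_mask = pipeline("fill-mask", model="osiria/distilbert-base-italian-cased")

# illustrative Italian sentence with one masked token
for prediction in fill_mask("Roma è la [MASK] d'Italia."):
    print(prediction["token_str"], round(prediction["score"], 3))
```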
KnutJaegersberg/RWKV-4-PilePlus-1B5-20230520-2942-486Gtokens-ctx4096
KnutJaegersberg
2023-11-18T07:42:43Z
1,271
0
transformers
[ "transformers", "pytorch", "rwkv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-09-05T09:09:08Z
--- license: apache-2.0 --- This is just a standard conversion to hf transformers format of models from here: https://huggingface.co/BlinkDL/rwkv-4-pileplus According to the documentation I found, this model should have seen around 0.8 trillion tokens! # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__RWKV-4-PilePlus-1B5-20230520-2942-486Gtokens-ctx4096) | Metric | Value | |-----------------------|---------------------------| | Avg. | 28.77 | | ARC (25-shot) | 30.63 | | HellaSwag (10-shot) | 52.63 | | MMLU (5-shot) | 25.04 | | TruthfulQA (0-shot) | 34.96 | | Winogrande (5-shot) | 52.8 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 5.33 |
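Since the checkpoint is in standard transformers format (the `rwkv` architecture), generation should work roughly as sketched below; this is an untested example that assumes the repository ships tokenizer files, and the prompt and generation length are chosen arbitrarily:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "KnutJaegersberg/RWKV-4-PilePlus-1B5-20230520-2942-486Gtokens-ctx4096"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes tokenizer files are included in the repo
model = AutoModelForCausalLM.from_pretrained(repo)

# simple greedy continuation of an arbitrary prompt
inputs = tokenizer("The Pile is a large dataset of", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```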
VAIBHAV22334455/JARVIS
VAIBHAV22334455
2024-03-29T05:48:00Z
1,271
8
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "code", "en", "hi", "dataset:fka/awesome-chatgpt-prompts", "dataset:DIBT/10k_prompts_ranked", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-26T08:41:36Z
--- language: - en - hi license: apache-2.0 datasets: - fka/awesome-chatgpt-prompts - DIBT/10k_prompts_ranked metrics: - bleu pipeline_tag: text-generation model-index: - name: JARVIS results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 32.08 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 56.86 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 27.15 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 37.33 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 60.14 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 1.14 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAIBHAV22334455/JARVIS name: Open LLM Leaderboard tags: - code --- # Model Card for Model ID Overview This model is a conversational AI designed to engage in natural language interactions with users. It is based on the Causal Language Modeling (CLM) architecture and has been fine-tuned on conversational datasets to generate coherent and contextually relevant responses. Usage To use this model, you can interact with it via the Hugging Face Inference API. Provide a text prompt, and the model will generate a response based on the given input. Intended Use This model is intended for various conversational applications, including chatbots, virtual assistants, and dialogue systems. It can be deployed in environments where human-like interactions are required, such as customer service, educational platforms, or entertainment applications. Limitations and Ethical Considerations While this model is capable of generating human-like responses, it may occasionally produce outputs that are inappropriate, offensive, or misleading. It is essential to monitor its responses and ensure responsible deployment to mitigate potential harms. License The model is released under the Apache License 2.0, which allows for both commercial and non-commercial use with proper attribution. 
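Beyond the Inference API mentioned above, local usage can be sketched as follows (the checkpoint is a `gpt_neox` causal LM per the repository tags; the prompt format and sampling settings here are assumptions, not a documented template):

```python
from transformers import pipeline

# text-generation pipeline over this repository's checkpoint
generator = pipeline("text-generation", model="VAIBHAV22334455/JARVIS")

# illustrative prompt; the model card does not specify a required format
outputs = generator(
    "User: What can you help me with?\nAssistant:",
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
)
print(outputs[0]["generated_text"])
```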
Acknowledgments This model was trained using the Hugging Face Transformers library and fine-tuned on conversational datasets. We acknowledge the contributions of the open-source community and the developers of the Transformers library. Contact Information For inquiries or feedback regarding this model, please contact [your contact information]. References Provide any relevant references, citations, or links to resources used in training or developing this model. ## Model Details ### Model Description This model is a state-of-the-art conversational AI system based on the Causal Language Modeling (CLM) architecture. It has been fine-tuned on large-scale conversational datasets to generate contextually relevant and coherent responses to user inputs. The model utilizes self-attention mechanisms and deep neural networks to understand and process natural language inputs, allowing it to engage in human-like conversations across a wide range of topics and contexts. Architecture The architecture of this model consists of multiple layers of transformer blocks, including self-attention mechanisms and feed-forward neural networks. It employs techniques such as positional encoding and layer normalization to enhance its ability to capture and process sequential information in text data. The model's parameters are optimized through training on conversational datasets using techniques such as gradient descent and backpropagation. Fine-Tuning During the fine-tuning process, the model is trained on conversational datasets, where it learns to generate appropriate responses based on input prompts. Fine-tuning involves adjusting the parameters of the pre-trained model to better suit the conversational task at hand, thereby improving its performance in generating contextually relevant and coherent responses. Performance The performance of this model is evaluated based on various metrics, including fluency, coherence, relevance, and engagement. It has been extensively tested on benchmark datasets and real-world conversational applications to assess its ability to produce human-like responses and maintain meaningful interactions with users. Use Cases This model can be deployed in a variety of conversational applications, including chatbots, virtual assistants, customer support systems, and interactive storytelling platforms. It can facilitate natural language interactions between users and systems, enhancing user experience and providing valuable assistance across different domains and industries. Limitations and Ethical Considerations While this model demonstrates advanced capabilities in generating human-like responses, it may occasionally produce outputs that are inappropriate, biased, or misleading. Careful monitoring and evaluation are necessary to ensure responsible deployment and mitigate potential risks, such as spreading misinformation or perpetuating harmful stereotypes. License The model is released under the Apache License 2.0, allowing for both commercial and non-commercial use with proper attribution. Contact Information For inquiries or feedback regarding this model, please contact [your contact information]. References Provide any relevant references, citations, or links to resources used in training or developing this model. 
- **Developed by:** [VAIBHAV VERMA] - **Model type:** [conversational AI] - **Language(s) (NLP):** [PYTHON] - **License:** [Apache License 2.0] - INSPIRED BY [OEvortex/vortex-3b ] ## Uses The model can be utilized in various conversational applications across different domains and industries. Some potential uses include: Chatbots: Deploy the model as a chatbot to engage with users in natural language conversations, providing assistance, answering questions, and offering recommendations. Virtual Assistants: Integrate the model into virtual assistant applications to help users with tasks such as scheduling appointments, setting reminders, and retrieving information from the web. Customer Support Systems: Use the model to power customer support chat systems, where it can handle customer inquiries, troubleshoot issues, and escalate complex queries to human agents when necessary. Interactive Storytelling: Employ the model in interactive storytelling platforms to create immersive narrative experiences where users can engage with virtual characters and influence the plot through their interactions. Language Learning: Develop language learning applications that leverage the model to provide conversational practice and feedback to learners, helping them improve their language skills through realistic dialogue simulations. Social Media Engagement: Integrate the model into social media platforms to enhance user engagement by enabling automated responses to comments, messages, and posts, personalized recommendations, and conversational interactions. Healthcare Assistants: Adapt the model for use in healthcare applications, where it can assist patients with medical inquiries, provide health-related information, and offer support for mental health and wellness. Educational Tools: Incorporate the model into educational applications to create interactive tutoring systems, virtual classroom assistants, and language practice tools that engage students in conversational learning experiences. Note: This AI model marks my first deployment on the Hugging Face platform. I am grateful for the invaluable assistance provided by Vortex Bahi throughout the development and deployment process. Their guidance and support have been instrumental in bringing this project to fruition. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAIBHAV22334455__JARVIS) | Metric |Value| |---------------------------------|----:| |Avg. |35.78| |AI2 Reasoning Challenge (25-Shot)|32.08| |HellaSwag (10-Shot) |56.86| |MMLU (5-Shot) |27.15| |TruthfulQA (0-shot) |37.33| |Winogrande (5-shot) |60.14| |GSM8k (5-shot) | 1.14|
mii-llm/maestrale-chat-v0.3-beta
mii-llm
2024-04-20T17:07:23Z
1,271
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "sft", "it", "chatml", "axolotl", "kpo", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-04T09:51:40Z
--- language: - it license: cc-by-nc-4.0 tags: - sft - it - mistral - chatml - axolotl - kpo prompt_template: <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant model-index: - name: maestrale-chat-v0.3-beta results: [] --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/dgSNbTl.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p> </div> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Maestrale chat beta ༄ By @efederici and @mferraretto ## Model description - **Language Model**: Mistral-7b for the Italian language, continued pre-training for Italian on a curated large-scale high-quality corpus. - **Fine-Tuning**: SFT performed on convs/instructions for three epochs. - **KTO**: Aligned with KTO. **v0.3** - Function calling - Reduced default system prompt to avoid wasting tokens (pre-alignment) This model uses ChatML prompt format: ``` <|im_start|>system Sei un assistente utile.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Usage: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GenerationConfig, TextStreamer ) import torch tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.3-beta") model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.3-beta", load_in_8bit=True, device_map="auto") gen = GenerationConfig( do_sample=True, temperature=0.7, repetition_penalty=1.2, top_k=50, top_p=0.95, max_new_tokens=500, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>") ) streamer = TextStreamer(tokenizer, skip_prompt=True) messages = [ {"role": "system", "content": "Sei un assistente utile."}, {"role": "user", "content": "{prompt}"} ] with torch.no_grad(): temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) inputs = tokenizer(temp, return_tensors="pt").to("cuda") _ = model.generate( **inputs, streamer=streamer, generation_config=gen ) ``` ## Intended uses & limitations It's a beta version, but it's not `safe`. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Jayant9928/orpo_med_v2
Jayant9928
2024-05-21T12:44:32Z
1,271
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-01T17:57:27Z
--- license: apache-2.0 --- Model Card for Model ID Model Details Model Description This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. Developed by: [More Information Needed] Funded by [optional]: [More Information Needed] Shared by [optional]: [More Information Needed] Model type: [More Information Needed] Language(s) (NLP): [More Information Needed] License: [More Information Needed] Finetuned from model [optional]: [More Information Needed] Model Sources [optional] Repository: [More Information Needed] Paper [optional]: [More Information Needed] Demo [optional]: [More Information Needed] Uses Direct Use [More Information Needed] Downstream Use [optional] [More Information Needed] Out-of-Scope Use [More Information Needed] Bias, Risks, and Limitations [More Information Needed] Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] Training Details Training Data [More Information Needed] Training Procedure Preprocessing [optional] [More Information Needed] Training Hyperparameters Training regime: [More Information Needed] Speeds, Sizes, Times [optional] [More Information Needed] Evaluation Testing Data, Factors & Metrics Testing Data [More Information Needed] Factors [More Information Needed] Metrics [More Information Needed] Results [More Information Needed] Summary Model Examination [optional] [More Information Needed] Environmental Impact
mii-llm/minerva-chat-v0.1-alpha-sft
mii-llm
2024-05-20T18:30:09Z
1,271
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "minerva", "sft", "conversational", "it", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-20T17:52:59Z
--- license: cc-by-nc-4.0 language: - it tags: - minerva - sft --- Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft Minerva sft
DavidAU/Dark-Forest-V2-Ultra-Quality-20b-GGUF
DavidAU
2024-06-02T02:32:42Z
1,271
2
null
[ "gguf", "story", "roleplay", "creative", "rp", "fantasy", "story telling", "32 bit upscale", "ultra high precision", "nsfw", "en", "license:apache-2.0", "region:us" ]
null
2024-06-01T13:57:52Z
--- license: apache-2.0 language: - en tags: - story - roleplay - creative - rp - fantasy - story telling - 32 bit upscale - ultra high precision - nsfw --- <B> Ultra High Quality - 20B Dark Forest Version 2.0 - 32 bit upscale </b> Fully rebuilt from master files, including full merge(s) to maintain full 32 bit precision right up until it is compressed into GGUF files, which results in a top-to-bottom upgrade. The result is superior performance in instruction following, reasoning, depth, nuance and emotion. NOTE: There are two original versions of "Dark Forest 20B"; this is an upscale of the second version. <img src="dark-forest.jpg"> On average this means a q4km operates at Q6 levels, and Q6 and Q8 exceed original model full precision performance. Perplexity drop (lower is better) is close to 10% (over 752 points for q4km) for all quants. That means precision has been enhanced for all 20 billion parameters, which affects "brain density" / "function", instruction following and output quality. Imatrix quants to follow shortly. For more details, including a list of enhancements, see our other 32 bit upscale of "Space Whale 20B" rebuilt here: [ https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF ] For Version 1 of Dark Forest Ultra Quality 32 bit upscale please go here: [ https://huggingface.co/DavidAU/Dark-Forest-V1-Ultra-Quality-20b-GGUF ] Special thanks to "TEEZEE" for making both fantastic models of "Dark Forest". <b> Info from the original model card: </B> Warning: This model can produce NSFW content! Results: - main difference to v1.0 - model has much better sense of humor. - produces SFW and NSFW content without issues, switches context seamlessly. - good at following instructions. - good at tracking multiple characters in one scene. - very creative, scenarios produced are mature and complicated, model doesn't shy from writing about PTSD, mental issues or complicated relationships. - NSFW output is more creative and surprising than typical limaRP output. - definitely for mature audiences, not only because of vivid NSFW content but also because of overall maturity of stories it produces. - This is NOT Harry Potter level storytelling. For original model spec and information please visit: [ https://huggingface.co/TeeZee/DarkForest-20B-v2.0 ]
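A minimal sketch of running one of these GGUF quants locally with llama-cpp-python; the filename below is a placeholder for whichever quant you actually download, and the sampling settings are arbitrary:

```python
from llama_cpp import Llama

# placeholder path: substitute the GGUF file you downloaded from this repo
llm = Llama(model_path="./dark-forest-v2-20b.Q4_K_M.gguf", n_ctx=4096)

prompt = "Write the opening paragraph of a dark fantasy story set in an ancient forest."
out = llm(prompt, max_tokens=300, temperature=0.8)
print(out["choices"][0]["text"])
```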
Salesforce/moirai-1.1-R-small
Salesforce
2024-06-18T17:33:23Z
1,271
1
transformers
[ "transformers", "safetensors", "pytorch_model_hub_mixin", "model_hub_mixin", "endpoints_compatible", "region:us" ]
null
2024-06-14T08:11:18Z
--- tags: - pytorch_model_hub_mixin - model_hub_mixin --- This is the new, updated version of Moirai-1.0-R (https://huggingface.co/Salesforce/moirai-1.0-R-base). The Moirai-1.1-R model achieves significant improvements (~20%) in Normalised Mean Absolute Error (NMAE) for low-frequency cases such as yearly and quarterly data, evaluated on 40 datasets from the Monash repository.
gglabs/Gemma-kiosk-scenario-11-epoch
gglabs
2024-06-20T18:17:30Z
1,271
0
transformers
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:gemmathon/gemma-2b-ko-dev-pbmt192", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-20T18:10:41Z
--- base_model: gemmathon/gemma-2b-ko-dev-pbmt192 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - gemma - gguf --- # Uploaded model - **Developed by:** gglabs - **License:** apache-2.0 - **Finetuned from model :** gemmathon/gemma-2b-ko-dev-pbmt192 This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
misri/jibMixRealisticXL_v130RisenFromAshes
misri
2024-07-01T10:55:08Z
1,271
0
diffusers
[ "diffusers", "safetensors", "license:unknown", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-07-01T09:25:19Z
--- license: unknown ---
wisdomik/QuiltNet-B-16-PMB
wisdomik
2024-03-20T17:16:07Z
1,270
3
open_clip
[ "open_clip", "clip", "zero-shot-image-classification", "vision", "language", "histopathology", "histology", "medical", "arxiv:2306.11207", "license:mit", "region:us" ]
zero-shot-image-classification
2023-06-20T00:42:00Z
--- tags: - zero-shot-image-classification - clip - vision - language - histopathology - histology - medical library_tag: open_clip license: mit widget: - src: >- https://quilt1m.github.io/img/BREST092.jpg candidate_labels: adipose tissue, debris tissue, lymphocytes tissue, mucus tissue, smooth muscle tissue, normal colon mucosa tissue, cancer-associated stroma tissue, colorectal adenocarcinoma epithelium tissue example_title: Tissue phenotyping - src: >- https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology example_title: squamous cell carcinoma histopathology - src: >- https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology example_title: adenocarcinoma histopathology pipeline_tag: zero-shot-image-classification --- ## QuiltNet-B-16-PMB Description [QuiltNet-B-32/PMB](https://github.com/wisdomikezogwo/quilt1m/) is a ViT-B/16 image tower and PubMedBERT text tower vision-language foundation model trained on the [Quilt-1M](https://quilt1m.github.io/) dataset curated from representative histopathology videos. It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering. QuiltNet establishes new state of the art in a wide range of standard datasets, and substantially outperforms prior VLP approaches: ![](barchart_zeroshot.png) # Citation ```bibtex @misc{ikezogwo2023quilt1m, title={Quilt-1M: One Million Image-Text Pairs for Histopathology}, author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro}, year={2023}, eprint={2306.11207}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ### Intended Use The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. #### Primary intended uses The primary intended users of these models are AI researchers. 
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision histopathology models. ### Out-of-Scope Use Cases **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further to the above notice, the Quilt-1M dataset used in training of these models has additional considerations, see below. ## Training Data This model was trained with [QUILT-1M](https://quilt1m.github.io/), an image-text dataset for histopathology. Curated from educational videos on YouTube, QUILT-1M contributes the largest dataset for vision language modeling in histopathology. **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale histopathology datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. # Evaluation Evaluation was done with code in the [CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark), and results can be found in the paper for a range of histology tasks and datasets. # Disclaimer It is important to note that the results obtained from this model are not intended to constitute medical advice or replace consultation with a qualified medical professional. The use of this model is solely at your own risk and should be consistent with applicable laws, regulations, and ethical considerations. We do not warrant or guarantee the accuracy, completeness, suitability, or usefulness of this model for any particular purpose, and we hereby disclaim any liability arising from any reliance placed on this model or any results obtained from its use. # Privacy In accordance with the privacy policy of YouTube, only Video IDs data is redistributed by us. It is strictly prohibited to redistribute any content apart from the Video IDs. Any distribution carried out must adhere to the laws and regulations applicable in your jurisdiction, including export control laws and embargoes.
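As a hedged illustration of the zero-shot classification use described above, loading through open_clip's Hub integration should look roughly like this (the hf-hub path and candidate labels mirror this card; the image path is a placeholder, and the hf-hub loader is assumed to resolve this repository):

```python
import torch
from PIL import Image
import open_clip

# assumes the checkpoint resolves through open_clip's hf-hub loader
model, preprocess = open_clip.create_model_from_pretrained("hf-hub:wisdomik/QuiltNet-B-16-PMB")
tokenizer = open_clip.get_tokenizer("hf-hub:wisdomik/QuiltNet-B-16-PMB")

image = preprocess(Image.open("patch.png")).unsqueeze(0)  # placeholder histology patch
labels = ["adenocarcinoma histopathology", "squamous cell carcinoma histopathology"]
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```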
GritLM/GritLM-8x7B-KTO
GritLM
2024-06-14T13:52:15Z
1,270
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "custom_code", "dataset:GritLM/tulu2", "arxiv:2402.01306", "arxiv:2402.09906", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-17T16:24:34Z
--- pipeline_tag: text-generation inference: true license: apache-2.0 datasets: - GritLM/tulu2 --- # Model Summary A [**KTO**](https://arxiv.org/abs/2402.01306) version of https://huggingface.co/GritLM/GritLM-8x7B > GritLM is a generative representational instruction tuned language model. It unifies text representation (embedding) and text generation into a single model achieving state-of-the-art performance on both types of tasks. - **Repository:** [ContextualAI/gritlm](https://github.com/ContextualAI/gritlm) - **Paper:** https://arxiv.org/abs/2402.09906 - **Logs:** https://wandb.ai/muennighoff/gritlm/runs/0uui712t/overview - **Script:** https://github.com/ContextualAI/gritlm/blob/main/scripts/training/train_gritlm_7b.sh | Model | Description | |-------|-------------| | [GritLM 7B](https://hf.co/GritLM/GritLM-7B) | Mistral 7B finetuned using GRIT | | [GritLM 8x7B](https://hf.co/GritLM/GritLM-8x7B) | Mixtral 8x7B finetuned using GRIT | # Use The model usage is documented [here](https://github.com/ContextualAI/gritlm?tab=readme-ov-file#inference). # Citation ```bibtex @misc{muennighoff2024generative, title={Generative Representational Instruction Tuning}, author={Niklas Muennighoff and Hongjin Su and Liang Wang and Nan Yang and Furu Wei and Tao Yu and Amanpreet Singh and Douwe Kiela}, year={2024}, eprint={2402.09906}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
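As a rough, untested sketch of plain text generation with this checkpoint (loading with `trust_remote_code=True` because the repository ships custom code; the `<|user|>`/`<|assistant|>` prompt format follows the GritLM documentation linked above and should be double-checked there):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "GritLM/GritLM-8x7B-KTO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# prompt format per the GritLM repository docs (verify against the linked usage guide)
prompt = "<|user|>\nWhat does generative representational instruction tuning unify?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```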
Classical/Yinka
Classical
2024-05-30T09:06:41Z
1,270
9
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "mteb", "model-index", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2024-05-30T08:40:46Z
--- tags: - mteb model-index: - name: checkpoint-1431 results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 56.306314279047875 - type: cos_sim_spearman value: 61.020227685004016 - type: euclidean_pearson value: 58.61821670933433 - type: euclidean_spearman value: 60.131457106640674 - type: manhattan_pearson value: 58.6189460369694 - type: manhattan_spearman value: 60.126350618526224 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 55.8612958476143 - type: cos_sim_spearman value: 59.01977664864512 - type: euclidean_pearson value: 62.028094897243655 - type: euclidean_spearman value: 58.6046814257705 - type: manhattan_pearson value: 62.02580042431887 - type: manhattan_spearman value: 58.60626890004892 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 49.496 - type: f1 value: 46.673963383873065 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 70.73971622592535 - type: cos_sim_spearman value: 72.76102992060764 - type: euclidean_pearson value: 71.04525865868672 - type: euclidean_spearman value: 72.4032852155075 - type: manhattan_pearson value: 71.03693009336658 - type: manhattan_spearman value: 72.39635701224252 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 56.34751074520767 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 48.4856662121073 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 89.26384109024997 - type: mrr value: 91.27261904761905 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 90.0464058154547 - type: mrr value: 92.06480158730159 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 27.236 - type: map_at_10 value: 40.778 - type: map_at_100 value: 42.692 - type: map_at_1000 value: 42.787 - type: map_at_3 value: 36.362 - type: map_at_5 value: 38.839 - type: mrr_at_1 value: 41.335 - type: mrr_at_10 value: 49.867 - type: mrr_at_100 value: 50.812999999999995 - type: mrr_at_1000 value: 50.848000000000006 - type: mrr_at_3 value: 47.354 - type: mrr_at_5 value: 48.718 - type: ndcg_at_1 value: 41.335 - type: ndcg_at_10 value: 47.642 - type: ndcg_at_100 value: 54.855 - type: ndcg_at_1000 value: 56.449000000000005 - type: ndcg_at_3 value: 42.203 - type: ndcg_at_5 value: 44.416 - type: precision_at_1 value: 41.335 - type: precision_at_10 value: 10.568 - type: precision_at_100 value: 1.6400000000000001 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 23.998 - type: precision_at_5 value: 17.389 - type: recall_at_1 value: 27.236 - type: recall_at_10 value: 58.80800000000001 - type: recall_at_100 value: 88.411 - type: recall_at_1000 value: 99.032 - type: recall_at_3 
value: 42.253 - type: recall_at_5 value: 49.118 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 86.03728202044498 - type: cos_sim_ap value: 92.49469583272597 - type: cos_sim_f1 value: 86.74095974528088 - type: cos_sim_precision value: 84.43657294664601 - type: cos_sim_recall value: 89.17465513210195 - type: dot_accuracy value: 72.21888153938664 - type: dot_ap value: 80.59377163340332 - type: dot_f1 value: 74.96686040583258 - type: dot_precision value: 66.4737793851718 - type: dot_recall value: 85.94809445873275 - type: euclidean_accuracy value: 85.47203848466627 - type: euclidean_ap value: 91.89152584749868 - type: euclidean_f1 value: 86.38105975197294 - type: euclidean_precision value: 83.40953625081646 - type: euclidean_recall value: 89.5721299976619 - type: manhattan_accuracy value: 85.3758268190018 - type: manhattan_ap value: 91.88989707722311 - type: manhattan_f1 value: 86.39767519839052 - type: manhattan_precision value: 82.76231263383298 - type: manhattan_recall value: 90.36707972878185 - type: max_accuracy value: 86.03728202044498 - type: max_ap value: 92.49469583272597 - type: max_f1 value: 86.74095974528088 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 74.34100000000001 - type: map_at_10 value: 82.49499999999999 - type: map_at_100 value: 82.64200000000001 - type: map_at_1000 value: 82.643 - type: map_at_3 value: 81.142 - type: map_at_5 value: 81.95400000000001 - type: mrr_at_1 value: 74.71 - type: mrr_at_10 value: 82.553 - type: mrr_at_100 value: 82.699 - type: mrr_at_1000 value: 82.70100000000001 - type: mrr_at_3 value: 81.279 - type: mrr_at_5 value: 82.069 - type: ndcg_at_1 value: 74.605 - type: ndcg_at_10 value: 85.946 - type: ndcg_at_100 value: 86.607 - type: ndcg_at_1000 value: 86.669 - type: ndcg_at_3 value: 83.263 - type: ndcg_at_5 value: 84.71600000000001 - type: precision_at_1 value: 74.605 - type: precision_at_10 value: 9.758 - type: precision_at_100 value: 1.005 - type: precision_at_1000 value: 0.101 - type: precision_at_3 value: 29.996000000000002 - type: precision_at_5 value: 18.736 - type: recall_at_1 value: 74.34100000000001 - type: recall_at_10 value: 96.523 - type: recall_at_100 value: 99.473 - type: recall_at_1000 value: 100.0 - type: recall_at_3 value: 89.278 - type: recall_at_5 value: 92.83500000000001 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 26.950000000000003 - type: map_at_10 value: 82.408 - type: map_at_100 value: 85.057 - type: map_at_1000 value: 85.09100000000001 - type: map_at_3 value: 57.635999999999996 - type: map_at_5 value: 72.48 - type: mrr_at_1 value: 92.15 - type: mrr_at_10 value: 94.554 - type: mrr_at_100 value: 94.608 - type: mrr_at_1000 value: 94.61 - type: mrr_at_3 value: 94.292 - type: mrr_at_5 value: 94.459 - type: ndcg_at_1 value: 92.15 - type: ndcg_at_10 value: 89.108 - type: ndcg_at_100 value: 91.525 - type: ndcg_at_1000 value: 91.82900000000001 - type: ndcg_at_3 value: 88.44 - type: ndcg_at_5 value: 87.271 - type: precision_at_1 value: 92.15 - type: precision_at_10 value: 42.29 - type: precision_at_100 value: 4.812 - type: precision_at_1000 value: 0.48900000000000005 - type: precision_at_3 value: 79.14999999999999 - type: precision_at_5 value: 66.64 - type: recall_at_1 value: 
26.950000000000003 - type: recall_at_10 value: 89.832 - type: recall_at_100 value: 97.921 - type: recall_at_1000 value: 99.471 - type: recall_at_3 value: 59.562000000000005 - type: recall_at_5 value: 76.533 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 53.5 - type: map_at_10 value: 63.105999999999995 - type: map_at_100 value: 63.63100000000001 - type: map_at_1000 value: 63.641999999999996 - type: map_at_3 value: 60.617 - type: map_at_5 value: 62.132 - type: mrr_at_1 value: 53.5 - type: mrr_at_10 value: 63.105999999999995 - type: mrr_at_100 value: 63.63100000000001 - type: mrr_at_1000 value: 63.641999999999996 - type: mrr_at_3 value: 60.617 - type: mrr_at_5 value: 62.132 - type: ndcg_at_1 value: 53.5 - type: ndcg_at_10 value: 67.92200000000001 - type: ndcg_at_100 value: 70.486 - type: ndcg_at_1000 value: 70.777 - type: ndcg_at_3 value: 62.853 - type: ndcg_at_5 value: 65.59899999999999 - type: precision_at_1 value: 53.5 - type: precision_at_10 value: 8.309999999999999 - type: precision_at_100 value: 0.951 - type: precision_at_1000 value: 0.097 - type: precision_at_3 value: 23.1 - type: precision_at_5 value: 15.2 - type: recall_at_1 value: 53.5 - type: recall_at_10 value: 83.1 - type: recall_at_100 value: 95.1 - type: recall_at_1000 value: 97.39999999999999 - type: recall_at_3 value: 69.3 - type: recall_at_5 value: 76.0 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 51.773759138130046 - type: f1 value: 40.38600802756481 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 88.48030018761726 - type: ap value: 59.2732541555627 - type: f1 value: 83.58836007358619 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 73.67511194245922 - type: cos_sim_spearman value: 79.43347759067298 - type: euclidean_pearson value: 79.04491504318766 - type: euclidean_spearman value: 79.14478545356785 - type: manhattan_pearson value: 79.03365022867428 - type: manhattan_spearman value: 79.13172717619908 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 67.184 - type: map_at_10 value: 76.24600000000001 - type: map_at_100 value: 76.563 - type: map_at_1000 value: 76.575 - type: map_at_3 value: 74.522 - type: map_at_5 value: 75.598 - type: mrr_at_1 value: 69.47 - type: mrr_at_10 value: 76.8 - type: mrr_at_100 value: 77.082 - type: mrr_at_1000 value: 77.093 - type: mrr_at_3 value: 75.29400000000001 - type: mrr_at_5 value: 76.24 - type: ndcg_at_1 value: 69.47 - type: ndcg_at_10 value: 79.81099999999999 - type: ndcg_at_100 value: 81.187 - type: ndcg_at_1000 value: 81.492 - type: ndcg_at_3 value: 76.536 - type: ndcg_at_5 value: 78.367 - type: precision_at_1 value: 69.47 - type: precision_at_10 value: 9.599 - type: precision_at_100 value: 1.026 - type: precision_at_1000 value: 0.105 - type: precision_at_3 value: 28.777 - type: precision_at_5 value: 18.232 - type: recall_at_1 value: 67.184 - type: recall_at_10 value: 90.211 - type: recall_at_100 value: 96.322 - type: recall_at_1000 value: 98.699 - type: recall_at_3 value: 81.556 - type: recall_at_5 value: 85.931 - task: 
type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.96032279757901 - type: f1 value: 73.48052314033545 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 84.64357767316744 - type: f1 value: 83.58250539497922 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 56.00000000000001 - type: map_at_10 value: 62.066 - type: map_at_100 value: 62.553000000000004 - type: map_at_1000 value: 62.598 - type: map_at_3 value: 60.4 - type: map_at_5 value: 61.370000000000005 - type: mrr_at_1 value: 56.2 - type: mrr_at_10 value: 62.166 - type: mrr_at_100 value: 62.653000000000006 - type: mrr_at_1000 value: 62.699000000000005 - type: mrr_at_3 value: 60.5 - type: mrr_at_5 value: 61.47 - type: ndcg_at_1 value: 56.00000000000001 - type: ndcg_at_10 value: 65.199 - type: ndcg_at_100 value: 67.79899999999999 - type: ndcg_at_1000 value: 69.056 - type: ndcg_at_3 value: 61.814 - type: ndcg_at_5 value: 63.553000000000004 - type: precision_at_1 value: 56.00000000000001 - type: precision_at_10 value: 7.51 - type: precision_at_100 value: 0.878 - type: precision_at_1000 value: 0.098 - type: precision_at_3 value: 21.967 - type: precision_at_5 value: 14.02 - type: recall_at_1 value: 56.00000000000001 - type: recall_at_10 value: 75.1 - type: recall_at_100 value: 87.8 - type: recall_at_1000 value: 97.7 - type: recall_at_3 value: 65.9 - type: recall_at_5 value: 70.1 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 32.74158258279793 - type: mrr value: 31.56071428571428 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 78.96666666666667 - type: f1 value: 78.82528563818045 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 83.54087709799674 - type: cos_sim_ap value: 87.26170197077586 - type: cos_sim_f1 value: 84.7609561752988 - type: cos_sim_precision value: 80.20735155513667 - type: cos_sim_recall value: 89.86272439281943 - type: dot_accuracy value: 72.22523010286952 - type: dot_ap value: 79.51975358187732 - type: dot_f1 value: 76.32183908045977 - type: dot_precision value: 67.58957654723126 - type: dot_recall value: 87.64519535374869 - type: euclidean_accuracy value: 82.0249052517596 - type: euclidean_ap value: 85.32829948726406 - type: euclidean_f1 value: 83.24924318869829 - type: euclidean_precision value: 79.71014492753623 - type: euclidean_recall value: 87.11721224920802 - type: manhattan_accuracy value: 82.13318895506227 - type: manhattan_ap value: 85.28856869288006 - type: manhattan_f1 value: 83.34946757018393 - type: manhattan_precision value: 76.94369973190348 - type: manhattan_recall value: 90.91869060190075 - type: max_accuracy value: 83.54087709799674 - type: max_ap value: 87.26170197077586 - type: max_f1 value: 84.7609561752988 - task: type: Classification dataset: type: 
C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy value: 94.56 - type: ap value: 92.80848436710805 - type: f1 value: 94.54951966576111 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 39.0866558287863 - type: cos_sim_spearman value: 45.9211126233312 - type: euclidean_pearson value: 44.86568743222145 - type: euclidean_spearman value: 45.63882757207507 - type: manhattan_pearson value: 44.89480036909126 - type: manhattan_spearman value: 45.65929449046206 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 43.04701793979569 - type: cos_sim_spearman value: 44.87491033760315 - type: euclidean_pearson value: 36.2004061032567 - type: euclidean_spearman value: 41.44823909683865 - type: manhattan_pearson value: 36.136113427955095 - type: manhattan_spearman value: 41.39225495993949 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: None metrics: - type: cos_sim_pearson value: 61.65611315777857 - type: cos_sim_spearman value: 64.4067673105648 - type: euclidean_pearson value: 61.814977248797184 - type: euclidean_spearman value: 63.99473350700169 - type: manhattan_pearson value: 61.684304629588624 - type: manhattan_spearman value: 63.97831213239316 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 76.57324933064379 - type: cos_sim_spearman value: 79.23602286949782 - type: euclidean_pearson value: 80.28226284310948 - type: euclidean_spearman value: 80.32210477608423 - type: manhattan_pearson value: 80.27262188617811 - type: manhattan_spearman value: 80.31619185039723 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 67.05266891356277 - type: mrr value: 77.1906333623497 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 28.212 - type: map_at_10 value: 78.932 - type: map_at_100 value: 82.51899999999999 - type: map_at_1000 value: 82.575 - type: map_at_3 value: 55.614 - type: map_at_5 value: 68.304 - type: mrr_at_1 value: 91.211 - type: mrr_at_10 value: 93.589 - type: mrr_at_100 value: 93.659 - type: mrr_at_1000 value: 93.662 - type: mrr_at_3 value: 93.218 - type: mrr_at_5 value: 93.453 - type: ndcg_at_1 value: 91.211 - type: ndcg_at_10 value: 86.24000000000001 - type: ndcg_at_100 value: 89.614 - type: ndcg_at_1000 value: 90.14 - type: ndcg_at_3 value: 87.589 - type: ndcg_at_5 value: 86.265 - type: precision_at_1 value: 91.211 - type: precision_at_10 value: 42.626 - type: precision_at_100 value: 5.043 - type: precision_at_1000 value: 0.517 - type: precision_at_3 value: 76.42 - type: precision_at_5 value: 64.045 - type: recall_at_1 value: 28.212 - type: recall_at_10 value: 85.223 - type: recall_at_100 value: 96.229 - type: recall_at_1000 value: 98.849 - type: recall_at_3 value: 57.30800000000001 - type: recall_at_5 value: 71.661 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: None metrics: - type: accuracy value: 54.385000000000005 - type: f1 value: 52.38762400903556 - task: type: Clustering dataset: 
type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 74.55283855942916 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 68.55115316700493 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 58.8 - type: map_at_10 value: 69.035 - type: map_at_100 value: 69.52000000000001 - type: map_at_1000 value: 69.529 - type: map_at_3 value: 67.417 - type: map_at_5 value: 68.407 - type: mrr_at_1 value: 58.8 - type: mrr_at_10 value: 69.035 - type: mrr_at_100 value: 69.52000000000001 - type: mrr_at_1000 value: 69.529 - type: mrr_at_3 value: 67.417 - type: mrr_at_5 value: 68.407 - type: ndcg_at_1 value: 58.8 - type: ndcg_at_10 value: 73.395 - type: ndcg_at_100 value: 75.62 - type: ndcg_at_1000 value: 75.90299999999999 - type: ndcg_at_3 value: 70.11800000000001 - type: ndcg_at_5 value: 71.87400000000001 - type: precision_at_1 value: 58.8 - type: precision_at_10 value: 8.68 - type: precision_at_100 value: 0.9690000000000001 - type: precision_at_1000 value: 0.099 - type: precision_at_3 value: 25.967000000000002 - type: precision_at_5 value: 16.42 - type: recall_at_1 value: 58.8 - type: recall_at_10 value: 86.8 - type: recall_at_100 value: 96.89999999999999 - type: recall_at_1000 value: 99.2 - type: recall_at_3 value: 77.9 - type: recall_at_5 value: 82.1 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 89.42 - type: ap value: 75.35978503182068 - type: f1 value: 88.01006394348263 ---

## Yinka

The Yinka embedding model was obtained by continuing training from the open-source model [stella-v3.5-mrl](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d), using the multi-task hybrid loss training described in [piccolo2](https://huggingface.co/sensenova/piccolo-large-zh-v2). Likewise, this model also supports variable embedding dimensions.

## Usage

The model is used in the same way as [stella-v3.5-mrl](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d); no prompt prefix is needed.

```python
from sentence_transformers import SentenceTransformer
from sklearn.preprocessing import normalize

model = SentenceTransformer("Classical/Yinka")
# Note: do not normalize yet! Take the first n dimensions first, then normalize.
vectors = model.encode(["text1", "text2"], normalize_embeddings=False)
print(vectors.shape)  # shape is [2, 1792]
n_dims = 768
cut_vecs = normalize(vectors[:, :n_dims])
```

## Results

| Model Name | Model Size (GB) | Dimension | Sequence Length | Classification (9) | Clustering (4) | Pair Classification (2) | Reranking (4) | Retrieval (8) | STS (8) | Average (35) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [Yinka](https://huggingface.co/Classical/Yinka) | 1.21 | 1792 | 512 | 74.30 | 61.99 | 89.87 | 69.77 | 74.40 | 63.30 | 70.79 |
| [stella-v3.5-mrl](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | 1.21 | 1792 | 512 | 71.56 | 54.39 | 88.09 | 68.45 | 73.51 | 62.48 | 68.56 |
| [piccolo-large-zh-v2](https://huggingface.co/sensenova/piccolo-large-zh-v2) | 1.21 | 1792 | 512 | 74.59 | 62.17 | 90.24 | 70 | 74.36 | 63.5 | 70.95 |

## Training Details

TODO

## Licence

This model is released under the MIT licence.
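As a follow-up to the snippet in the card above, the sketch below shows one way the truncated vectors might be compared. It is an illustrative assumption rather than part of the original card: the example sentences, the 768-dimension cut, and the dot-product-after-normalization scoring are all choices made here.

```python
from sentence_transformers import SentenceTransformer
from sklearn.preprocessing import normalize
import numpy as np

# Illustrative sketch (not from the original card): score two near-duplicate
# Chinese sentences with truncated Yinka embeddings. Truncate to the first
# n_dims dimensions, then L2-normalize, so the dot product equals cosine similarity.
model = SentenceTransformer("Classical/Yinka")
vectors = model.encode(["今天天气不错", "今天天气很好"], normalize_embeddings=False)
n_dims = 768  # any cut-off up to the full 1792 dimensions
cut_vecs = normalize(vectors[:, :n_dims])
similarity = float(np.dot(cut_vecs[0], cut_vecs[1]))
print(f"cosine similarity at {n_dims} dims: {similarity:.4f}")
```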
John6666/celeste-pony-v2-sdxl
John6666
2024-06-30T07:05:35Z
1,270
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-30T07:00:52Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/532935/celeste-pony?modelVersionId=608484).
Harveenchadha/vakyansh-wav2vec2-bengali-bnm-200
Harveenchadha
2021-08-02T18:38:54Z
1,269
2
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
Entry not found
digiplay/Koji_v2.1_diffusers
digiplay
2023-11-30T17:57:12Z
1,269
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-10-12T05:27:26Z
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info : https://civitai.com/models/41916?modelVersionId=90674
llava-hf/bakLlava-v1-hf
llava-hf
2024-06-28T12:22:10Z
1,268
48
transformers
[ "transformers", "safetensors", "llava", "pretraining", "image-to-text", "en", "dataset:liuhaotian/LLaVA-Instruct-150K", "region:us" ]
image-to-text
2023-12-05T07:58:20Z
--- language: - en datasets: - liuhaotian/LLaVA-Instruct-150K pipeline_tag: image-to-text inference: false arxiv: 2304.08485 --- # BakLLaVA Model Card BakLlava is a model that is derived from the original Llava architecture, that uses Mistral-7b as a text backbone. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7e345f92b20f7a38bf47a/V5lpOHWGGYJ2yPpEo_8i1.png) Below is the model card of BakLlava model 7b, which is copied from the original BakLlava model card that you can find [here](https://huggingface.co/SkunkworksAI/BakLLaVA-1). > BakLLaVA 1 is a Mistral 7B base augmented with the LLaVA 1.5 architecture. In this first version, we showcase that a Mistral 7B base outperforms Llama 2 13B on several benchmarks. You can run BakLLaVA-1 on our repo. We are currently updating it to make it easier for you to finetune and inference. (https://github.com/SkunkworksAI/BakLLaVA). > Note: BakLLaVA-1 is fully open-source but was trained on certain data that includes LLaVA's corpus which is not commercially permissive. We will fix this in the upcoming release. > BakLLaVA 2 is cooking with a significantly larger (commercially viable) dataset and a novel architecture that expands beyond the current LLaVA method. BakLLaVA-2 will do away with the restrictions of BakLLaVA-1. ## How to use the model First, make sure to have `transformers >= 4.35.3`. The model supports multi-image and multi-prompt generation. Meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` to the location where you want to query images: Check out also the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1qsl6cd2c8gGtEW1xV5io7S8NHh-Cp1TV?usp=sharing) Or check out our Spaces demo! [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-md-dark.svg)](https://huggingface.co/spaces/llava-hf/llava-4bit) ### Using `pipeline`: ```python from transformers import pipeline from PIL import Image import requests model_id = "llava-hf/bakLlava-v1-hf" pipe = pipeline("image-to-text", model=model_id) url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg" image = Image.open(requests.get(url, stream=True).raw) prompt = "USER: <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT:" outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200}) print(outputs) >>> {"generated_text": "\nUSER: What does the label 15 represent? 
(1) lava (2) core (3) tunnel (4) ash cloud\nASSISTANT: Lava"} ``` ### Using pure `transformers`: Below is an example script to run generation in `float16` precision on a GPU device: ```python import requests from PIL import Image import torch from transformers import AutoProcessor, LlavaForConditionalGeneration model_id = "llava-hf/bakLlava-v1-hf" prompt = "USER: <image>\nWhat are these?\nASSISTANT:" image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, ).to(0) processor = AutoProcessor.from_pretrained(model_id) raw_image = Image.open(requests.get(image_file, stream=True).raw) inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs, max_new_tokens=200, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) ``` ### Model optimization #### 4-bit quantization through `bitsandbytes` library First make sure to install `bitsandbytes`, `pip install bitsandbytes` and make sure to have access to a CUDA compatible GPU device. Simply change the snippet above with: ```diff model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + load_in_4bit=True ) ``` #### Use Flash-Attention 2 to further speed-up generation First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) regarding that package installation. Simply change the snippet above with: ```diff model = LlavaForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + use_flash_attention_2=True ).to(0) ``` # Evaluations ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b7e345f92b20f7a38bf47a/qdYubrBmF7ztAHgdfkkwG.png) # Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 450K academic-task-oriented VQA data mixture. - 40K ShareGPT data. - Additional private data (permissive) ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
giux78/zefiro-7b-dpo-qlora-ITA-v0.7
giux78
2024-02-14T19:12:49Z
1,268
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-14T19:09:00Z
Entry not found
TechxGenus/Meta-Llama-3-70B-Instruct-AWQ
TechxGenus
2024-04-19T15:21:49Z
1,268
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-04-19T12:50:05Z
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- AWQ quantized version of Meta-Llama-3-70B-Instruct model. --- ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. 
<table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-70B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-70B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. 
## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
skumar9/Llama-medx_v3.1
skumar9
2024-05-02T04:42:56Z
1,268
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-02T04:38:23Z
--- library_name: transformers license: apache-2.0 ---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
John6666/3x3x3mixxl-v2-sdxl-spo
John6666
2024-06-29T00:45:19Z
1,268
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "SPO", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-29T00:39:56Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony - SPO --- Original model is [here](https://civitai.com/models/464044?modelVersionId=605542).
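The card only links to the original Civitai upload. Since the repository is tagged for the diffusers SDXL pipeline, a minimal loading sketch could look like the following (this assumes the standard `StableDiffusionXLPipeline` layout; the prompt and sampler settings are illustrative, not recommendations from the author):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint in half precision and move it to the GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/3x3x3mixxl-v2-sdxl-spo",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# Pony-derived models usually expect quality/score tags; this prompt is only an example.
prompt = "score_9, score_8_up, 1girl, anime style, detailed background"
image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("sample.png")
```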
Helsinki-NLP/opus-mt-mk-en
Helsinki-NLP
2023-08-16T12:01:06Z
1,267
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "mk", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 ---

### opus-mt-mk-en

* source languages: mk
* target languages: en
* OPUS readme: [mk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mk.en | 59.8 | 0.720 |
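The card reports benchmark scores but no usage snippet. A minimal translation sketch with the standard MarianMT classes in 🤗 Transformers is shown below (the model id comes from this card; the example sentence and generation settings are illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mk-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Macedonian source sentence (roughly: "Good morning, how are you?")
src_texts = ["Добро утро, како си?"]

batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```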
somosnlp-hackathon-2022/paraphrase-spanish-distilroberta
somosnlp-hackathon-2022
2022-04-02T18:33:17Z
1,267
13
sentence-transformers
[ "sentence-transformers", "pytorch", "roberta", "feature-extraction", "sentence-similarity", "transformers", "es", "dataset:hackathon-pln-es/parallel-sentences", "arxiv:2004.09813", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2022-03-30T17:58:23Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers language: - es datasets: - hackathon-pln-es/parallel-sentences widget: - text: "A ver si nos tenemos que poner todos en huelga hasta cobrar lo que queramos." - text: "La huelga es el método de lucha más eficaz para conseguir mejoras en el salario." - text: "Tendremos que optar por hacer una huelga para cobrar lo que queremos." - text: "Queda descartada la huelga aunque no cobremos lo que queramos." ---

# paraphrase-spanish-distilroberta

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

We follow a **teacher-student** transfer learning approach to train a `bertin-roberta-base-spanish` model using parallel EN-ES sentence pairs.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["Este es un ejemplo", "Cada oración es transformada"]

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['Este es un ejemplo', 'Cada oración es transformada']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')
model = AutoModel.from_pretrained('hackathon-pln-es/paraphrase-spanish-distilroberta')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

# Normalize embeddings
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Evaluation Results

Similarity evaluation on STS-2017.es-en.txt and STS-2017.es-es.txt (translated manually for evaluation purposes).

We measure the semantic textual similarity (STS) between sentence pairs in different languages:

### ES-ES

| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8495 | 0.8579 | 0.8675 | 0.8474 | 0.8676 | 0.8478 | 0.8277 | 0.8258 |

### ES-EN

| cosine_pearson | cosine_spearman | manhattan_pearson | manhattan_spearman | euclidean_pearson | euclidean_spearman | dot_pearson | dot_spearman |
| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |
| 0.8344 | 0.8448 | 0.8279 | 0.8168 | 0.8282 | 0.8159 | 0.8083 | 0.8145 |

------

## Intended uses

Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

## Background

This model is a bilingual Spanish-English model trained according to the instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder ([paraphrase-mpnet-base-v2](https://www.sbert.net/docs/pretrained_models.html#sentence-embedding-models)) as the teacher model, and the pretrained Spanish [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) as the student model.

We developed this model during the [Hackathon 2022 NLP - Spanish](https://somosnlp.org/hackathon), organized by the hackathon-pln-es organization.

### Training data

We use the concatenation of multiple datasets with sentence pairs (EN-ES). You can check out the dataset that was used during training: [parallel-sentences](https://huggingface.co/datasets/hackathon-pln-es/parallel-sentences)

| Dataset |
|--------------------------------------------------------|
| AllNLI - ES (SNLI + MultiNLI) |
| EuroParl |
| JW300 |
| News Commentary |
| Open Subtitles |
| TED 2020 |
| Tatoeba |
| WikiMatrix |

## Authors

- [Anibal Pérez](https://huggingface.co/Anarpego)
- [Emilio Tomás Ariza](https://huggingface.co/medardodt)
- [Lautaro Gesuelli Pinto](https://huggingface.co/lautaro)
- [Mauricio Mazuecos](https://huggingface.co/mmazuecos)
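For the sentence-similarity use case described under "Intended uses", a minimal scoring sketch with the `sentence_transformers.util.cos_sim` helper could look like this (the example sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('hackathon-pln-es/paraphrase-spanish-distilroberta')

# One Spanish query against an English paraphrase and an unrelated sentence
queries = ["Tendremos que hacer huelga para cobrar lo que queremos."]
candidates = [
    "We will have to go on strike to get paid what we want.",
    "The weather is nice today.",
]

scores = util.cos_sim(model.encode(queries), model.encode(candidates))
print(scores)  # the paraphrase should receive the higher similarity score
```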
ArthurConmy/redwood_attn_2l
ArthurConmy
2023-03-20T00:30:19Z
1,267
0
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
2023-02-24T22:29:18Z
Entry not found
leondz/refutation_detector_distilbert
leondz
2023-06-06T00:14:54Z
1,267
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "en", "arxiv:2304.13734", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-06-02T20:15:04Z
--- license: cc-by-4.0 language: - en ---

These are responses designed to capture a model refuting a false claim. They are the results of prompting OpenAI gpt-3.5-turbo on June 1, 2023. Prompts are constructed by prepending "Explain why" to claims made in the True-False dataset provided alongside [The Internal State of an LLM Knows When its Lying](https://arxiv.org/abs/2304.13734). Only the first sentence of each response is included (via nltk.sent_tokenize). The original labels are used, where 0 corresponds to a false claim; that is, the 0 labels should be almost all refutations. Spurious and missing refutations, corresponding to about 10% of the data, were removed by hand. These were caused either by an incorrect model assertion or by errors in the source data.
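A minimal sketch of querying the classifier with the 🤗 Transformers `text-classification` pipeline is shown below (the example response is illustrative; per the card, label 0 corresponds to a false claim, i.e. a refutation, and the exact label strings returned should be checked against the model config):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="leondz/refutation_detector_distilbert",
)

# A response that refutes a false claim, similar in shape to the training data
response = "That claim is incorrect: the Earth orbits the Sun, not the other way around."
print(clf(response))  # returns a label and a confidence score
```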