Dataset columns (as reported by the dataset viewer):

| Column | Dtype | Values |
|:--|:--|:--|
| modelId | string | lengths 5 to 122 |
| author | string | lengths 2 to 42 |
| last_modified | unknown | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string | 245 classes |
| tags | sequence | lengths 1 to 4.05k |
| pipeline_tag | string | 48 classes |
| createdAt | unknown | |
| card | string | lengths 1 to 901k |
alnrg2arg/blockchainlabs_7B_merged_test2_4_prune
alnrg2arg
"2024-01-24T14:25:34Z"
2,397
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "pruning", "alnrg2arg/blockchainlabs_7B_merged_test2_4", "mlabonne/NeuralBeagle14-7B", "udkai/Turdus", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-18T04:35:23Z"
--- license: cc-by-nc-4.0 tags: - merge - mergekit - lazymergekit - pruning - alnrg2arg/blockchainlabs_7B_merged_test2_4 - mlabonne/NeuralBeagle14-7B - udkai/Turdus --- # blockchainlabs_7B_merged_test2_4_prune blockchainlabs_7B_merged_test2_4_prune is a pruned model based on alnrg2arg/blockchainlabs_7B_merged_test2_4, which is a model merged from the following models using [mergekit](https://github.com/cg123/mergekit): * [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) * [udkai/Turdus](https://huggingface.co/udkai/Turdus) Pruning kit used: [wanda](https://github.com/locuslab/wanda?tab=readme-ov-file#ablation-on-obs-weight-update) ## 🧩 Configuration ```json { "_name_or_path": "alnrg2arg/blockchainlabs_7B_merged_test2_4_prun", "architectures": [ "MistralForCausalLM" ], "attention_dropout": 0.0, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 14336, "max_position_embeddings": 32768, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 10000.0, "sliding_window": 4096, "tie_word_embeddings": false, "torch_dtype": "float16", "transformers_version": "4.36.2", "use_cache": false, "vocab_size": 32000 } ```
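The card above does not include a usage snippet. As a minimal sketch (assuming the standard `transformers` text-generation API and that the repository ships weights matching the config above), the model could be loaded like any other Mistral-architecture checkpoint:

```python
# Minimal sketch: loading the pruned merge with Hugging Face transformers.
# Assumes the repo id above and a GPU; adjust dtype/device as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "alnrg2arg/blockchainlabs_7B_merged_test2_4_prune"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the config above stores weights in float16
    device_map="auto",
)

prompt = "Explain what weight pruning does to a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```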
legraphista/internlm2-math-plus-1_8b-IMat-GGUF
legraphista
"2024-05-27T16:16:38Z"
2,397
2
gguf
[ "gguf", "math", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "text-generation", "en", "zh", "base_model:internlm/internlm2-math-plus-1_8b", "license:other", "region:us" ]
text-generation
"2024-05-27T13:41:43Z"
--- base_model: internlm/internlm2-math-plus-1_8b inference: false language: - en - zh library_name: gguf license: other pipeline_tag: text-generation quantized_by: legraphista tags: - math - quantized - GGUF - imatrix - quantization - imat - imatrix - static --- # internlm2-math-plus-1_8b-IMat-GGUF _Llama.cpp imatrix quantization of internlm/internlm2-math-plus-1_8b_ Original Model: [internlm/internlm2-math-plus-1_8b](https://huggingface.co/internlm/internlm2-math-plus-1_8b) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3008](https://github.com/ggerganov/llama.cpp/releases/tag/b3008) IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) - [internlm2-math-plus-1_8b-IMat-GGUF](#internlm2-math-plus-1-8b-imat-gguf) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [internlm2-math-plus-1_8b.Q8_0.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q8_0.gguf) | Q8_0 | 2.01GB | ✅ Available | ⚪ Static | 📦 No | [internlm2-math-plus-1_8b.Q6_K.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q6_K.gguf) | Q6_K | 1.55GB | ✅ Available | ⚪ Static | 📦 No | [internlm2-math-plus-1_8b.Q4_K.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q4_K.gguf) | Q4_K | 1.17GB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.Q3_K.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q3_K.gguf) | Q3_K | 964.41MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.Q2_K.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q2_K.gguf) | Q2_K | 771.89MB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [internlm2-math-plus-1_8b.FP16.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.FP16.gguf) | F16 | 3.78GB | ✅ Available | ⚪ Static | 📦 No | [internlm2-math-plus-1_8b.BF16.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.BF16.gguf) | BF16 | 3.78GB | ✅ Available | ⚪ Static | 📦 No | [internlm2-math-plus-1_8b.Q5_K.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q5_K.gguf) | Q5_K | 1.36GB | ✅ Available | ⚪ Static | 📦 No | 
[internlm2-math-plus-1_8b.Q5_K_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q5_K_S.gguf) | Q5_K_S | 1.33GB | ✅ Available | ⚪ Static | 📦 No | [internlm2-math-plus-1_8b.Q4_K_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q4_K_S.gguf) | Q4_K_S | 1.12GB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.Q3_K_L.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q3_K_L.gguf) | Q3_K_L | 1.03GB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.Q3_K_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q3_K_S.gguf) | Q3_K_S | 888.26MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.Q2_K_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.Q2_K_S.gguf) | Q2_K_S | 727.45MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ4_NL.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ4_NL.gguf) | IQ4_NL | 1.11GB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ4_XS.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ4_XS.gguf) | IQ4_XS | 1.06GB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ3_M.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ3_M.gguf) | IQ3_M | 915.00MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ3_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ3_S.gguf) | IQ3_S | 888.26MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ3_XS.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ3_XS.gguf) | IQ3_XS | 852.87MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ3_XXS.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ3_XXS.gguf) | IQ3_XXS | 787.59MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ2_M.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ2_M.gguf) | IQ2_M | 719.96MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ2_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ2_S.gguf) | IQ2_S | 679.06MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ2_XS.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ2_XS.gguf) | IQ2_XS | 635.43MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ2_XXS.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ2_XXS.gguf) | IQ2_XXS | 591.39MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ1_M.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ1_M.gguf) | IQ1_M | 540.28MB | ✅ Available | 🟢 IMatrix | 📦 No | [internlm2-math-plus-1_8b.IQ1_S.gguf](https://huggingface.co/legraphista/internlm2-math-plus-1_8b-IMat-GGUF/blob/main/internlm2-math-plus-1_8b.IQ1_S.gguf) | IQ1_S | 509.60MB | ✅ 
Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/internlm2-math-plus-1_8b-IMat-GGUF --include "internlm2-math-plus-1_8b.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/internlm2-math-plus-1_8b-IMat-GGUF --include "internlm2-math-plus-1_8b.Q8_0/*" --local-dir internlm2-math-plus-1_8b.Q8_0 # see FAQ for merging GGUFs ``` --- ## Inference ### Simple chat template ``` <s><|im_start|>user Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|> <|im_start|>assistant Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|> <|im_start|>user What about solving a 2x + 3 = 7 equation?<|im_end|> ``` ### Chat template with system prompt ``` <s><|im_start|>system You are a helpful AI.<|im_end|> <|im_start|>user Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|> <|im_start|>assistant Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|> <|im_start|>user What about solving a 2x + 3 = 7 equation?<|im_end|> ``` ### Llama.cpp ``` llama.cpp/main -m internlm2-math-plus-1_8b.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `internlm2-math-plus-1_8b.Q8_0`) 3. Run `gguf-split --merge internlm2-math-plus-1_8b.Q8_0/internlm2-math-plus-1_8b.Q8_0-00001-of-XXXXX.gguf internlm2-math-plus-1_8b.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
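If you prefer the Python API to the huggingface-cli commands shown in the card above, a roughly equivalent sketch using `huggingface_hub.hf_hub_download` (same repository and quant filename as in the CLI example) would be:

```python
# Sketch: downloading a single quant file with the huggingface_hub Python API,
# mirroring the huggingface-cli example in the card above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/internlm2-math-plus-1_8b-IMat-GGUF",
    filename="internlm2-math-plus-1_8b.Q8_0.gguf",
    local_dir=".",
)
print(f"GGUF saved to {path}")
```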
Echelon-AI/hindi-medbot-llama3-GGUF
Echelon-AI
"2024-06-20T12:27:01Z"
2,397
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "arxiv:2212.04089", "base_model:Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1", "base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha", "endpoints_compatible", "region:us" ]
null
"2024-06-19T19:17:34Z"
--- base_model: - Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1 - HPAI-BSC/Llama3-Aloe-8B-Alpha library_name: transformers tags: - mergekit - merge --- # llama3-hindi-medbotlm-v0.3 This is a GGUF of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [HPAI-BSC/Llama3-Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha) as a base. ### Models Merged The following models were included in the merge: * [Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1](https://huggingface.co/Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1 parameters: weight: 0.40 - model: HPAI-BSC/Llama3-Aloe-8B-Alpha parameters: weight: 0.60 base_model: HPAI-BSC/Llama3-Aloe-8B-Alpha merge_method: task_arithmetic dtype: bfloat16 ```
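For readers unfamiliar with the merge method named above: task arithmetic builds the merged weights as the base weights plus a weighted sum of each model's delta from that base. The sketch below is a toy illustration of that idea on plain state dicts; it is not the mergekit implementation, and the stand-in tensors replace the real checkpoints listed in the YAML.

```python
# Toy illustration of task-arithmetic merging on state dicts.
# Not the mergekit implementation; ignores tokenizer and shape mismatches.
import torch

def task_arithmetic_merge(base, finetuned, weights):
    """merged = base + sum_i w_i * (finetuned_i - base), computed per tensor."""
    merged = {}
    for name, base_tensor in base.items():
        delta = sum(
            w * (ft[name].float() - base_tensor.float())
            for ft, w in zip(finetuned, weights)
        )
        merged[name] = (base_tensor.float() + delta).to(torch.bfloat16)
    return merged

# Tiny random "models" standing in for the real checkpoints.
base = {"layer.weight": torch.randn(4, 4)}
model_a = {"layer.weight": base["layer.weight"] + 0.1 * torch.randn(4, 4)}
model_b = {"layer.weight": base["layer.weight"] + 0.1 * torch.randn(4, 4)}
merged = task_arithmetic_merge(base, [model_a, model_b], weights=[0.40, 0.60])
print(merged["layer.weight"].dtype, merged["layer.weight"].shape)
```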
mradermacher/llama-3-8b-chat-doctor-GGUF
mradermacher
"2024-06-04T12:31:18Z"
2,396
0
transformers
[ "transformers", "gguf", "en", "base_model:arthitsaha/llama-3-8b-chat-doctor", "endpoints_compatible", "region:us" ]
null
"2024-06-04T11:29:14Z"
--- base_model: arthitsaha/llama-3-8b-chat-doctor language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/arthitsaha/llama-3-8b-chat-doctor <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-8b-chat-doctor-GGUF/resolve/main/llama-3-8b-chat-doctor.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
llm-jp/llm-jp-13b-v2.0
llm-jp
"2024-04-30T02:28:39Z"
2,395
12
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "ja", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-23T02:51:00Z"
--- license: apache-2.0 language: - en - ja programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript library_name: transformers pipeline_tag: text-generation inference: false --- # llm-jp-13b-v2.0 This repository provides large language models developed by [LLM-jp](https://llm-jp.nii.ac.jp/), a collaborative project launched in Japan. | Model Variant | | :--- | |**Instruction models**| | [llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) | | [llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) | | [llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-instruct-full-ac_001_16x-dolly-ichikara_004_001_single-oasst-oasst2-v2.0) | | | | :--- | |**Pre-trained models**| | [llm-jp-13b-v2.0](https://huggingface.co/llm-jp/llm-jp-13b-v2.0) | Checkpoints format: Hugging Face Transformers ## Required Libraries and Their Versions - torch>=2.3.0 - transformers>=4.40.1 - tokenizers>=0.19.1 - accelerate>=0.29.3 - flash-attn>=2.5.8 ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v2.0") model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-v2.0", device_map="auto", torch_dtype=torch.bfloat16) text = "自然言語処理とは何か" tokenized_input = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt").to(model.device) with torch.no_grad(): output = model.generate( tokenized_input, max_new_tokens=100, do_sample=True, top_p=0.95, temperature=0.7, repetition_penalty=1.05, )[0] print(tokenizer.decode(output)) ``` ## Model Details - **Model type:** Transformer-based Language Model - **Total seen tokens:** 256B |Model|Params|Layers|Hidden size|Heads|Context length| |:---:|:---:|:---:|:---:|:---:|:---:| |13b model|13b|40|5120|40|4096| ## Training - **Pre-training:** - **Hardware:** 128 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/)) - **Software:** Megatron-LM - **Instruction tuning:** - **Hardware:** 8 A100 40GB GPUs ([mdx cluster](https://mdx.jp/en/)) - **Software:** [TRL](https://github.com/huggingface/trl) and [DeepSpeed](https://github.com/microsoft/DeepSpeed) ## Tokenizer The tokenizer of this model is based on the [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model. The vocabulary entries were converted from [`llm-jp-tokenizer v2.2 (100k: code20K_en40K_ja60K.ver2.2)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.2). Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary). - **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model - **Training algorithm:** Merging Code/English/Japanese vocabularies constructed with SentencePiece Unigram byte-fallback and re-estimating scores with the EM algorithm. - **Training data:** A subset of the datasets for model pre-training - **Vocabulary size:** 96,867 (mixed vocabulary of Japanese, English, and source code) - The actual size of vocabulary in the pretrained model is 97,024 due to round-up to multiples of 256. 
## Datasets ### Pre-training The models have been pre-trained using a blend of the following datasets. | Language | Dataset | Tokens| |:---|:---|---:| |Japanese|[Wikipedia](https://huggingface.co/datasets/wikipedia)|1.4B ||[Common Crawl](https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v2)|130.7B |English|[Wikipedia](https://huggingface.co/datasets/wikipedia)|4.7B ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|110.3B |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|8.7B ### Instruction tuning The models have been fine-tuned on the following datasets. | Language | Dataset | description | |:---|:---|:---| |Japanese|[ichikara-instruction-004-001](https://liat-aip.sakura.ne.jp/wp/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf%e4%bd%9c%e6%88%90/llm%e3%81%ae%e3%81%9f%e3%82%81%e3%81%ae%e6%97%a5%e6%9c%ac%e8%aa%9e%e3%82%a4%e3%83%b3%e3%82%b9%e3%83%88%e3%83%a9%e3%82%af%e3%82%b7%e3%83%a7%e3%83%b3%e3%83%87%e3%83%bc%e3%82%bf-%e5%85%ac%e9%96%8b/)| A manually constructed Japanese instruction dataset | | |[answer-carefully-001](https://liat-aip.sakura.ne.jp/wp/answercarefully-dataset/)| A manually constructed Japanese instruction dataset focusing on LLMs' safety | | |[databricks-dolly-15k-ja](https://huggingface.co/datasets/llm-jp/databricks-dolly-15k-ja)| [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) translated into Japanese using DeepL | | |[oasst1-21k-ja](https://huggingface.co/datasets/llm-jp/oasst1-21k-ja)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) translated into Japanese using DeepL | | |[oasst2-33k-ja](https://huggingface.co/datasets/llm-jp/oasst2-33k-ja)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) translated into Japanese using DeepL | |English |[databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) | - | | |[oasst1-21k-en](https://huggingface.co/datasets/llm-jp/oasst1-21k-en)| A subset of [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) | | |[oasst2-33k-en](https://huggingface.co/datasets/llm-jp/oasst2-33k-en)| A subset of [oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) | ## Evaluation You can view the evaluation results of several LLMs on this [leaderboard](http://wandb.me/llm-jp-leaderboard). We used [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) (v1.3.0) for the evaluation. Besides, we used LLM-as-a-judge frameworks, [Japanese Vicuna QA Benchmark](https://github.com/ku-nlp/ja-vicuna-qa-benchmark/) and [Japanese MT Bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge), for evaluation. For details, please refer to [our technical blog](https://llm-jp.nii.ac.jp/blog/2024/04/30/v2.0-release.html) (in Japanese). ## Risks and Limitations The models released here are still in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations. ## Send Questions to llm-jp(at)nii.ac.jp ## License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## Model Card Authors *The names are listed in alphabetical order.* Namgi Han, Tatsuya Hiraoka, Hirokazu Kiyomaru, Takashi Kodama, and Hiroshi Matsuda.
mradermacher/NeuralPoppy-EVO-L3-8B-GGUF
mradermacher
"2024-06-03T06:33:47Z"
2,395
0
transformers
[ "transformers", "gguf", "en", "base_model:zeroblu3/NeuralPoppy-EVO-L3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-03T04:21:27Z"
--- base_model: zeroblu3/NeuralPoppy-EVO-L3-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/zeroblu3/NeuralPoppy-EVO-L3-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NeuralPoppy-EVO-L3-8B-GGUF/resolve/main/NeuralPoppy-EVO-L3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
MaziyarPanahi/mergekit-slerp-sclthpf-GGUF
MaziyarPanahi
"2024-06-17T18:22:40Z"
2,395
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-sclthpf" ]
text-generation
"2024-06-17T17:54:59Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:NousResearch/Hermes-2-Pro-Mistral-7B - base_model:WizardLM/WizardMath-7B-V1.1 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-sclthpf-GGUF base_model: mergekit-community/mergekit-slerp-sclthpf inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-sclthpf-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-sclthpf-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-sclthpf](https://huggingface.co/mergekit-community/mergekit-slerp-sclthpf) ## Description [MaziyarPanahi/mergekit-slerp-sclthpf-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-sclthpf-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-sclthpf](https://huggingface.co/mergekit-community/mergekit-slerp-sclthpf). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
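Since llama-cpp-python is listed above as a compatible client, here is a minimal sketch of loading one of these GGUF files with it. The quant filename is an assumption for illustration; pick the actual file from the repository's file list.

```python
# Sketch: running a GGUF quant from this repo with llama-cpp-python.
# The filename below is an assumption; use one from the repository's file list.
from llama_cpp import Llama

llm = Llama(
    model_path="mergekit-slerp-sclthpf.Q4_K_M.gguf",  # downloaded beforehand
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers if built with GPU support
)
out = llm(
    "Q: What does SLERP stand for in model merging?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```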
gaunernst/vit_tiny_patch8_112.arcface_ms1mv3
gaunernst
"2024-04-22T14:04:21Z"
2,394
0
timm
[ "timm", "safetensors", "image-feature-extraction", "dataset:gaunernst/ms1mv3-recordio", "region:us" ]
image-feature-extraction
"2024-04-22T14:02:44Z"
--- datasets: - gaunernst/ms1mv3-recordio library_name: timm tags: - image-feature-extraction - timm --- # Model card for gaunernst/vit_tiny_patch8_112.arcface_ms1mv3 A Vision Transformer (ViT) for face recognition, trained on the MS1MV3 dataset. The model was trained using this repo: https://github.com/gau-nernst/timm-face. It is fully compatible with `timm`. ## Usage ```python import timm import torch import torch.nn.functional as F model = timm.create_model("hf_hub:gaunernst/vit_tiny_patch8_112.arcface_ms1mv3", pretrained=True).eval() embs = model(torch.randn(1, 3, 112, 112)) # output shape (1, 512) embs = F.normalize(embs, dim=1) # model output is not normalized ```
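A natural follow-up to the usage snippet above is comparing two faces: since the embeddings are L2-normalized, cosine similarity reduces to a dot product. A small sketch follows; the random tensors stand in for two aligned 112x112 face crops, and the 0.3 threshold is illustrative, not a published operating point.

```python
# Sketch: face verification with the embedding model above.
# Random tensors stand in for two preprocessed face crops;
# the 0.3 decision threshold is illustrative, not a tuned operating point.
import timm
import torch
import torch.nn.functional as F

model = timm.create_model(
    "hf_hub:gaunernst/vit_tiny_patch8_112.arcface_ms1mv3", pretrained=True
).eval()

faces = torch.randn(2, 3, 112, 112)  # stand-ins for two aligned face crops
with torch.no_grad():
    embs = F.normalize(model(faces), dim=1)  # (2, 512), unit-norm rows

similarity = (embs[0] @ embs[1]).item()  # cosine similarity of the two faces
print(f"cosine similarity: {similarity:.3f}",
      "match" if similarity > 0.3 else "no match")
```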
autopilot-ai/EthicalEye
autopilot-ai
"2023-07-11T20:11:30Z"
2,392
3
transformers
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "en", "fr", "hi", "gu", "bn", "ml", "mr", "pa", "it", "es", "kn", "as", "af", "ru", "ro", "sq", "ar", "am", "az", "bs", "bh", "bg", "bo", "ca", "ce", "zh", "cr", "hr", "cs", "da", "de", "nl", "el", "et", "eo", "fi", "fj", "fa", "gl", "ga", "ha", "ht", "he", "hu", "hy", "id", "is", "ja", "jv", "ka", "kk", "km", "ko", "ks", "ku", "ky", "la", "lb", "lt", "lv", "mk", "mn", "ms", "mi", "mt", "ne", "no", "or", "om", "ps", "pl", "pt", "qu", "sa", "sm", "gd", "sr", "sn", "sd", "si", "sk", "sl", "so", "su", "sw", "sv", "tg", "ta", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-06-01T10:36:08Z"
--- license: apache-2.0 requirements: - sentencepiece: >- (if not installed install using `pip install sentencepiece`, and restart runtime) library_name: transformers pipeline_tag: text-classification language: - en - fr - hi - gu - bn - ml - mr - pa - it - es - kn - as - af - ru - ro - sq - ar - am - az - bs - bh - bg - bo - ca - ce - zh - cr - hr - cs - da - de - nl - el - et - eo - fi - fj - fa - gl - ga - ha - ht - he - hu - hy - id - is - ja - jv - ka - kk - km - ko - ks - ku - ky - la - lb - lt - lv - mk - mn - ms - mi - mt - ne - 'no' - or - om - ps - pl - pt - qu - sa - sm - gd - sr - sn - sd - si - sk - sl - so - su - sw - sv - tg - ta --- ## Details - Model Name: Ethical Eye - Description: Ethical Eye is an open-source AI model developed by AutopilotAI. It is designed to flag and analyze user-generated content for harmful or unethical behavior, providing a last layer of decision-making to assist AI systems in promoting ethical and moral actions. The model leverages advanced techniques such as text classification, toxicity analysis, and cross-lingual NLP to detect abuse, obscene language, and harmful or unethical comments in multiple languages. ## How to use ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("autopilot-ai/EthicalEye") model = AutoModelForSequenceClassification.from_pretrained("autopilot-ai/EthicalEye") ``` ## Intended Use - Primary Use Case: The Ethical Eye model is primarily intended to be used as a tool to flag or block users exhibiting harmful or unethical behavior on various platforms. It aims to assist developers, especially those with limited experience in NLP, in enforcing ethical standards and creating a safer environment for users. - User Expertise: The model is designed to be accessible to developers with various levels of NLP expertise, including those with limited experience in the field. - Limitations: While Ethical Eye provides valuable insights and analysis, it is essential to note that it should be used as an aid and not as the sole determinant of ethical decision-making. It may have limitations in understanding context-specific nuances and may require continuous improvement and customization for specific domains or languages. ## Model Details - Architecture: Ethical Eye is built using PyTorch and utilizes the Transformers library. It employs the XLM-Roberta architecture, which enables cross-lingual understanding and transfer learning. - Developed by: [Khush Patel](https://www.linkedin.com/in/khush-patel-kp/), [Jayveersinh Raj](https://www.linkedin.com/in/jayveersinh-raj-67694222a/) - License: The Ethical Eye model is released under the Apache 2.0 license, granting users the freedom to use, modify, and distribute the model according to the terms of the license. ## Use Cases - Content Moderation: Ethical Eye can be integrated into content moderation systems to automatically flag and block user-generated content that contains abusive language, hate speech, or other forms of harmful or unethical behavior. It helps platforms maintain a safe and respectful environment for their users. - Social Media Platforms: Social media platforms can utilize Ethical Eye to automatically detect and filter out toxic comments, obscenities, and offensive content in multiple languages. This helps to create a more positive and inclusive online community. 
- Chatbots and Virtual Assistants: By incorporating Ethical Eye into chatbots and virtual assistants, AI systems can ensure that their responses align with ethical guidelines. It helps prevent AI agents from engaging in inappropriate or offensive conversations with users. - Online Forums and Discussion Boards: Ethical Eye can be applied to online forums and discussion boards to monitor user interactions and identify potential instances of harassment, bullying, or unethical behavior. This allows moderators to take appropriate actions to maintain a healthy and respectful environment. - E-commerce Platforms: E-commerce platforms can utilize Ethical Eye to automatically identify and block reviews or comments that contain false information, spam, or unethical practices. This helps maintain the integrity of the platform and ensures honest and reliable user feedback. - Educational Platforms: Ethical Eye can be used in educational platforms to flag and address instances of cyberbullying, inappropriate language, or offensive content in student discussions and comments. It supports the creation of a safe and respectful learning environment. - AI Reinforcement Learning: The Ethical Eye model can serve as a critic in reinforcement learning scenarios, providing feedback on the ethical implications of actions taken by AI agents. It assists in developing AI systems that not only optimize for task performance but also align with ethical guidelines and societal norms. ## Considerations for Deployment - Hardware Requirements: The Ethical Eye model can be deployed on hardware configurations suitable for running deep learning models. Specific requirements may depend on the scale of deployment and the desired performance. - Dependencies: The model relies on PyTorch, Transformers, and XLM-Roberta libraries. Refer to the model documentation for specific version requirements. - Integration: Ethical Eye can be integrated into existing AI systems and platforms using the provided APIs and guidelines. Additional customization may be necessary to adapt the model to specific requirements. - Ethical and Legal Considerations: While Ethical Eye aims to promote ethical behavior, it is important to acknowledge that it may have limitations and biases inherent in its training data. Developers should exercise caution and consider the legal and ethical implications of relying solely on the model's outputs without human oversight.
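The "How to use" section above only loads the model. Below is a minimal sketch of actually scoring a piece of text; the label names and their ordering are an assumption, so read them from the model's `id2label` mapping rather than relying on this example.

```python
# Sketch: scoring text with the Ethical Eye classifier.
# Label names/ordering are an assumption; read them from model.config.id2label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("autopilot-ai/EthicalEye")
model = AutoModelForSequenceClassification.from_pretrained("autopilot-ai/EthicalEye")

text = "You are completely worthless and everyone hates you."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p.item():.3f}")
```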
maywell/Synatra-7B-Instruct-v0.2
maywell
"2023-10-24T12:56:23Z"
2,392
6
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-12T02:29:48Z"
--- language: - ko library_name: transformers pipeline_tag: text-generation license: cc-by-nc-4.0 --- # **Synatra-7B-Instruct-v0.2** Made by StableFluffy **Contact (Do not contact for personal matters.)** Discord : is.maywell Telegram : AlzarTakkarsen ## License This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only. The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license is kept in any parent repository and the non-commercial use statute remains, regardless of other models' licences. The licence may change once a new model is released. If you want to use this model for commercial purposes, contact me. ## Model Details **Base Model** [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) **Trained On** A6000 48GB * 8 ## TODO - Build an RP-focused fine-tuned model - Refine the datasets - Improve language understanding - Strengthen common-sense knowledge - Change the tokenizer ## Instruction format In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id. E.g. ``` text = "<s>[INST] 아이작 뉴턴의 업적을 알려줘. [/INST]" ``` # **Model Benchmark** ## Ko-LLM-Leaderboard | Model | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | Avg | --- | --- | --- | --- | --- | --- | --- | kyujinpy/KoT-platypus2-13B (No. 1 at 2023/10/12) | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 | 49.55 | Synatra-V0.1-7B-Instruct | 41.72 | 49.28 | 43.27 | 43.75 | 39.32 | 43.47 | **Synatra-7B-Instruct-v0.2** | **41.81** | **49.35** | **43.99** | **45.77** | **42.96** | **44.78** Stronger on Ko-MMLU, but noticeably weaker on Ko-CommonGen V2. # **Implementation Code** Since chat_template already contains the instruction format above, you can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-V0.1-7B") tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-V0.1-7B") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` If you run it on oobabooga, your prompt would look like this. ``` [INST] 링컨에 대해서 알려줘. [/INST] ``` > Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) ---
maywell/Synatra-7B-v0.3-base
maywell
"2023-10-29T11:17:06Z"
2,392
6
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-28T00:56:03Z"
--- language: - ko library_name: transformers pipeline_tag: text-generation license: cc-by-nc-4.0 --- # **Synatra-7B-v0.3-base🐧** ![Synatra-7B-Instruct-v0.3](./Synatra.png) ## Support Me Synatra is a personal project, developed with the resources of a single person. If you like the model, how about contributing a little toward research costs? [<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell) Want to be a sponsor? Contact me on Telegram **AlzarTakkarsen** # **License** This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only. The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license is kept in any parent repository and the non-commercial use statute remains, regardless of other models' licences. The licence may change once a new model is released. If you want to use this model for commercial purposes, contact me. # **Model Details** **Base Model** [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) **Trained On** A6000 48GB * 8 **Instruction format** It follows [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format and **Alpaca (No-Input)** format. **TODO** - ~~``Build an RP-focused fine-tuned model``~~ ✅ - ~~``Refine the datasets``~~ ✅ - Improve language understanding - ~~``Strengthen common-sense knowledge``~~ ✅ - Change the tokenizer # **Model Benchmark** ## Ko-LLM-Leaderboard On Benchmarking... # **Implementation Code** Since chat_template already contains the instruction format above, you can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-base") tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-base") messages = [ {"role": "user", "content": "바나나는 원래 하얀색이야?"}, ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ```
d-matrix/gpt2
d-matrix
"2024-06-12T00:07:16Z"
2,392
3
null
[ "text-generation", "en", "dataset:wikitext", "dataset:ptb_text_only", "license:apache-2.0", "model-index", "region:us" ]
text-generation
"2023-11-13T20:30:48Z"
--- license: apache-2.0 datasets: - wikitext - ptb_text_only language: - en metrics: - perplexity pipeline_tag: text-generation model-index: - name: distilgpt2 results: - task: type: text-generation dataset: name: penn_treebank type: ptb_text_only metrics: - name: perlexity@distilgpt2:BASELINE type: dmx-perlexity value: 63.45857238769531 - name: perlexity@distilgpt2:BASIC type: dmx-perlexity value: 64.36720275878906 - task: type: text-generation dataset: name: wikitext2 type: wikitext-2-raw-v1 metrics: - name: perlexity@distilgpt2:BASELINE type: dmx-perlexity value: 46.05925369262695 - name: perlexity@distilgpt2:BASIC type: dmx-perlexity value: 46.570838928222656 --- This is a d-Matrix functional reference of the GPT2 model family, with the following *revisions*: - [`distilgpt2`](https://huggingface.co/distilbert/distilgpt2) - [`gpt2`](https://huggingface.co/openai-community/gpt2) - [`gpt2-medium`](https://huggingface.co/openai-community/gpt2-medium) - [`gpt2-large`](https://huggingface.co/openai-community/gpt2-large) - [`gpt2-xl`](https://huggingface.co/openai-community/gpt2-xl) The reference provides the following functional *configurations*: Configuration | Explanation :-- | :-- **`BASELINE`** | a reference functionally equivalent to the original model **`BASIC`** | all linear algebraic operands quantized to `BFP16-64`, and all other operations transformed to approximated kernel simulations ### Usage Install d-Matrix [ML Tools](https://github.com/d-matrix-ai/dmx-mltools) first. ```sh pip install dmx-mltools ``` The following is an example model and its evaluation. ```python from mltools.dmx import pipeline pipe = pipeline( task="text-generation", model="d-matrix/gpt2", revision="gpt2-xl", # see above for other variants dmx_config="BASELINE", # see above for other variants ) results = pipe.evaluate( metric="d-matrix/dmx_perplexity", dataset="wikitext", dataset_version="wikitext-2-raw-v1", ) ``` ### Evaluation results - `perplexity` on `penn_treebank` Revision \ Configuration | **`BASELINE`** | **`BASIC`** :-- | --: | --: `distilgpt2` | 63.46 | 64.13 `gpt2` | 35.77 | 35.93 `gpt2-medium` | 27.06 | 27.10 `gpt2-large` | 23.03 | 23.04 `gpt2-xl` | 21.01 | 21.02 - `perplexity` on `wikitext2` Revision \ Configuration | **`BASELINE`** | **`BASIC`** :-- | --: | --: `distilgpt2` | 46.06 | 46.44 `gpt2` | 29.94 | 30.08 `gpt2-medium` | 21.71 | 21.73 `gpt2-large` | 19.42| 19.43 `gpt2-xl` | 17.40| 17.40 - `perplexity` on `wikitext103` Revision \ Configuration | **`BASELINE`** | **`BASIC`** :-- | --: | --: `distilgpt2` | 46.06 | 46.44 `gpt2` | 29.94 |30.08 `gpt2-medium` | 21.71 | 21.73 `gpt2-large` | 19.43 | 19.43 `gpt2-xl` | 17.40 | 17.40
psyche/kogpt
psyche
"2023-11-18T10:17:25Z"
2,391
4
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "generation", "en", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-15T00:04:20Z"
--- language: - en - ko tags: - generation license: apache-2.0 --- A Korean GPT2 model pretrained with n_ctx expanded to 2048 (and the embedding dimension expanded to 1536). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_psyche__kogpt) | Metric | Value | |-----------------------|---------------------------| | Avg. | 24.27 | | ARC (25-shot) | 21.16 | | HellaSwag (10-shot) | 28.11 | | MMLU (5-shot) | 26.56 | | TruthfulQA (0-shot) | 42.06 | | Winogrande (5-shot) | 49.09 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 2.89 |
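The card above reports benchmark scores but no usage snippet. A minimal sketch, assuming the standard GPT-2 causal-LM interface in `transformers`, would be:

```python
# Minimal sketch: text generation with psyche/kogpt via transformers.
# Assumes the standard GPT-2 causal-LM interface; adjust sampling settings to taste.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("psyche/kogpt")
model = AutoModelForCausalLM.from_pretrained("psyche/kogpt")

prompt = "한국의 수도는"  # "The capital of Korea is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```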
mosaicml/mpt-7b-8k
mosaicml
"2024-03-05T20:23:35Z"
2,391
26
transformers
[ "transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "StreamingDatasets", "custom_code", "dataset:mc4", "dataset:c4", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:bigcode/the-stack", "dataset:allenai/s2orc", "arxiv:2108.12409", "arxiv:2302.13971", "arxiv:2205.14135", "arxiv:2010.04245", "arxiv:1909.08053", "arxiv:2302.06675", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-30T19:56:15Z"
--- license: apache-2.0 tags: - Composer - MosaicML - llm-foundry - StreamingDatasets datasets: - mc4 - c4 - togethercomputer/RedPajama-Data-1T - bigcode/the-stack - allenai/s2orc inference: false --- # MPT-7B-8k MPT-7B-8k is a decoder-style transformer pretrained starting from MPT-7B, but updating the sequence length to 8k and training for an additional 500B tokens, resulting in a total of 1.5T tokens of text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B-8k is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference. These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)). Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer). This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference. ### How is this model different? MPT-7B-8k is * **Licensed for the possibility of commercial use.** * **Trained on a large amount of data** (1.5T tokens like [XGen](https://huggingface.co/Salesforce/xgen-7b-8k-base) vs. 1T for [LLaMA](https://arxiv.org/abs/2302.13971), 1T for [MPT-7B](https://www.mosaicml.com/blog/mpt-7b), 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)). * **Prepared to handle long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409). With ALiBi, the model can extrapolate beyond the 8k training sequence length to up to 10k, and with a few million tokens it can be finetuned to extrapolate much further. * **Capable of fast training and inference** via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer) * **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry) ### Models finetuned off MPT-7B-8k: The following models are finetuned on MPT-7B-8k: * [MPT-7B-8k-Instruct](https://huggingface.co/mosaicml/mpt-7b-8k-instruct): a model for long-form instruction following (especially summarization and question-answering). Built by finetuning MPT-7B-8k on several carefully curated datasets. * License: Apache 2.0 * [MPT-7B-8k-Chat](https://huggingface.co/mosaicml/mpt-7b-8k-chat): a chatbot-like model for dialogue generation. Built by finetuning MPT-7B-8k on approximately 1.5B tokens of chat data. * License: _CC-By-NC-SA-4.0_ ## Model Date July 18, 2023 ## Model License Apache-2.0 ## Documentation * [Blog post: MPT-7B-8k](https://www.mosaicml.com/blog/long-context-mpt-7b-8k) * [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/) * Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)! 
## How to Use This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning. ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'mosaicml/mpt-7b-8k', trust_remote_code=True ) ``` Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package. `MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more. To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision: ```python import torch import transformers name = 'mosaicml/mpt-7b-8k' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.attn_config['attn_impl'] = 'triton' config.init_device = 'cuda:0' # For fast initialization directly on GPU! model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, torch_dtype=torch.bfloat16, # Load model weights in bfloat16 trust_remote_code=True ) ``` Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example: ```python import transformers name = 'mosaicml/mpt-7b-8k' config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True) config.max_seq_len = 10000 # (input + output) tokens can now be up to 10000 model = transformers.AutoModelForCausalLM.from_pretrained( name, config=config, trust_remote_code=True ) ``` This model was trained with the MPT-7B-8k tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-7b-8k') ``` The model can then be used, for example, within a text-generation pipeline. Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html). ```python from transformers import pipeline with torch.autocast('cuda', dtype=torch.bfloat16): inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda') outputs = model.generate(**inputs, max_new_tokens=100) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # or using the HF pipeline pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0') with torch.autocast('cuda', dtype=torch.bfloat16): print( pipe('Here is a recipe for vegan banana bread:\n', max_new_tokens=100, do_sample=True, use_cache=True)) ``` ## Model Description The architecture is a modification of a standard decoder-only transformer. 
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases

| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |

## Training Data

### Streaming Datasets

Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.

### Data Mix

The model was trained for 1.5T tokens in total. First it was trained for 1T tokens (with batch size 1760 and sequence length 2048) on the following data mix:

#### Data Mix for Original 1T Tokens Used to Train MPT-7B

| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |

#### Data Mix for Additional 500B Tokens Used to Further Train MPT-7B-8k

We took 80B tokens from document samples that were longer than 4096 tokens, and 120B tokens with varying document sample lengths that matched the "baseline" length distribution for a total of 200B tokens in a single dataset.
We then trained MPT-7B for 500B tokens with a maximum sequence length of 8192, resulting in MPT-7B-8k. Since we trained for 500B tokens using a 200B-token dataset, nearly every subset was trained on for 2.5 epochs.
| Sequence Length Distribution | Number of Tokens in Source (Billion) | Proportion | Effective Number of Tokens (Billion) | Epochs |
|---|---|---|---|---|
| mC4 3.1.0 - English (200+ words) - Baseline | 33.60 | 16.80% | 84.00 | 2.50 |
| mC4 3.1.0 - English (200+ words) - ≥4096 tokens | 23.04 | 11.52% | 57.60 | 2.50 |
| c4 - English - SemDedup 80% - Baseline | 30.12 | 15.06% | 75.30 | 2.50 |
| c4 - English - SemDedup 80% - ≥4096 tokens | 0.92 | 0.46% | 2.30 | 2.50 |
| RedPajama - CommonCrawl - Baseline | 8.52 | 4.26% | 21.30 | 2.50 |
| RedPajama - CommonCrawl - ≥4096 tokens | 12.80 | 6.40% | 32.00 | 2.50 |
| The Stack - Selected Languages - Baseline | 30.00 | 15.00% | 75.00 | 2.50 |
| The Stack - Selected Languages - ≥4096 tokens | 10.00 | 5.00% | 25.00 | 2.50 |
| RedPajama - Wikipedia - Baseline | 3.60 | 1.80% | 9.00 | 2.50 |
| RedPajama - Wikipedia - ≥4096 tokens | 1.04 | 0.52% | 2.60 | 2.50 |
| The Stack - Markdown - Baseline | 4.50 | 2.25% | 11.25 | 2.50 |
| The Stack - Markdown - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| Semantic Scholar ORC - Baseline | 3.30 | 1.65% | 8.25 | 2.50 |
| Semantic Scholar ORC - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| RedPajama - Books - Baseline | 3.00 | 1.50% | 7.50 | 2.50 |
| RedPajama - Books - ≥4096 tokens | 8.00 | 4.00% | 20.00 | 2.50 |
| RedPajama - arXiv - Baseline | 1.92 | 0.96% | 4.80 | 2.50 |
| RedPajama - arXiv - ≥4096 tokens | 5.40 | 2.70% | 13.50 | 2.50 |
| RedPajama - StackExchange - Baseline | 1.44 | 0.72% | 3.60 | 2.50 |
| RedPajama - StackExchange - ≥4096 tokens | 1.52 | 1.40% | 7.00 | 4.60 |
| N Training Tokens | 200 | 100.00% | | 2.5 epochs * 200B = 500B tokens |

Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code: (1) it was trained on a diverse mix of data that includes code (The Pile); (2) it applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces; and (3) it contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.

The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.

### Training Configuration

This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B-8k is **not** intended for deployment without finetuning. It should not be used for human-facing interactions without further guardrails and user consent.

MPT-7B-8k can produce factually incorrect output, and should not be relied on to produce factually accurate information. MPT-7B-8k was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## MosaicML Platform

If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://www.mosaicml.com/get-started?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b-8k).

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-7b},
    note      = {Accessed: 2023-03-28}, % change this date
    urldate   = {2023-03-28} % change this date
}
```
SakuraLLM/Sakura-32B-Qwen2beta-v0.9.1-GGUF
SakuraLLM
"2024-05-16T15:25:39Z"
2,391
1
null
[ "gguf", "license:cc-by-nc-sa-4.0", "region:us" ]
null
"2024-05-16T14:06:06Z"
--- license: cc-by-nc-sa-4.0 ---
mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF
mradermacher
"2024-06-17T10:33:37Z"
2,391
0
transformers
[ "transformers", "gguf", "tr", "base_model:Metin/LLaMA-3-8B-Instruct-Abliterated-TR", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-16T16:18:06Z"
--- base_model: Metin/LLaMA-3-8B-Instruct-Abliterated-TR language: - tr library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Metin/LLaMA-3-8B-Instruct-Abliterated-TR <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF/resolve/main/LLaMA-3-8B-Instruct-Abliterated-TR.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing 
some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
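## Example: loading a quant with llama-cpp-python (illustrative)

As a concrete companion to the Usage note above, here is a minimal, illustrative sketch of downloading one of the quants listed in the table and running it with the `llama-cpp-python` bindings. The file name is taken from the Q4_K_M row above; the context size and prompt are arbitrary example values, not recommendations from the original quantizer.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quant listed in the table above
model_path = hf_hub_download(
    repo_id="mradermacher/LLaMA-3-8B-Instruct-Abliterated-TR-GGUF",
    filename="LLaMA-3-8B-Instruct-Abliterated-TR.Q4_K_M.gguf",
)

# Load the GGUF file (example context size of 4096 tokens)
llm = Llama(model_path=model_path, n_ctx=4096)

output = llm("Merhaba, nasılsın?", max_tokens=64)
print(output["choices"][0]["text"])
```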
Locutusque/Hercules-2.5-Mistral-7B
Locutusque
"2024-02-12T16:59:28Z"
2,390
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "not-for-all-audiences", "chemistry", "math", "code", "physics", "dataset:Locutusque/hercules-v2.0", "dataset:Locutusque/hercules-v2.5", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-10T08:23:41Z"
--- license: apache-2.0 library_name: transformers tags: - not-for-all-audiences - chemistry - math - code - physics base_model: mistralai/Mistral-7B-v0.1 datasets: - Locutusque/hercules-v2.0 - Locutusque/hercules-v2.5 model-index: - name: Hercules-2.5-Mistral-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.03 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.79 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 63.49 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 43.44 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 49.05 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hercules-2.5-Mistral-7B name: Open LLM Leaderboard --- # Model Card: Hercules-2.5-Mistral-7B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/aaxvEOjNxHKZ7rRGPolW-.png) ## Model Description Hercules-2.5-Mistral-7B is a fine-tuned language model derived from Mistralai/Mistral-7B-v0.1. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains. The dataset used for fine-tuning, also named Hercules-v2.5, expands upon the diverse capabilities of OpenHermes-2.5 with contributions from numerous curated datasets. This fine-tuning has hercules-v2.5 with enhanced abilities in: - Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology. - Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values. - Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more. 
## Intended Uses & Potential Bias

Hercules-2.5-Mistral-7B is well-suited to the following applications:

- Specialized Chatbots: Creating knowledgeable chatbots and conversational agents in scientific and technical fields.
- Instructional Assistants: Supporting users with educational and step-by-step guidance in various disciplines.
- Code Generation and Execution: Facilitating code execution through function calls, aiding in software development and prototyping.

**Important Note: Although Hercules-v2.5 is carefully constructed, it's important to be aware that the underlying data sources may contain biases or reflect harmful stereotypes. Use this model with caution and consider additional measures to mitigate potential biases in its responses.**

## Limitations and Risks

- Toxicity: The dataset may still contain toxic or harmful examples despite cleaning efforts.
- Hallucinations and Factual Errors: Like other language models, Hercules-2.5-Mistral-7B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise.
- Potential for Misuse: The ability to engage in technical conversations and execute function calls could be misused for malicious purposes.

## Evaluation Metrics

To provide suitable benchmarks for Hercules-2.5-Mistral-7B, consider using a combination of the following metrics:

- Instruction Following: Task-specific evaluation datasets for instruction following in relevant domains (e.g., datasets specifically focused on math problems, code generation, etc.).
- Function Calling: Evaluate the model's accuracy in interpreting and executing function calls with varying inputs and outputs.
- Conversational Quality: Assess the model's ability to maintain coherence, naturalness, and informativeness across conversational turns.

## Training Data

Hercules-2.5-Mistral-7B is fine-tuned from the following sources:

- cognitivecomputations/dolphin (first 300k examples)
- Evol Instruct 70K && 140K
- teknium/GPT4-LLM-Cleaned
- jondurbin/airoboros-3.2
- AlekseyKorshuk/camel-chatml
- CollectiveCognition/chats-data-2023-09-22
- Nebulous/lmsys-chat-1m-smortmodelsonly
- glaiveai/glaive-code-assistant-v2
- glaiveai/glaive-code-assistant
- glaiveai/glaive-function-calling-v2
- garage-bAInd/Open-Platypus
- meta-math/MetaMathQA
- teknium/GPTeacher-General-Instruct
- GPTeacher roleplay datasets
- BI55/MedText
- pubmed_qa labeled subset
- M4-ai/LDJnr_combined_inout_format
- Unnatural Instructions
- CollectiveCognition/chats-data-2023-09-27
- CollectiveCognition/chats-data-2023-10-16

## Training Procedure

- This model was trained on 8 Kaggle TPUs, using PyTorch/XLA SPMD for high MXU efficiency. There was no expense on my end (meaning you can reproduce this too!)
- A learning rate of 2e-06 with the Adam optimizer. A linear scheduler was used, with an end factor of 0.3. A low learning rate was used to prevent exploding gradients.
- No mixed precision was used, with the default dtype being bfloat16.
- Trained on 200,000 examples of Hercules-v2.0 and 100,000 examples of Hercules-v2.5.
- No model parameters were frozen.
- This model was trained on OpenAI's ChatML prompt format.
Because this model has function calling capabilities, the prompt format is slightly different, here's what it would look like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>``` This model was fine-tuned using the TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment # Updates - **🔥 Earned a score of nearly 64 on Open LLM Leaderboard, outperforming most merge-free SFT mistral fine-tunes 🔥** # Quants exl2 by @bartowski https://huggingface.co/bartowski/Hercules-2.5-Mistral-7B-exl2 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Hercules-2.5-Mistral-7B) | Metric |Value| |---------------------------------|----:| |Avg. |63.59| |AI2 Reasoning Challenge (25-Shot)|62.03| |HellaSwag (10-Shot) |83.79| |MMLU (5-Shot) |63.49| |TruthfulQA (0-shot) |43.44| |Winogrande (5-shot) |79.72| |GSM8k (5-shot) |49.05|
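# Example Usage (Illustrative Sketch)

As an illustration of the ChatML prompt format described above, here is a minimal sketch using the standard `transformers` generation API. The system and user messages are placeholder examples, and the generation settings are arbitrary rather than values recommended by the model author.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Hercules-2.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a ChatML-style prompt as described above (placeholder messages)
prompt = (
    "<|im_start|>system\nYou are a helpful scientific assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain Le Chatelier's principle in one paragraph.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```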
StephanAkkerman/FinTwitBERT-sentiment
StephanAkkerman
"2024-02-21T11:33:22Z"
2,388
6
transformers
[ "transformers", "safetensors", "bert", "text-classification", "NLP", "BERT", "FinBERT", "FinTwitBERT", "sentiment", "finance", "financial-analysis", "sentiment-analysis", "financial-sentiment-analysis", "twitter", "tweets", "tweet-analysis", "stocks", "stock-market", "crypto", "cryptocurrency", "en", "dataset:TimKoornstra/financial-tweets-sentiment", "dataset:TimKoornstra/synthetic-financial-tweets-sentiment", "base_model:StephanAkkerman/FinTwitBERT", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-12-13T16:34:16Z"
---
license: mit
datasets:
- TimKoornstra/financial-tweets-sentiment
- TimKoornstra/synthetic-financial-tweets-sentiment
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
tags:
- NLP
- BERT
- FinBERT
- FinTwitBERT
- sentiment
- finance
- financial-analysis
- sentiment-analysis
- financial-sentiment-analysis
- twitter
- tweets
- tweet-analysis
- stocks
- stock-market
- crypto
- cryptocurrency
base_model: StephanAkkerman/FinTwitBERT
widget:
- text: Nice 9% pre market move for $para, pump my calls Uncle Buffett 🤑
  example_title: Bullish Crypto Tweet
- text: It is about damn time that my $ARB and $ETH bags pump FFS. 🚀
  example_title: Bullish Crypto Tweet 2
- text: $SPY $SPX closed higher 8th consecutive weeks. Last time it closed 9th straight was 20 years ago.
  example_title: Bullish Stock Tweet
- text: $TCBP Lowest float stock in the market. Float just 325k. Don’t sell for pennies, this one will be a monster. Still early
  example_title: Bullish Stock Tweet 2
- text: Italian companies braced for more political uncertainty
  example_title: Bearish News
#model-index:
#- name: FinTwitBERT-sentiment
#  results:
---

# FinTwitBERT-sentiment

FinTwitBERT-sentiment is a finetuned model for classifying the sentiment of financial tweets. It uses [FinTwitBERT](https://huggingface.co/StephanAkkerman/FinTwitBERT) as a base model, which has been pre-trained on 10 million financial tweets. This ensures that FinTwitBERT-sentiment has seen plenty of financial tweets, which are more informal in nature than other financial texts such as news headlines. As a result, the model performs well on the informal financial texts found on social media.

## Intended Uses

FinTwitBERT-sentiment is intended for classifying financial tweets or other financial social media texts.

## Dataset

FinTwitBERT-sentiment has been trained on two datasets: one is a collection of several financial tweet datasets, and the other is a synthetic dataset created from the first.

- [TimKoornstra/financial-tweets-sentiment](https://huggingface.co/datasets/TimKoornstra/financial-tweets-sentiment): 38,091 human-labeled tweets
- [TimKoornstra/synthetic-financial-tweets-sentiment](https://huggingface.co/datasets/TimKoornstra/synthetic-financial-tweets-sentiment): 1,428,771 synthetic tweets

## More Information

For a comprehensive overview, including the training setup and analysis of the model, visit the [FinTwitBERT GitHub repository](https://github.com/TimKoornstra/FinTwitBERT).

## Usage

Using [HuggingFace's transformers library](https://huggingface.co/docs/transformers/index), the model and tokenizer can be converted into a pipeline for text classification.
```python from transformers import pipeline # Create a sentiment analysis pipeline pipe = pipeline( "sentiment-analysis", model="StephanAkkerman/FinTwitBERT-sentiment", ) # Get the predicted sentiment print(pipe("Nice 9% pre market move for $para, pump my calls Uncle Buffett 🤑")) ``` ## Citing & Authors If you use FinTwitBERT or FinTwitBERT-sentiment in your research, please cite us as follows, noting that both authors contributed equally to this work: ``` @misc{FinTwitBERT, author = {Stephan Akkerman, Tim Koornstra}, title = {FinTwitBERT: A Specialized Language Model for Financial Tweets}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/TimKoornstra/FinTwitBERT}} } ``` Additionally, if you utilize the sentiment classifier, please cite: ``` @misc{FinTwitBERT-sentiment, author = {Stephan Akkerman, Tim Koornstra}, title = {FinTwitBERT-sentiment: A Sentiment Classifier for Financial Tweets}, year = {2023}, publisher = {Hugging Face}, howpublished = {\url{https://huggingface.co/StephanAkkerman/FinTwitBERT-sentiment}} } ``` ## License This project is licensed under the MIT License. See the [LICENSE](https://choosealicense.com/licenses/mit/) file for details.
mradermacher/Stheno-1.2-L2-13B-i1-GGUF
mradermacher
"2024-06-06T21:53:58Z"
2,388
1
transformers
[ "transformers", "gguf", "en", "base_model:Sao10K/Stheno-1.2-L2-13B", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-06-04T17:59:41Z"
--- base_model: Sao10K/Stheno-1.2-L2-13B language: - en library_name: transformers license: llama2 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Sao10K/Stheno-1.2-L2-13B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-1.2-L2-13B-i1-GGUF/resolve/main/Stheno-1.2-L2-13B.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
mradermacher/QiangGuoAI-V1.1-1.8B-GGUF
mradermacher
"2024-06-06T15:48:47Z"
2,388
1
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:anonymous-guest/QiangGuoAI-V1.1-1.8B", "endpoints_compatible", "region:us" ]
null
"2024-06-06T15:29:47Z"
--- base_model: anonymous-guest/QiangGuoAI-V1.1-1.8B language: - en library_name: transformers quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/anonymous-guest/QiangGuoAI-V1.1-1.8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.IQ3_XS.gguf) | IQ3_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.IQ3_S.gguf) | IQ3_S | 1.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q3_K_S.gguf) | Q3_K_S | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.IQ3_M.gguf) | IQ3_M | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q3_K_L.gguf) | Q3_K_L | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.IQ4_XS.gguf) | IQ4_XS | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q4_K_S.gguf) | Q4_K_S | 1.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q4_K_M.gguf) | Q4_K_M | 1.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q5_K_S.gguf) | Q5_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q5_K_M.gguf) | Q5_K_M | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q6_K.gguf) | Q6_K | 1.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.Q8_0.gguf) | Q8_0 | 2.1 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/QiangGuoAI-V1.1-1.8B-GGUF/resolve/main/QiangGuoAI-V1.1-1.8B.f16.gguf) | f16 | 3.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
LeBenchmark/wav2vec2-FR-7K-large
LeBenchmark
"2023-09-14T09:58:38Z"
2,387
11
transformers
[ "transformers", "pytorch", "safetensors", "wav2vec2", "feature-extraction", "fr", "arxiv:2309.05472", "license:apache-2.0", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:04Z"
---
language: "fr"
thumbnail:
tags:
- wav2vec2
license: "apache-2.0"
---

# LeBenchmark: wav2vec2 large model trained on 7K hours of French speech

LeBenchmark provides an ensemble of pretrained wav2vec2 models on different French datasets containing spontaneous, read, and broadcasted speech. It comes in two versions; the later one (LeBenchmark 2.0) extends the first in terms of both the number of pre-trained SSL models and the number of downstream tasks.

For more information on the different benchmarks that can be used to evaluate the wav2vec2 models, please refer to our paper at: [LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech](https://arxiv.org/abs/2309.05472)

## Model and data descriptions

We release several models that can be found under our HuggingFace organization. Four different wav2vec2 architectures *Light*, *Base*, *Large* and *xLarge* are coupled with our small (1K), medium (3K), large (7K), and extra large (14K) corpora. In short:

## *LeBenchmark 2.0:*
- [wav2vec2-FR-14K-xlarge](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-xlarge): xLarge wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-large): Large wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).
- [wav2vec2-FR-14K-light](https://huggingface.co/LeBenchmark/wav2vec2-FR-14K-light): Light wav2vec2 trained on 14K hours of French speech (5.4K Males / 2.4K Females / 6.8K unknown).

## *LeBenchmark:*
- [wav2vec2-FR-7K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large): Large wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-7K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-base): Base wav2vec2 trained on 7.6K hours of French speech (1.8K Males / 1.0K Females / 4.8K unknown).
- [wav2vec2-FR-3K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-large): Large wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-3K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-3K-base): Base wav2vec2 trained on 2.9K hours of French speech (1.8K Males / 1.0K Females / 0.1K unknown).
- [wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base): Base wav2vec2 trained on 2.6K hours of French speech (**no spontaneous speech**).
- [wav2vec2-FR-1K-large](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-large): Large wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).
- [wav2vec2-FR-1K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-1K-base): Base wav2vec2 trained on 1K hours of French speech (0.5K Males / 0.5K Females).

## Intended uses & limitations

Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.

## Fine-tune with Fairseq for ASR with CTC

As our wav2vec2 models were trained with Fairseq, they can then be used in the different tools that Fairseq provides to fine-tune the model for ASR with CTC. The full procedure has been nicely summarized in [this blogpost](https://huggingface.co/blog/fine-tune-wav2vec2-english).

Please note that due to the nature of CTC, speech-to-text results aren't expected to be state-of-the-art.
Moreover, additional features might appear in the future depending on the involvement of Fairseq and HuggingFace in this integration.

## Integrate with SpeechBrain for ASR, Speaker, Source Separation ...

Pretrained wav2vec models have recently gained in popularity. At the same time, the [SpeechBrain toolkit](https://speechbrain.github.io) came out, proposing a new and simpler way of dealing with state-of-the-art speech & deep-learning technologies.

While it is currently in beta, SpeechBrain offers two different ways of nicely integrating wav2vec2 models that were trained with Fairseq, i.e. our LeBenchmark models!

1. Extract wav2vec2 features on-the-fly (with a frozen wav2vec2 encoder) to be combined with any speech-related architecture. Examples are: E2E ASR with CTC+Att+Language Models; Speaker Recognition or Verification; Source Separation ... (a minimal HuggingFace Transformers feature-extraction sketch is also included after the citation below).
2. *Experimental:* To fully benefit from wav2vec2, the best solution remains to fine-tune the model while you train your downstream task. SpeechBrain makes this very simple: only a flag needs to be turned on. Thus, our wav2vec2 models can be fine-tuned while training your favorite ASR pipeline or Speaker Recognizer.

**If interested, simply follow this [tutorial](https://colab.research.google.com/drive/17Hu1pxqhfMisjkSgmM2CnZxfqDyn2hSY?usp=sharing)**

## Referencing LeBenchmark

```
@misc{parcollet2023lebenchmark,
      title={LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech},
      author={Titouan Parcollet and Ha Nguyen and Solene Evain and Marcely Zanon Boito and Adrien Pupier and Salima Mdhaffar and Hang Le and Sina Alisamir and Natalia Tomashenko and Marco Dinarelli and Shucong Zhang and Alexandre Allauzen and Maximin Coavoux and Yannick Esteve and Mickael Rouvier and Jerome Goulian and Benjamin Lecouteux and Francois Portet and Solange Rossato and Fabien Ringeval and Didier Schwab and Laurent Besacier},
      year={2023},
      eprint={2309.05472},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
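## Quick feature extraction with HuggingFace Transformers (illustrative sketch)

Beyond the Fairseq and SpeechBrain routes above, the checkpoint can also be loaded directly with the `transformers` library for quick feature extraction, as suggested by this repository's feature-extraction tag. The snippet below is only a minimal sketch: the random tensor stands in for one second of real 16 kHz mono French audio, and for real recordings you may want to normalize the waveform first.

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec2-FR-7K-large")
model.eval()

# Placeholder for one second of 16 kHz mono audio; replace with a real French recording
input_values = torch.randn(1, 16000)

with torch.no_grad():
    hidden_states = model(input_values).last_hidden_state  # (batch, frames, hidden_size)

print(hidden_states.shape)
```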
KoboldAI/OPT-6.7B-Nerybus-Mix
KoboldAI
"2023-02-13T14:56:10Z"
2,387
20
transformers
[ "transformers", "pytorch", "opt", "text-generation", "en", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-02-13T14:21:14Z"
---
license: other
language:
- en
inference: false
---

# OPT-6.7B-Nerybus-Mix

This is an experimental model containing a ***parameter-wise 50/50 blend (weighted average)*** of the weights of *NerysV2-6.7B* and *ErebusV1-6.7B*. A rough sketch of how such a blend can be computed is included at the end of this card.

Preliminary testing produces fairly coherent outputs; however, it seems less impressive than the 2.7B variant of Nerybus, as both 6.7B source models appear more similar to each other than their 2.7B counterparts.

# License

The two models used for this blend, *NerysV2-6.7B* and *ErebusV1-6.7B*, are made by **Mr. Seeker**.
- https://huggingface.co/KoboldAI/OPT-6.7B-Erebus
- https://huggingface.co/KoboldAI/OPT-6B-nerys-v2

The base OPT-6.7B model is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

# Evaluation Results

No formal evaluation is available for this model at this time.

This blend was created in FP16 due to available memory constraints.

It is recommended to use this model with the KoboldAI software.

All feedback and comments can be directed to Concedo on the KoboldAI Discord.
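# Blending Procedure (Illustrative Sketch)

The exact script used to produce the blend is not published in this card, so the following is only a rough, illustrative sketch of how a parameter-wise 50/50 weighted average of two checkpoints sharing the same architecture can be computed with `transformers` and `torch`. It assumes enough memory to hold both FP16 models at once, and the output directory name is arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM

# Load both source models in FP16 (they share the same OPT-6.7B architecture)
nerys = AutoModelForCausalLM.from_pretrained("KoboldAI/OPT-6B-nerys-v2", torch_dtype=torch.float16)
erebus = AutoModelForCausalLM.from_pretrained("KoboldAI/OPT-6.7B-Erebus", torch_dtype=torch.float16)

erebus_state = erebus.state_dict()

# Parameter-wise 50/50 weighted average of the two checkpoints
blended_state = {
    name: param * 0.5 + erebus_state[name] * 0.5
    for name, param in nerys.state_dict().items()
}

nerys.load_state_dict(blended_state)
nerys.save_pretrained("OPT-6.7B-Nerybus-Mix-local")
```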
dltjdgh0928/lsh_finetune_v0.11
dltjdgh0928
"2023-10-31T09:37:21Z"
2,387
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-31T09:29:43Z"
---
license: apache-2.0
---
mistral_finetune_test
facebook/timesformer-hr-finetuned-ssv2
facebook
"2022-12-12T12:52:33Z"
2,385
2
transformers
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2022-10-07T22:41:47Z"
---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---

# TimeSformer (high-resolution variant, fine-tuned on Something Something v2)

TimeSformer model pre-trained on [Something Something v2](https://developer.qualcomm.com/software/ai-datasets/something-something). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).

Disclaimer: The team releasing TimeSformer did not write a model card for this model, so this model card has been written by [fcakyon](https://github.com/fcakyon).

## Intended uses & limitations

You can use the raw model for video classification into one of the 174 possible Something Something v2 labels.

### How to use

Here is how to use this model to classify a video:

```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch

# 16 random 448x448 RGB frames as a stand-in for a real video clip
video = list(np.random.randn(16, 3, 448, 448))

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-hr-finetuned-ssv2")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-hr-finetuned-ssv2")

inputs = processor(images=video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#).

### BibTeX entry and citation info

```bibtex
@inproceedings{bertasius2021space,
  title={Is Space-Time Attention All You Need for Video Understanding?},
  author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
  booktitle={International Conference on Machine Learning},
  pages={813--824},
  year={2021},
  organization={PMLR}
}
```
Seznam/simcse-dist-mpnet-paracrawl-cs-en
Seznam
"2023-11-02T21:09:38Z"
2,385
3
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "cs", "en", "arxiv:2104.08821", "license:cc-by-4.0", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-11-02T09:33:04Z"
---
license: cc-by-4.0
language:
- cs
- en
pipeline_tag: sentence-similarity
---

## SimCSE

SimCSE-DistMPNet-Paracrawl is the [Seznam/dist-mpnet-paracrawl-cs-en](https://huggingface.co/Seznam/dist-mpnet-paracrawl-cs-en) model fine-tuned with the [SimCSE](https://arxiv.org/abs/2104.08821) objective. This model was created at Seznam.cz as part of a project to create high-quality small Czech semantic embedding models. These models perform well across various natural language processing tasks, including similarity search, retrieval, clustering, and classification. For further details or evaluation results, please visit the associated [paper]() or [GitHub repository](https://github.com/seznam/czech-semantic-embedding-models).

## How to Use

You can load and use the model like this:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "Seznam/simcse-dist-mpnet-paracrawl-cs-en"  # this model's Hugging Face ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

input_texts = [
    "Dnes je výborné počasí na procházku po parku.",
    "Večer si oblíbím dobrý film a uvařím si čaj."
]

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = outputs.last_hidden_state[:, 0]  # Extract CLS token embeddings
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
```
mradermacher/JOSIEv4o-8b-GGUF
mradermacher
"2024-06-18T01:15:37Z"
2,385
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:Isaak-Carter/JOSIEv4o-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T00:47:24Z"
--- base_model: Isaak-Carter/JOSIEv4o-8b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Isaak-Carter/JOSIEv4o-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/JOSIEv4o-8b-GGUF/resolve/main/JOSIEv4o-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
QuantFactory/Qwen2-1.5b-Instruct-Replete-Adapted-GGUF
QuantFactory
"2024-06-25T16:34:16Z"
2,384
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T16:11:59Z"
Entry not found
google/bert_uncased_L-2_H-512_A-8
google
"2021-05-19T17:29:08Z"
2,383
0
transformers
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
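As a quick illustration of loading the miniature this card belongs to (2 layers, hidden size 512), here is a minimal sketch with the `transformers` library; the example sentence is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "google/bert_uncased_L-2_H-512_A-8"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("BERT miniatures are handy for distillation experiments.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] representation with shape (batch_size, 512) for this L-2/H-512 model
print(outputs.last_hidden_state[:, 0].shape)
```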
mradermacher/ossamai-v0.2-merged16-GGUF
mradermacher
"2024-06-02T04:24:23Z"
2,383
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:blepdoge/ossamai-v0.2-merged16", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-02T03:56:24Z"
--- base_model: blepdoge/ossamai-v0.2-merged16 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/blepdoge/ossamai-v0.2-merged16 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ossamai-v0.2-merged16-GGUF/resolve/main/ossamai-v0.2-merged16.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
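As a concrete starting point, any single file from the table above can also be fetched programmatically. A minimal sketch using the `huggingface_hub` Python API (the Q4_K_M file name is taken from the table; the target directory is an arbitrary choice):

```python
# Download one GGUF quant from this repo and print its local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/ossamai-v0.2-merged16-GGUF",
    filename="ossamai-v0.2-merged16.Q4_K_M.gguf",
    local_dir=".",  # arbitrary destination directory
)
print(path)  # hand this path to llama.cpp or any other GGUF-capable runtime
```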
mradermacher/L3-Inca-8B-v0.5-GGUF
mradermacher
"2024-06-16T06:15:13Z"
2,383
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Ppoyaa/L3-Inca-8B-v0.5", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-16T05:36:06Z"
--- base_model: Ppoyaa/L3-Inca-8B-v0.5 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Ppoyaa/L3-Inca-8B-v0.5 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.5-GGUF/resolve/main/L3-Inca-8B-v0.5.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
speechbrain/vad-crdnn-libriparty
speechbrain
"2024-02-25T23:23:58Z"
2,382
24
speechbrain
[ "speechbrain", "VAD", "SAD", "Voice Activity Detection", "Speech Activity Detection", "Speaker Diarization", "pytorch", "CRDNN", "LibriSpeech", "LibryParty", "en", "dataset:Urbansound8k", "arxiv:2106.04624", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: "en" thumbnail: tags: - speechbrain - VAD - SAD - Voice Activity Detection - Speech Activity Detection - Speaker Diarization - pytorch - CRDNN - LibriSpeech - LibryParty datasets: - Urbansound8k metrics: - Accuracy --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Voice Activity Detection with a (small) CRDNN model trained on Libriparty This repository provides all the necessary tools to perform voice activity detection with SpeechBrain using a model pretrained on Libriparty. The pre-trained system can process short and long speech recordings and outputs the segments where speech activity is detected. The output of the system looks like this: ``` segment_001 0.00 2.57 NON_SPEECH segment_002 2.57 8.20 SPEECH segment_003 8.20 9.10 NON_SPEECH segment_004 9.10 10.93 SPEECH segment_005 10.93 12.00 NON_SPEECH segment_006 12.00 14.40 SPEECH segment_007 14.40 15.00 NON_SPEECH segment_008 15.00 17.70 SPEECH ``` The system expects input recordings sampled at 16kHz (single channel). If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface. For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). # Results The model performance on the LibriParty test set is: | Release | hyperparams file | Test Precision | Test Recall | Test F-Score | Model link | GPUs | |:-------------:|:---------------------------:| -----:| -----:| --------:| :-----------:| :-----------:| | 2021-09-09 | train.yaml | 0.9518 | 0.9437 | 0.9477 | [Model](https://drive.google.com/drive/folders/1YLYGuiyuTH0D7fXOOp6cMddfQoM74o-Y?usp=sharing) | 1xV100 16GB ## Pipeline description This system is composed of a CRDNN that outputs posteriors probabilities with a value close to one for speech frames and close to zero for non-speech segments. A threshold is applied on top of the posteriors to detect candidate speech boundaries. Depending on the active options, these boundaries can be post-processed (e.g, merging close segments, removing short segments, etc) to further improve the performance. See more details below. ## Install SpeechBrain ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform Voice Activity Detection ``` from speechbrain.inference.VAD import VAD VAD = VAD.from_hparams(source="speechbrain/vad-crdnn-libriparty", savedir="pretrained_models/vad-crdnn-libriparty") boundaries = VAD.get_speech_segments("speechbrain/vad-crdnn-libriparty/example_vad.wav") # Print the output VAD.save_boundaries(boundaries) ``` The output is a tensor that contains the beginning/end second of each detected speech segment. You can save the boundaries on a file with: ``` VAD.save_boundaries(boundaries, save_path='VAD_file.txt') ``` Sometimes it is useful to jointly visualize the VAD output with the input signal itself. This is helpful to quickly figure out if the VAD is doing or not a good job. To do it: ``` import torchaudio upsampled_boundaries = VAD.upsample_boundaries(boundaries, 'example_vad.wav') torchaudio.save('vad_final.wav', upsampled_boundaries.cpu(), 16000) ``` This creates a "VAD signal" with the same dimensionality as the original signal. 
You can now open *vad_final.wav* and *pretrained_model_checkpoints/example_vad.wav* with software like audacity to visualize them jointly.

### VAD pipeline details
The pipeline for detecting the speech segments is the following:
1. Compute posterior probabilities at the frame level.
2. Apply a threshold on the posterior probability.
3. Derive candidate speech segments on top of that.
4. Apply energy VAD within each candidate segment (optional). This might break down long sentences into short ones based on the energy content.
5. Merge segments that are too close.
6. Remove segments that are too short.
7. Double-check speech segments (optional). This is a final check to make sure the detected segments are actually speech ones.

We designed the VAD such that you can have access to all of these steps (this might help to debug):

```python
from speechbrain.inference.VAD import VAD
VAD = VAD.from_hparams(source="speechbrain/vad-crdnn-libriparty", savedir="pretrained_models/vad-crdnn-libriparty")

# 1- Let's compute frame-level posteriors first
audio_file = "example.wav"
prob_chunks = VAD.get_speech_prob_file(audio_file)

# 2- Let's apply a threshold on top of the posteriors
prob_th = VAD.apply_threshold(prob_chunks).float()

# 3- Let's now derive the candidate speech segments
boundaries = VAD.get_boundaries(prob_th)

# 4- Apply energy VAD within each candidate speech segment (optional)
boundaries = VAD.energy_VAD(audio_file, boundaries)

# 5- Merge segments that are too close
boundaries = VAD.merge_close_segments(boundaries, close_th=0.250)

# 6- Remove segments that are too short
boundaries = VAD.remove_short_segments(boundaries, len_th=0.250)

# 7- Double-check speech segments (optional).
boundaries = VAD.double_check_speech_segments(boundaries, audio_file, speech_th=0.5)
```

### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

### Training
The model was trained with SpeechBrain (ea17d22). To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
Training heavily relies on data augmentation. Make sure you have downloaded all the datasets needed:
- LibriParty: https://drive.google.com/file/d/1--cAS5ePojMwNY5fewioXAv9YlYAWzIJ/view?usp=sharing
- Musan: https://www.openslr.org/resources/17/musan.tar.gz
- CommonLanguage: https://zenodo.org/record/5036977/files/CommonLanguage.tar.gz?download=1
```
cd recipes/LibriParty/VAD
python train.py hparams/train.yaml --data_folder=/path/to/LibriParty --musan_folder=/path/to/musan/ --commonlanguage_folder=/path/to/common_voice_kpd
```

### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

# **Citing SpeechBrain**
Please cite SpeechBrain if you use it for your research or business.
```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ```
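A practical note on input format: as stated above, the interface expects 16 kHz, single-channel recordings. A minimal preprocessing sketch, assuming torchaudio is available (the file names are placeholders):

```python
# Convert an arbitrary recording to 16 kHz mono before passing it to the VAD interface.
import torchaudio

waveform, sample_rate = torchaudio.load("my_recording.wav")  # placeholder path
if waveform.shape[0] > 1:  # downmix multi-channel audio to mono
    waveform = waveform.mean(dim=0, keepdim=True)
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16000)
torchaudio.save("my_recording_16k.wav", waveform, 16000)  # now usable with VAD.get_speech_segments
```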
nvidia/stt_en_fastconformer_transducer_large
nvidia
"2023-06-08T02:54:17Z"
2,382
4
nemo
[ "nemo", "automatic-speech-recognition", "speech", "audio", "Transducer", "FastConformer", "Transformer", "pytorch", "NeMo", "hf-asr-leaderboard", "en", "arxiv:2305.05084", "license:cc-by-4.0", "model-index", "region:us" ]
automatic-speech-recognition
"2023-06-08T00:12:03Z"
--- language: - en library_name: nemo datasets: - librispeech_asr - fisher_corpus - Switchboard-1 - WSJ-0 - WSJ-1 - National-Singapore-Corpus-Part-1 - National-Singapore-Corpus-Part-6 - vctk - VoxPopuli-(EN) - Europarl-ASR-(EN) - Multilingual-LibriSpeech-(2000-hours) - mozilla-foundation/common_voice_8_0 - MLCommons/peoples_speech thumbnail: null tags: - automatic-speech-recognition - speech - audio - Transducer - FastConformer - Transformer - pytorch - NeMo - hf-asr-leaderboard license: cc-by-4.0 widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac model-index: - name: stt_en_fastconformer_transducer_large results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.8 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.8 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Multilingual LibriSpeech type: facebook/multilingual_librispeech config: english split: test args: language: en metrics: - name: Test WER type: wer value: 5.8 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Mozilla Common Voice 7.0 type: mozilla-foundation/common_voice_7_0 config: en split: test args: language: en metrics: - name: Test WER type: wer value: 7.5 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Wall Street Journal 92 type: wsj_0 args: language: en metrics: - name: Test WER type: wer value: 1.4 - task: type: Automatic Speech Recognition name: automatic-speech-recognition dataset: name: Wall Street Journal 93 type: wsj_1 args: language: en metrics: - name: Test WER type: wer value: 2.4 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: National Singapore Corpus type: nsc_part_1 split: test args: language: en metrics: - name: Test WER type: wer value: 5.5 --- # NVIDIA FastConformer-Transducer Large (en) <style> img { display: inline; } </style> | [![Model architecture](https://img.shields.io/badge/Model_Arch-FastConformer--Transducer-lightgrey#model-badge)](#model-architecture) | [![Model size](https://img.shields.io/badge/Params-114M-lightgrey#model-badge)](#model-architecture) | [![Language](https://img.shields.io/badge/Language-en-lightgrey#model-badge)](#datasets) This model transcribes speech in lower case English alphabet. It is a "large" version of FastConformer Transducer (around 114M parameters) model. See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details. ## NVIDIA NeMo: Training To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed latest Pytorch version. 
```
pip install nemo_toolkit['all']
```

## How to Use this Model

The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

### Automatically instantiate the model

```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained(model_name="nvidia/stt_en_fastconformer_transducer_large")
```

### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```
asr_model.transcribe(['2086-149220-0033.wav'])
```

### Transcribing many audio files

```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="nvidia/stt_en_fastconformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```

### Input

This model accepts 16000 Hz Mono-channel Audio (wav files) as input.

### Output

This model provides transcribed speech as a string for a given audio sample.

## Model Architecture

FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with a Transducer decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer).

## Training

The NeMo toolkit [3] was used for training the models for over several hundred epochs. These models are trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_transducer/speech_to_text_rnnt_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/fast-conformer_transducer_bpe.yaml).

The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).

### Datasets

The model in this collection is trained on a composite dataset (NeMo ASRSet En) comprising several thousand hours of English speech:

- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hrs subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hrs subset

## Performance

The performance of Automatic Speech Recognition models is measured using Word Error Rate. Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general.

The following table summarizes the performance of the available models in this collection with the Transducer decoder. Performances of the ASR models are reported in terms of Word Error Rate (WER%) with greedy decoding.
|**Version**|**Tokenizer**|**Vocabulary Size**|**LS test-other**|**LS test-clean**|**WSJ Eval92**|**WSJ Dev93**|**NSC Part 1**|**MLS Test**|**MCV Test 7.0**| Train Dataset | |---------|-----------------------|-----------------|---------------|---------------|------------|-----------|-----|-------|------|------| | 1.18.0 | SentencePiece Unigram | 1024 | 3.8 | 1.8 | 1.4 | 2.4 | 5.5 | 5.8 | 7.5 | NeMo ASRSET 3.0 | ## Limitations Since this model was trained on publically available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech. ## NVIDIA Riva: Deployment [NVIDIA Riva](https://developer.nvidia.com/riva), is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded. Additionally, Riva provides: * World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours * Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization * Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva). Check out [Riva live demo](https://developer.nvidia.com/riva#demos). ## References [1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084) [2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece) [3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo) ## Licence License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
omeryentur/phi-3-sql
omeryentur
"2024-06-23T11:40:19Z"
2,382
0
transformers
[ "transformers", "pytorch", "gguf", "mistral", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-05-29T10:39:32Z"
```
Prompt = """<|system|>
{table_info}
<|user|>
{question}
<|sql|>"""
```
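The card provides only this prompt template. A minimal usage sketch, assuming the repository's PyTorch weights load as a standard `transformers` causal LM (the schema and question below are illustrative placeholders, not from the card; the GGUF file in the repo would instead be loaded with a llama.cpp-based runtime):

```python
# Hedged sketch: fill the template above and generate a SQL completion.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "omeryentur/phi-3-sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt_template = """<|system|>
{table_info}
<|user|>
{question}
<|sql|>"""

# Illustrative schema and question (assumptions, not part of the model card).
table_info = "CREATE TABLE users (id INT, name TEXT, age INT);"
question = "List the names of users older than 30."

inputs = tokenizer(prompt_template.format(table_info=table_info, question=question),
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens (the SQL completion).
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```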
madatnlp/mist-enko-lora-2950
madatnlp
"2023-12-17T01:03:02Z"
2,381
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "lora", "en", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-16T23:59:12Z"
---
license: apache-2.0
language:
- en
- ko
tags:
- lora
---

Base model: mistral-7b
Rank: 128
Alpha: 16

## Model Details

This model is further pretrained from Mistral-7B-v0.1 using LoRA. The training data consists of various translation datasets from AI Hub.
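Since the card lists only the base model and the LoRA hyperparameters, here is a hedged loading sketch. It assumes the repository ships PEFT-compatible LoRA adapter weights on top of `mistralai/Mistral-7B-v0.1`; if the repository instead contains fully merged weights, it can be loaded directly with `AutoModelForCausalLM.from_pretrained("madatnlp/mist-enko-lora-2950")`.

```python
# Hedged sketch: apply this repo as a LoRA adapter (rank 128, alpha 16 per the card)
# on top of an assumed Mistral-7B-v0.1 base checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"       # assumed base model
adapter_id = "madatnlp/mist-enko-lora-2950"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative English-to-Korean prompt; the expected prompt format is not documented in the card.
prompt = "Translate to Korean: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```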
TheBloke/Open_Gpt4_8x7B_v0.2-GGUF
TheBloke
"2024-01-12T12:06:55Z"
2,381
9
transformers
[ "transformers", "gguf", "mixtral", "merge", "moe", "base_model:rombodawg/Open_Gpt4_8x7B_v0.2", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2024-01-12T11:40:11Z"
--- base_model: rombodawg/Open_Gpt4_8x7B_v0.2 inference: false license: apache-2.0 model_creator: rombo dawg model_name: Open Gpt4 8X7B V0.2 model_type: mixtral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - merge - moe --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Open Gpt4 8X7B V0.2 - GGUF - Model creator: [rombo dawg](https://huggingface.co/rombodawg) - Original model: [Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- description start --> ## Description This repo contains GGUF format model files for [rombo dawg's Open Gpt4 8X7B V0.2](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF) * [rombo dawg's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [open_gpt4_8x7b_v0.2.Q2_K.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q2_K.gguf) | Q2_K | 2 | 17.17 GB| 19.67 GB | smallest, significant quality loss - not recommended for most purposes | | [open_gpt4_8x7b_v0.2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 22.48 GB| 24.98 GB | very small, high quality loss | | [open_gpt4_8x7b_v0.2.Q4_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [open_gpt4_8x7b_v0.2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 28.38 GB| 30.88 GB | medium, balanced quality - recommended | | [open_gpt4_8x7b_v0.2.Q5_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [open_gpt4_8x7b_v0.2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 33.23 GB| 35.73 GB | large, very low quality loss - recommended | | [open_gpt4_8x7b_v0.2.Q6_K.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss | | [open_gpt4_8x7b_v0.2.Q8_0.gguf](https://huggingface.co/TheBloke/Open_Gpt4_8x7B_v0.2-GGUF/blob/main/open_gpt4_8x7b_v0.2.Q8_0.gguf) | Q8_0 | 8 | 49.62 GB| 52.12 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Open_Gpt4_8x7B_v0.2-GGUF and below it, a specific filename to download, such as: open_gpt4_8x7b_v0.2.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF open_gpt4_8x7b_v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF --local-dir . 
--local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Open_Gpt4_8x7B_v0.2-GGUF open_gpt4_8x7b_v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m open_gpt4_8x7b_v0.2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# In Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./open_gpt4_8x7b_v0.2.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,      # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35   # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
  max_tokens=512,   # Generate up to 512 tokens
  stop=["</s>"],    # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True         # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./open_gpt4_8x7b_v0.2.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: rombo dawg's Open Gpt4 8X7B V0.2 Open_Gpt4_v0.2 This is the un-quantized fp16 version for training and merging. If you want the quantized version for inference please refer to the repo bellow: - https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/T7QKB0fKNHQvNqAjm8zrH.jpeg) This model is a TIES merger of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2 with MixtralOrochi8x7B being the Base model. I was very impressed with MixtralOrochi8x7B performance and multifaceted usecases as it is already a merger of many usefull Mixtral models such as Mixtral instruct, Noromaid-v0.1-mixtral, openbuddy-mixtral and possibly other models that were not named. My goal was to expand the models capabilities and make it even more useful of a model, maybe even competitive with closed source models like Gpt-4. But for that more testing is required. I hope the community can help me determine if its deserving of its name. 😊 This is the second iteration of this model, using better models in the merger to improve performance (hopefully). Base model: - https://huggingface.co/smelborp/MixtralOrochi8x7B Merged models: - https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1 - https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2 Instruct template: Alpaca Merger config: ``` models: - model: Mixtral-8x7B-Instruct-v0.1 parameters: density: .5 weight: 1 - model: bagel-dpo-8x7b-v0.2 parameters: density: .5 weight: .7 merge_method: ties base_model: MixtralOrochi8x7B parameters: normalize: true int8_mask: true dtype: float16 ``` <!-- original-model-card end -->
nm-testing/tinyllama-oneshot-w8a8-static-v2
nm-testing
"2024-06-17T19:55:22Z"
2,381
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T18:09:15Z"
Entry not found
mradermacher/AvvoChat_AITA_v02-GGUF
mradermacher
"2024-06-14T21:32:04Z"
2,381
0
transformers
[ "transformers", "gguf", "en", "base_model:AndreaAlessandrelli4/AvvoChat_AITA_v02", "endpoints_compatible", "region:us" ]
null
"2024-06-14T20:24:29Z"
--- base_model: AndreaAlessandrelli4/AvvoChat_AITA_v02 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AndreaAlessandrelli4/AvvoChat_AITA_v02 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/AvvoChat_AITA_v02-GGUF/resolve/main/AvvoChat_AITA_v02.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
timm/tf_efficientnet_lite2.in1k
timm
"2023-04-27T21:38:22Z"
2,380
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:13:43Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_lite2.in1k

An EfficientNet-Lite image classification model. Trained on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 6.1
  - GMACs: 0.9
  - Activations (M): 12.9
  - Image size: 260 x 260
- **Papers:**
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tf_efficientnet_lite2.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_lite2.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 16, 130, 130])
    #  torch.Size([1, 24, 65, 65])
    #  torch.Size([1, 48, 33, 33])
    #  torch.Size([1, 120, 17, 17])
    #  torch.Size([1, 352, 9, 9])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_lite2.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 9, 9) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
jebcarter/Psyonic-Rose-20B-GGUF
jebcarter
"2023-12-23T04:50:39Z"
2,380
4
null
[ "gguf", "license:other", "region:us" ]
null
"2023-12-23T02:35:30Z"
--- license: other license_name: microsoft-research-license license_link: LICENSE ---
MaziyarPanahi/YorkShire11-GGUF
MaziyarPanahi
"2024-06-15T23:44:16Z"
2,380
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:vicgalle/CarbonBeagle-11B", "base_model:Sao10K/Fimbulvetr-11B-v2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/YorkShire11" ]
text-generation
"2024-06-15T23:23:05Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - llama - text-generation - mergekit - merge - base_model:vicgalle/CarbonBeagle-11B - base_model:Sao10K/Fimbulvetr-11B-v2 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: YorkShire11-GGUF base_model: mergekit-community/YorkShire11 inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/YorkShire11-GGUF](https://huggingface.co/MaziyarPanahi/YorkShire11-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/YorkShire11](https://huggingface.co/mergekit-community/YorkShire11) ## Description [MaziyarPanahi/YorkShire11-GGUF](https://huggingface.co/MaziyarPanahi/YorkShire11-GGUF) contains GGUF format model files for [mergekit-community/YorkShire11](https://huggingface.co/mergekit-community/YorkShire11). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
Tincando/fiction_story_generator
Tincando
"2023-09-06T18:00:03Z"
2,378
4
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "arxiv:1805.04833", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-06-02T18:09:35Z"
---
tags:
- generated_from_trainer
model-index:
- name: fiction_story_generator
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GPT-Neo for Fiction Story Generation

This model is a fine-tuned version of EleutherAI's GPT-Neo-125M model, optimized for generating fictional stories. It was trained on the dataset available at https://github.com/facebookresearch/fairseq/tree/main/examples/stories.

## Model description
- Model name: GPT-Neo-Fiction
- Student: Tin Kanjovsky/Tincando
- Mentor: izv.prof.dr.sc. Darko Etinger
- Model version: 1.0

## Uses and limitations
The model is designed for generating creative fictional stories. It can be used for a variety of purposes, including but not limited to:
- Storytelling: generating engaging and imaginative fictional stories.
- Content generation: creating content for blogs, websites, or other media with a storytelling element.
- Creative writing: helping authors and writers brainstorm ideas and develop narratives.

## Model performance
- Training data: the model was trained on a diverse dataset of fictional stories and prompts.
- Evaluation metrics: performance metrics such as perplexity or BLEU scores may vary depending on the specific task and dataset.

## Limitations
- Content quality: although the model can generate creative stories, the quality and coherence of the output may vary, and it can occasionally produce nonsensical or inappropriate content.
- Bias: the model may exhibit biases present in the training dataset, so caution is advised when using it for sensitive topics or content.
- Output length: the model can generate text of varying lengths and will not always produce the desired output length.
- Fine-tuning data: the quality of the generated stories depends on the quality and diversity of the fine-tuning dataset.

## Usage
```python
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("Tincando/fiction_story_generator")
model = GPTNeoForCausalLM.from_pretrained("Tincando/fiction_story_generator")

# Generate a fiction story
input_prompt = "[WP] I can't believe I died the same way twice."
input_ids = tokenizer(input_prompt, add_special_tokens=False, return_tensors="pt").input_ids

output = model.generate(input_ids,
    max_length=300,
    temperature=0.9,
    top_k=2,
    top_p=0.9,
    repetition_penalty=1.2,
    do_sample=True,
    num_return_sequences=2
)

generated_story = tokenizer.batch_decode(output, clean_up_tokenization_spaces=True)[0]
print(generated_story)
```

## Ethics
When using this model, consider the following ethical guidelines:
- Content moderation: implement content moderation to ensure that generated stories do not violate guidelines or community standards.
- Bias and fairness: be aware of potential biases in the model's output and take steps to mitigate them.
- Privacy: avoid using personal or sensitive information as input prompts.
- Legal compliance: make sure the generated content complies with copyright and intellectual property laws.
## Citation If you use GPT-Neo-Fiction in your work, please consider citing the original GPT-Neo model and the dataset used for fine-tuning: - [GPT-Neo Paper](https://github.com/EleutherAI/gpt-neo) - [Fairseq Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/stories) - [Hierarchical Neural Story Generation](https://arxiv.org/abs/1805.04833) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 3.0842 | 1.0 | 34075 | 3.1408 | | 3.0026 | 2.0 | 68150 | 3.1275 | | 2.9344 | 3.0 | 102225 | 3.1270 | | 2.8932 | 4.0 | 136300 | 3.1306 | | 2.8517 | 5.0 | 170375 | 3.1357 | ### Framework versions - Transformers 4.28.0 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
ai-forever/RuM2M100-418M
ai-forever
"2023-10-22T08:46:07Z"
2,378
1
transformers
[ "transformers", "pytorch", "m2m_100", "text2text-generation", "spellchecking", "NLP", "M2M100", "natural language generation", "ru", "arxiv:2308.09435", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-07-26T14:45:34Z"
--- license: mit language: - ru tags: - spellchecking - NLP - M2M100 - pytorch - natural language generation --- # RuM2M100-418M model ### Summary The model corrects spelling errors and typos by bringing all the words in the text to the norm of the Russian language. The proofreader was trained based on the [M2M100-418M](https://huggingface.co/facebook/m2m100_418M) model. An extensive dataset with “artificial” errors was taken as a training corpus: the corpus was assembled on the basis of the Russian-language Wikipedia and transcripts of Russian-language videos, then typos and spelling errors were automatically introduced into it using the functionality of the [SAGE library](https://github.com/ai-forever/sage). ### Public references - [SAGE library announcement](https://youtu.be/yFfkV0Qjuu0), DataFest 2023 - [Paper about synthetic error generation methods](https://www.dialog-21.ru/media/5914/martynovnplusetal056.pdf), Dialogue 2023 - [Paper about SAGE and our best solution](https://arxiv.org/abs/2308.09435), Review EACL 2024 ### Examples | Input | Output | | --- | --- | | Думю ешцъа лет череа 10 ретроспективно просматривотьэ то будкетцц мне невероя тна ин те р но | Думаю, еш цъа лет через 10 ретроспективно просматривать, що буде ТЦ. Мне невероятна нтерно. | | Основая цель мероприятия - практическая отработка навыков по оказанию помощи гражданам, попавшим в ДТП, а также повышение и совершенствование уровня профессиональной подготовки сотрудников МЧС при проведении аварийно-спасательных работ по ликвидации последствий дорожно-транспортных проишествий, сокращение временных показателей реагирования. | Основная цель мероприятия - практическая отработка навыков по оказанию помощи гражданам, попавшим в ДТП, а также повышение и совершенствование уровня профессиональной подготовки сотрудников МЧС при проведении аварийно-спасательных работ по ликвидации последствий дорожно-транспортных происшествий, сокращение временных показателей реагирования. | | прийдя в МГТУ я был удивлен никого необноружив там… | прийдя в МГТУ я был удивлен никого не обнаружив там... | ## Metrics ### Quality Below are automatic metrics for determining the correctness of the spell checkers. 
We compare our solution with both open automatic spell checkers and the ChatGPT family of models on all four available datasets: - **RUSpellRU**: texts collected from ([LiveJournal](https://www.livejournal.com/media)), with manually corrected typos and errors; - **MultidomainGold**: examples from 7 text sources, including the open web, news, social media, reviews, subtitles, policy documents and literary works; - **MedSpellChecker**: texts with errors from medical anamnesis; - **GitHubTypoCorpusRu**: spelling errors and typos in commits from [GitHub](https://github.com); **RUSpellRU** | Model | Precision | Recall | F1 | | --- | --- | --- | --- | | M2M100-418M | 57.7 | 61.2 | 59.4 | | ChatGPT gpt-3.5-turbo-0301 | 55.8 | 75.3 | 64.1 | | ChatGPT gpt-4-0314 | 57.0 | 75.9 | 63.9 | | ChatGPT text-davinci-003 | 55.9 | 75.3 | 64.2 | | Yandex.Speller | 83.0 | 59.8 | 69.5 | | JamSpell | 42.1 | 32.8 | 36.9 | | HunSpell | 31.3 | 34.9 | 33.0 | **MultidomainGold** | Model | Precision | Recall | F1 | | --- | --- | --- | --- | | M2M100-418M | 32.8 | 56.3 | 41.5 | | ChatGPT gpt-3.5-turbo-0301 | 33.8 | 72.1 | 46.0 | | ChatGPT gpt-4-0314 | 34.0 | 73.2 | 46.4 | | ChatGPT text-davinci-003 | 33.6 | 72.0 | 45.8 | | Yandex.Speller | 52.9 | 51.4 | 52.2 | | JamSpell | 25.7 | 30.6 | 28.0 | | HunSpell | 16.2 | 40.1 | 23.0 | **MedSpellChecker** | Модель | Precision | Recall | F1 | | --- | --- | --- | --- | | M2M100-418M | 23.2 | 64.5 | 34.1 | | ChatGPT gpt-3.5-turbo-0301 | 53.2 | 67.6 | 59.6 | | ChatGPT gpt-4-0314 | 54.2 | 69.4 | 60.9 | | ChatGPT text-davinci-003 | 47.8 | 68.4 | 56.3 | | Yandex.Speller | 80.6 | 47.8 | 60.0 | | JamSpell | 24.6 | 29.7 | 26.9 | | HunSpell | 10.3 | 40.2 | 16.4 | **GitHubTypoCorpusRu** | Модель | Precision | Recall | F1 | | --- | --- | --- | --- | | M2M100-418M | 27.5 | 42.6 | 33.4 | | ChatGPT gpt-3.5-turbo-0301 | 43.8 | 57.0 | 49.6 | | ChatGPT gpt-4-0314 | 45.2 | 58.2 | 51.0 | | ChatGPT text-davinci-003 | 46.5 | 58.1 | 51.7 | | Yandex.Speller | 67.7 | 37.5 | 48.3 | | JamSpell | 49.5 | 29.9 | 37.3 | | HunSpell | 28.5 | 30.7 | 29.6 | ## How to use ```python from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer path_to_model = "ai-forever/RuM2M100-418M" model = M2M100ForConditionalGeneration.from_pretrained(path_to_model) tokenizer = M2M100Tokenizer.from_pretrained(path_to_model, src_lang="ru", tgt_lang="ru") sentence = "прийдя в МГТУ я был удивлен никого необноружив там…" encodings = tokenizer(sentence, return_tensors="pt") generated_tokens = model.generate( **encodings, forced_bos_token_id=tokenizer.get_lang_id("ru")) answer = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(answer) # ["прийдя в МГТУ я был удивлен никого не обнаружив там..."] ``` ## Resources - [SAGE library](https://github.com/ai-forever/sage), GitHub - [ruM2M100-1.2B](https://huggingface.co/ai-forever/RuM2M100-1.2B), HuggingFace - [ruM2M100-418M](https://huggingface.co/ai-forever/RuM2M100-420M), HuggingFace - [FredT5-large-spell](https://huggingface.co/ai-forever/FRED-T5-large-spell), HuggingFace - [T5-large-spell](https://huggingface.co/ai-forever/T5-large-spell), HuggingFace ## License Model [M2M100-418M](https://huggingface.co/facebook/m2m100_418M), on the basis of which our solution is made, and its source code are supplied under the MIT open license. Our solution also comes with MIT license. ## Specifications - File size: 2 Gb; - Framework: pytorch - Format: AI Service - Version: v1.0 - Developer: SberDevices, AGI NLP ## Contacts [email protected]
sanghwa-na/llama2-13b.kor
sanghwa-na
"2023-10-27T16:31:25Z"
2,378
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "instruct", "instruction", "ko", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-27T12:09:20Z"
--- language: - ko tags: - llama-2 - instruct - instruction pipeline_tag: text-generation license: llama2 --- # llama2-13b.kor ### Model Details - Developed by: Sanghwa Na - Backbone Model: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main) - Library: [transformers](https://github.com/huggingface/transformers) ### Used Datasets - Orca-style dataset - Platypus ### Prompt Template ``` ### Instruction: {Instruction} ### Answer: {Answer} ``` ### License meta-license
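A minimal usage sketch, not part of the original card: it loads the model with `transformers` and fills in the prompt template above. The Korean instruction and the generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sanghwa-na/llama2-13b.kor"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Fill in the prompt template from the card
prompt = "### Instruction:\n대한민국의 수도는 어디인가요?\n\n### Answer:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```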
mradermacher/astrollama3-8b-chat-GGUF
mradermacher
"2024-06-05T07:53:22Z"
2,378
1
transformers
[ "transformers", "gguf", "trl", "sft", "generated_from_trainer", "en", "base_model:TirthankarSlg/astrollama3-8b-chat", "endpoints_compatible", "region:us" ]
null
"2024-06-05T07:25:52Z"
--- base_model: TirthankarSlg/astrollama3-8b-chat language: - en library_name: transformers quantized_by: mradermacher tags: - trl - sft - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/TirthankarSlg/astrollama3-8b-chat <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/astrollama3-8b-chat-GGUF/resolve/main/astrollama3-8b-chat.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
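A short usage sketch that is not part of the original card: it loads one of the quantized files above with `llama-cpp-python`. The chosen file name, context size and prompt are assumptions.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above was downloaded next to this script
llm = Llama(model_path="astrollama3-8b-chat.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In two sentences, what is a pulsar?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```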
mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF
mradermacher
"2024-06-05T08:13:01Z"
2,377
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:crestf411/L3-8B-sunfall-abliterated-v0.2", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-05T05:56:09Z"
--- base_model: crestf411/L3-8B-sunfall-abliterated-v0.2 language: - en library_name: transformers license: llama3 license_link: LICENSE license_name: llama3 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/crestf411/L3-8B-sunfall-abliterated-v0.2 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.2-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
valhalla/gpt-neo-random-tiny
valhalla
"2021-04-07T16:38:40Z"
2,376
0
transformers
[ "transformers", "pytorch", "gpt_neo", "feature-extraction", "endpoints_compatible", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
**This model is uploaded for testing purposes. It is a random model, not trained on anything.**
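As a hedged illustration (not from the original card), a randomly initialized checkpoint like this is typically used as a fast stand-in in unit tests, where only tensor shapes and plumbing matter; this sketch assumes the repo loads with `AutoModel`.

```python
import torch
from transformers import AutoModel

# Random weights: outputs are meaningless, which is fine for quickly testing pipelines
model = AutoModel.from_pretrained("valhalla/gpt-neo-random-tiny")
model.eval()

dummy_ids = torch.randint(0, model.config.vocab_size, (1, 8))
with torch.no_grad():
    hidden = model(dummy_ids).last_hidden_state
print(hidden.shape)  # (batch, seq_len, hidden_size)
```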
dpv/finetuned-gpt2-tiny
dpv
"2023-06-23T15:12:26Z"
2,376
1
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "dataset:roneneldan/TinyStories", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-23T14:58:18Z"
--- license: mit tags: - generated_from_trainer datasets: - roneneldan/TinyStories model-index: - name: finetuned-gpt2-tiny results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-gpt2-tiny This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the roneneldan/TinyStories dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
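A brief usage sketch, not part of the auto-generated card: since this is GPT-2 fine-tuned on TinyStories, a standard text-generation pipeline applies; the prompt and sampling settings are illustrative.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="dpv/finetuned-gpt2-tiny")
story = generator(
    "Once upon a time, a little fox",
    max_new_tokens=100,
    do_sample=True,
    top_p=0.95,
)
print(story[0]["generated_text"])
```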
kyujinpy/Ko-PlatYi-6B
kyujinpy
"2023-12-09T13:21:33Z"
2,376
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-02T19:09:59Z"
--- language: - ko datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- # **Ko-PlatYi-6B** <img src='./Ko-PlatYi.png' width=256> ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Ko-PlatYi-6B is an auto-regressive language model based on the Yi-34B transformer architecture. **Blog Link** Blog: [Coming soon...] Github: [Coming soon...] **Base Model** [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) **Training Dataset** [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3). # **Model Benchmark** ## Open leaderboard > Follow up as [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | CommonGen-V2 | | --- | --- | --- | --- | --- | --- | --- | | Ko-PlatYi-6B-O | 49.00 | 43.52 | 53.59 | 47.47 | 41.01 | 59.39 | | Ko-PlatYi-6B-kiwi | 48.75 | 41.98 | 53.61 | 46.10 | 38.30 | 63.75 | | Ko-PlatYi-6B-gu | 48.76 | 42.75 | 54.00 | 44.66 | 41.22 | 61.16 | | **Ko-PlatYi-6B** | 49.97 | 43.00 | 53.55 | 46.50 | 40.31 | 66.47 | | Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 --- ## AI-Harness Evaluation > AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness) | Model | BoolQ | Copa | HellaSwag | Sentineg | | --- | --- | --- | --- | --- | | | *Zero-shot* |||| | Ko-PlatYi-6B-O | 0.3343 | 0.7687 | 0.4833 | 0.5794 | | Ko-PlatYi-6B-kiwi | 0.3343 | 0.7665 | 0.4746 | **0.6248** | | Ko-PlatYi-6B-gu | **0.7077** | **0.7696** | 0.4797 | 0.3979 | | **Ko-PlatYi-6B** | 0.3343 | 0.7684 | **0.4917** | 0.5226 | | Yi-Ko-6B | **0.7070** | 0.7696 | **0.5009** | 0.4044 | --- # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/Ko-PlatYi-6B" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ```
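The implementation code above only loads the model. As a hedged sketch (not from the original card), generation could then continue from that snippet as follows; the Korean prompt and decoding settings are illustrative.

```python
# Continues the loading snippet above (reuses OpenOrca and OpenOrca_tokenizer)
prompt = "한국의 전통 음식 세 가지를 소개해줘."
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```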
K024/mt5-zh-ja-en-trimmed
K024
"2022-03-24T14:57:22Z"
2,375
47
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "translation", "zh", "ja", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- language: - zh - ja - en tags: - translation widget: - text: "ja2zh: 吾輩は猫である。名前はまだ無い。" license: cc-by-nc-sa-4.0 --- This model is finetuned from [mt5-base](https://huggingface.co/google/mt5-base). The model vocabulary is trimmed to ~1/3 by selecting top 85000 tokens in the training data. The code to trim the vocabulary can be found [here](https://gist.github.com/K024/4a100a0f4f4b07208958e0f3244da6ad). Usage: ```python from transformers import ( T5Tokenizer, MT5ForConditionalGeneration, Text2TextGenerationPipeline, ) path = "K024/mt5-zh-ja-en-trimmed" pipe = Text2TextGenerationPipeline( model=MT5ForConditionalGeneration.from_pretrained(path), tokenizer=T5Tokenizer.from_pretrained(path), ) sentence = "ja2zh: 吾輩は猫である。名前はまだ無い。" res = pipe(sentence, max_length=100, num_beams=4) res[0]['generated_text'] ``` Training data: ``` wikimedia-en-ja wikimedia-en-zh wikimedia-ja-zh wikititles-ja-en wikititles-zh-en wikimatrix-ja-zh news-commentary-en-ja news-commentary-en-zh news-commentary-ja-zh ted2020-en-ja ted2020-en-zh ted2020-ja-zh ``` License: [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
facebook/timesformer-hr-finetuned-k400
facebook
"2022-12-12T12:52:40Z"
2,375
2
transformers
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2022-10-07T22:11:12Z"
--- license: "cc-by-nc-4.0" tags: - vision - video-classification --- # TimeSformer (high-resolution variant, fine-tuned on Kinetics-400) TimeSformer model pre-trained on [Kinetics-400](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer). Disclaimer: The team releasing TimeSformer did not write a model card for this model, so this model card has been written by [fcakyon](https://github.com/fcakyon). ## Intended uses & limitations You can use the raw model for video classification into one of the 400 possible Kinetics-400 labels. ### How to use Here is how to use this model to classify a video: ```python from transformers import AutoImageProcessor, TimesformerForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 448, 448)) processor = AutoImageProcessor.from_pretrained("facebook/timesformer-hr-finetuned-k400") model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-hr-finetuned-k400") inputs = processor(images=video, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#). ### BibTeX entry and citation info ```bibtex @inproceedings{bertasius2021space, title={Is Space-Time Attention All You Need for Video Understanding?}, author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo}, booktitle={International Conference on Machine Learning}, pages={813--824}, year={2021}, organization={PMLR} } ```
winninghealth/WiNGPT2-Llama-3-8B-Chat
winninghealth
"2024-04-25T05:48:32Z"
2,375
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "medical", "conversational", "en", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-23T06:17:54Z"
--- license: apache-2.0 language: - en - zh tags: - medical --- ## WiNGPT2 [WiNGPT](https://github.com/winninghealth/WiNGPT2) is a GPT-based large language model for the medical vertical domain. It aims to bring together professional medical knowledge, medical information and data, and to provide the healthcare industry with intelligent services such as medical question answering, diagnostic support and medical knowledge, improving the efficiency of diagnosis and treatment and the quality of medical care. ## Changelog [2024/04/23] Released the WiNGPT2-Llama-3-8B-Base and WiNGPT2-Llama-3-8B-Chat models (Chinese-enhanced / multilingual) together with evaluation results. [2024/04/01] Updated the WiNEval evaluation results. [2024/03/05] Open-sourced the 7B/14B-Chat-4bit model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat-AWQ)WiNGPT2-7B-Chat-4bit and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat-AWQ)WiNGPT2-14B-Chat-4bit. [2023/12/20] Added a WeChat user group QR code (valid until December 27); scan it to join the group. [2023/12/18] Published the evaluation results of WiNEval-MCKQuiz, Winning Health's medical model evaluation scheme. [2023/12/12] Open-sourced the WiNGPT2 14B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Base)WiNGPT2-14B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-14B-Chat)WiNGPT2-14B-Chat. [2023/11/02] [34B model platform test](https://wingpt.winning.com.cn/) and [join our WeChat discussion group](https://github.com/winninghealth/WiNGPT2/blob/main/assets/WiNGPT_GROUP.JPG). [2023/10/13] Added a simple [chatbot example](#部署) that supports basic multi-turn conversation. [2023/09/26] Open-sourced WiNGPT2 and the 7B model weights: [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Base)WiNGPT2-7B-Base and [🤗](https://huggingface.co/winninghealth/WiNGPT2-7B-Chat)WiNGPT2-7B-Chat. ## How to use ### Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "WiNGPT-Llama-3-8B-Chat"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path).to(device)
model = model.eval()

text = 'User:WiNGPT, 你好<|end_of_text|>\n Assistant:'
inputs = tokenizer.encode(text, return_tensors="pt").to(device)
outputs = model.generate(inputs, repetition_penalty=1.1, max_new_tokens=1024)
response = tokenizer.decode(outputs[0])
print(response)

## Output: 你好!今天我能为你做些什么?<|end_of_text|>
```
### Prompting WiNGPT-Llama-3-8B-Chat uses a custom prompt format. Roles: System/User/Assistant chat_template:
```jinja2
"{% for message in messages %}{% if message['role'] == 'system' %}System:{% endif %}{% if message['role'] == 'user' %}User:{% endif %}{% if message['role'] == 'assistant' %}Assistant:{% endif %}{{ message['content'] }}<|end_of_text|>\n {% endfor %}Assistant:"
```
**Instruction prompt** example:
```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:
```
**Multi-turn conversation** example:
```
User:WiNGPT, 你好<|end_of_text|>\n Assistant:你好!今天我能为你做些什么?<|end_of_text|>\n User:你是谁?<|end_of_text|>\n Assistant:
```
**Translation** example:
```
System:作为医疗领域的智能助手,WiNGPT将提供中英翻译服务。用户输入的中文或英文内容将由WiNGPT进行准确的翻译,以满足用户的语言需求。<|end_of_text|>\n User:Life is short, you know, and time is so swift; Rivers are wide, so wide, and ships sail far.<|end_of_text|>\n Assistant:
```
## Model card #### Training configuration and parameters | Name | Training strategy | Length | Precision | Learning rate | Weight_decay | Epochs | GPUs | | ----------------------- | ------------------ | ---- | ---- | ------ | ------------ | ------ | ------ | | WiNGPT2-Llama-3-8B-Base | Continued pre-training (20G) | 8192 | bf16 | 5e-5 | 0.05 | 2 | A100*8 | | WiNGPT2-Llama-3-8B-Chat | Fine-tuning/alignment (500k samples) | 8192 | bf16 | 5e-6 | 0.01 | 4 | A100*8 | #### Training data About 20G of pre-training data and about 500k instruction fine-tuning/alignment samples; see the [details](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#%E8%AE%AD%E7%BB%83%E6%95%B0%E6%8D%AE). ## Chinese medical evaluation - WiNEval Updated: 2024-04-23 | | Type | MCKQuiz | MSceQA | | ----------------------------- | ---------------------- | ------- | ------ | | **WiNGPT-Llama-3-8B-Base** | Continued Pre-training | 66.3 | / | | Meta-Llama-3-8B | Pre-training | 37 | / | | | | | | | **WiNGPT-Llama-3-8B-Chat** | Finetuning/Alignment | 65.2 | 79.8 | | Meta-Llama-3-8B-Instruct | Finetuning/Alignment | 49.8 | 76.3 | | Meta-Llama-3-70B-Instruct-AWQ | Finetuning/Alignment | 73.5 | 78.6 | | | | | | 
*MCKQuiz (objective questions): 13,060 multiple-choice questions across 17 subject categories. The model is given the question and the options and outputs an answer, which is judged right or wrong against the reference answer to compute accuracy.* *MSceQA (subjective questions): scenario questions from specialized sub-domains, covering eight major business scenarios with 17 first-level and 32 second-level categories. Human/model raters assess the model's answers for accuracy, relevance, consistency, completeness and authoritativeness, and score the generated answers against reference answers.* [Other WiNEval evaluation results](https://github.com/winninghealth/WiNGPT2?tab=readme-ov-file#2-%E5%8D%AB%E5%AE%81%E5%81%A5%E5%BA%B7%E5%8C%BB%E7%96%97%E6%A8%A1%E5%9E%8B%E6%B5%8B%E8%AF%84%E6%96%B9%E6%A1%88-winevalzero-shot) ### Enterprise services [Apply for an API key on the WiNGPT test platform or get in touch with us](https://wingpt.winning.com.cn/) ## Limitations and disclaimer (a) WiNGPT2 is a large language model for the professional medical domain. It can provide general users with human-like AI doctor consultation and question answering, as well as general medical knowledge Q&A. For medical professionals, the answers WiNGPT2 provides on diagnosis, medication and health advice for a patient's condition are suggestions for reference only. (b) You should understand that WiNGPT2 only provides information and suggestions and cannot replace the opinion, diagnosis or treatment advice of medical professionals. Before relying on information from WiNGPT2, seek the advice of a doctor or other medical professional and evaluate the provided information independently. (c) Information from WiNGPT2 may contain errors or inaccuracies. Winning Health makes no express or implied warranty as to the accuracy, reliability, completeness, quality, safety, timeliness, performance or suitability of WiNGPT2. The results of and decisions based on your use of WiNGPT2 are your own responsibility, and no liability is accepted for damage caused to you by third-party factors. ## License 1. This project is released under the Apache License 2.0; use of the model weights must also comply with the agreements of the base model [Llama-3-8B](https://github.com/meta-llama/llama3) and its [license](https://llama.meta.com/llama3/license); see its website for details. 2. When using this project, including the model weights, please cite this project: https://github.com/winninghealth/WiNGPT2 ## Contact us Website: https://www.winning.com.cn Email: [email protected]
mohsenfayyaz/toxicity-classifier
mohsenfayyaz
"2021-05-19T23:46:31Z"
2,374
4
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
[BERT base model (uncased)](https://huggingface.co/bert-base-uncased) fine-tuned on [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification)
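A minimal usage sketch, not part of the original card: running the classifier through a `transformers` pipeline. The example sentences are illustrative, and the returned label names depend on the model's config.

```python
from transformers import pipeline

toxicity = pipeline("text-classification", model="mohsenfayyaz/toxicity-classifier")
print(toxicity("Have a wonderful day!"))
print(toxicity("You are the worst person I have ever met."))
# Each call returns a list with a dict containing a label and a score
```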
AdamCodd/tinybert-sentiment-amazon
AdamCodd
"2024-01-18T14:16:40Z"
2,374
0
transformers
[ "transformers", "onnx", "safetensors", "bert", "text-classification", "dataset:amazon_polarity", "base_model:prajjwal1/bert-tiny", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-11-07T14:17:54Z"
--- datasets: - amazon_polarity base_model: prajjwal1/bert-tiny model-index: - name: amazon_polarity results: - task: type: text-classification name: Text Classification dataset: name: amazon_polarity type: sentiment args: default metrics: - type: accuracy value: 0.942 name: Accuracy - type: loss value: 0.153 name: Loss - type: f1 value: 0.940 name: F1 --- # tinybert-sentiment-amazon This model is a fine-tuned version of [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [amazon-polarity dataset](https://huggingface.co/datasets/amazon_polarity). It achieves the following results on the evaluation set: * Loss: 0.153 * Accuracy: 0.942 * F1_score: 0.940 ## Model description TinyBERT is 7.5 times smaller and 9.4 times faster on inference compared to its teacher BERT model (while DistilBERT is 40% smaller and 1.6 times faster than BERT). This model was trained on the entire dataset (3.6M samples), in contrast to the [distilbert model](https://huggingface.co/AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon), which was trained on only 10% of the dataset. ## Intended uses & limitations While this model may not be as accurate as the distilbert model, its performance should be sufficient for most use cases. ```python from transformers import pipeline # Create the pipeline sentiment_classifier = pipeline('text-classification', model='AdamCodd/tinybert-sentiment-amazon') # Now you can use the pipeline to classify sentiment result = sentiment_classifier("This product doesn't fit me at all.") print(result) #[{'label': 'negative', 'score': 0.9969743490219116}] ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 1270 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 150 - num_epochs: 1 - weight_decay: 0.01 ### Framework versions - Transformers 4.35.0 - Pytorch lightning 2.1.0 - Tokenizers 0.14.1 If you want to support me, you can [here](https://ko-fi.com/adamcodd).
hfl/chinese-llama-2-7b-64k-gguf
hfl
"2024-01-24T02:53:35Z"
2,374
2
null
[ "gguf", "zh", "en", "license:apache-2.0", "region:us" ]
null
"2023-12-21T05:45:22Z"
--- license: apache-2.0 language: - zh - en --- # Chinese-LLaMA-2-7B-64K This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-LLaMA-2-7B-64K**, which is tuned on Chinese-LLaMA-2-7B with the **YaRN method**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 11.5424 +/- 0.24106 | 12.1599 +/- 0.26050 | | Q3_K | 10.0152 +/- 0.21296 | 9.9269 +/- 0.21335 | | Q4_0 | 9.7500 +/- 0.20872 | - | | Q4_K | 9.7687 +/- 0.21133 | 9.7239 +/- 0.20999 | | Q5_0 | 9.4647 +/- 0.20280 | - | | Q5_K | 9.6229 +/- 0.20829 | 9.5673 +/- 0.20675 | | Q6_K | 9.5996 +/- 0.20816 | 9.5753 +/- 0.20734 | | Q8_0 | 9.4078 +/- 0.20378 | - | | F16 | 9.5750 +/- 0.20735 | - | *The models with the `-im` suffix are generated with an importance matrix (imatrix), which generally gives better performance (though not always).* ## Others For the full model in HuggingFace format, please see: https://huggingface.co/hfl/chinese-llama-2-7b-64k Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
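A short usage sketch, not part of the original card: loading one of the quantized files with `llama-cpp-python` and requesting a longer context window, since this model is tuned for 64K context with YaRN. The file name is a hypothetical example, and a full 64K window needs a lot of memory, so a smaller `n_ctx` is shown.

```python
from llama_cpp import Llama

# Assumes one of the quantized files from this repo was downloaded locally
llm = Llama(
    model_path="chinese-llama-2-7b-64k.Q4_K.gguf",  # hypothetical file name
    n_ctx=16384,  # the model supports up to 64K; raise this if memory allows
)

out = llm("请用一句话介绍中国的长城。", max_tokens=128)
print(out["choices"][0]["text"])
```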
stablediffusionapi/artium-v20
stablediffusionapi
"2024-01-17T20:14:37Z"
2,374
1
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-01-17T20:13:05Z"
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # Artium v2.0 API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/10093968341705520349.png) ## Get API Key Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and change **model_id** to "artium-v20". Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try the model for free: [Generate Images](https://modelslab.com/models/artium-v20) Model link: [View model](https://modelslab.com/models/artium-v20) View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "artium-v20",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
nitrosocke/elden-ring-diffusion
nitrosocke
"2023-05-16T09:21:07Z"
2,373
320
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-10-05T22:55:13Z"
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image --- **Elden Ring Diffusion** This is the fine-tuned Stable Diffusion model trained on the game art from Elden Ring. Use the tokens **_elden ring style_** in your prompts for the effect. You can download the latest version here: [eldenRing-v3-pruned.ckpt](https://huggingface.co/nitrosocke/elden-ring-diffusion/resolve/main/eldenRing-v3-pruned.ckpt) **If you enjoy my work, please consider supporting me** [![Become A Patreon](https://badgen.net/badge/become/a%20patron/F96854)](https://patreon.com/user?u=79196446) ### 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python #!pip install diffusers transformers scipy torch from diffusers import StableDiffusionPipeline import torch model_id = "nitrosocke/elden-ring-diffusion" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a magical princess with golden hair, elden ring style" image = pipe(prompt).images[0] image.save("./magical_princess.png") ``` **Portraits rendered with the model:** ![Portrait Samples](https://huggingface.co/nitrosocke/elden-ring-diffusion/resolve/main/eldenring-portraits-small.jpg) **Landscape Shots rendered with the model:** ![Landscape Samples](https://huggingface.co/nitrosocke/elden-ring-diffusion/resolve/main/eldenring-landscapes-small.jpg) **Sample images used for training:** ![Training Samples](https://huggingface.co/nitrosocke/elden-ring-diffusion/resolve/main/eldenring-samples-small.jpg) This model was trained using the diffusers based dreambooth training and prior-preservation loss in 3.000 steps. #### Prompt and settings for portraits: **elden ring style portrait of a beautiful woman highly detailed 8k elden ring style** _Steps: 35, Sampler: DDIM, CFG scale: 7, Seed: 3289503259, Size: 512x704_ #### Prompt and settings for landscapes: **elden ring style dark blue night (castle) on a cliff dark night (giant birds) elden ring style Negative prompt: bright day** _Steps: 30, Sampler: DDIM, CFG scale: 7, Seed: 350813576, Size: 1024x576_ ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
leo911kim/Exodia-7B
leo911kim
"2023-10-13T08:15:03Z"
2,373
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-13T05:24:34Z"
--- license: mit --- Master of Merging [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/yeongwooki3) The Large Language Model, or LLM, represents a groundbreaking advancement in the realm of artificial intelligence. By fusing together insights and data from various individual models, the LLM is designed to harness the best of each while mitigating their individual weaknesses. This amalgamation allows the LLM to demonstrate unparalleled capability in understanding context, generating accurate content, and adapting to diverse tasks. The integrated approach ensures that users benefit from increased accuracy, wider knowledge coverage, and a more nuanced understanding of both structured and unstructured data. Essentially, the LLM epitomizes the next step in the evolution of AI, bringing about a model that is greater than the sum of its parts.
fluidapp/meta-llama-3-8b-instruct-gguf
fluidapp
"2024-07-02T20:51:28Z"
2,373
0
null
[ "gguf", "license:llama3", "region:us" ]
null
"2024-05-20T22:24:28Z"
--- license: llama3 --- Fork of https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF (Using llama.cpp commit ffe6665 for quantization.)
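A small usage sketch, not part of the original card: downloading one of the GGUF files from this repo with `huggingface_hub` and loading it with `llama-cpp-python`. The exact file name inside the repo is an assumption.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="fluidapp/meta-llama-3-8b-instruct-gguf",
    filename="Meta-Llama-3-8B-Instruct-Q4_K_M.gguf",  # hypothetical file name
)
llm = Llama(model_path=path, n_ctx=8192)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```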
QuantFactory/CabraMistral-v3-7b-32k-GGUF
QuantFactory
"2024-06-18T06:23:54Z"
2,373
0
null
[ "gguf", "text-generation", "pt", "base_model:botbot-ai/CabraMistral-v3-7b-32k", "license:apache-2.0", "model-index", "region:us" ]
text-generation
"2024-06-12T15:18:21Z"
--- language: - pt license: apache-2.0 base_model: botbot-ai/CabraMistral-v3-7b-32k model-index: - name: CabraMistral-v3-7b-32k results: - task: type: text-generation name: Text Generation dataset: name: ENEM Challenge (No Images) type: eduagarcia/enem_challenge split: train args: num_few_shot: 3 metrics: - type: acc value: 58.64 name: accuracy source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BLUEX (No Images) type: eduagarcia-temp/BLUEX_without_images split: train args: num_few_shot: 3 metrics: - type: acc value: 45.62 name: accuracy source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: OAB Exams type: eduagarcia/oab_exams split: train args: num_few_shot: 3 metrics: - type: acc value: 41.46 name: accuracy source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 RTE type: assin2 split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 86.14 name: f1-macro source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 STS type: eduagarcia/portuguese_benchmark split: test args: num_few_shot: 15 metrics: - type: pearson value: 68.06 name: pearson source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: FaQuAD NLI type: ruanchaves/faquad-nli split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 47.46 name: f1-macro source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HateBR Binary type: ruanchaves/hatebr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 70.46 name: f1-macro source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: PT Hate Speech Binary type: hate_speech_portuguese split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 62.39 name: f1-macro source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: tweetSentBR type: eduagarcia/tweetsentbr_fewshot split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 65.71 name: f1-macro source: url: >- https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=botbot-ai/CabraMistral-v3-7b-32k name: Open Portuguese LLM Leaderboard pipeline_tag: text-generation --- # QuantFactory/CabraMistral-v3-7b-32k-GGUF This is quantized version of [botbot-ai/CabraMistral-v3-7b-32k](https://huggingface.co/botbot-ai/CabraMistral-v3-7b-32k) 
created using llama.cpp # Model Description <img src="https://uploads-ssl.webflow.com/65f77c0240ae1c68f8192771/660b1a4d574293d8a1ce48ca_cabra1.png" width="400" height="400"> This model is a fine-tune of [Mistral 7b Instruct 0.3](https://huggingface.co/mistralai/mistral-7b-instruct-v0.3) on the BotBot Cabra 10k dataset. It is optimized for Portuguese. **Check out our other models: [Cabra](https://huggingface.co/collections/botbot-ai/models-6604c2069ceef04f834ba99b).** ## Model Details ### Model: Mistral 7b Instruct 0.3 Mistral-7B-v0.3 is a transformer model with the following architectural choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ### Dataset: Cabra 10k An internal dataset used for fine-tuning; it will be released soon. ### Example
```
<s> [INST] who is Elon Musk? [/INST]Elon Musk é um empreendedor, inventor e capitalista americano. Ele é o fundador, CEO e CTO da SpaceX, CEO da Neuralink e fundador do The Boring Company. Musk também é o proprietário do Twitter.</s>
```
### Training parameters
```
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 3
```
### Framework - Transformers 4.39.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.14.6 - Tokenizers 0.15.2 ### Evals # Open Portuguese LLM Leaderboard Evaluation Results Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/botbot-ai/CabraMistral-v3-7b-32k) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) | Metric | Value | |--------------------------|---------| |Average |**60.66**| |ENEM Challenge (No Images)| 58.64| |BLUEX (No Images) | 45.62| |OAB Exams | 41.46| |Assin2 RTE | 86.14| |Assin2 STS | 68.06| |FaQuAD NLI | 47.46| |HateBR Binary | 70.46| |PT Hate Speech Binary | 62.39| |tweetSentBR | 65.71|
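A short usage sketch, not part of the original card: running one of the quantized files with `llama-cpp-python`, using the `[INST]` prompt format shown in the example above. The file name and the Portuguese question are assumptions.

```python
from llama_cpp import Llama

# Assumes a quantized file from this repo was downloaded locally (hypothetical file name)
llm = Llama(model_path="CabraMistral-v3-7b-32k.Q4_K_M.gguf", n_ctx=4096)

# Prompt follows the [INST] format shown in the card's example
prompt = "<s> [INST] Quem foi Santos Dumont? [/INST]"
out = llm(prompt, max_tokens=200, stop=["</s>"])
print(out["choices"][0]["text"])
```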
FPTAI/vibert-base-cased
FPTAI
"2021-05-19T11:15:49Z"
2,372
9
transformers
[ "transformers", "pytorch", "jax", "bert", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:04Z"
Entry not found
h2oai/h2o-danube-1.8b-sft
h2oai
"2024-04-05T09:44:51Z"
2,372
11
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "conversational", "en", "dataset:Open-Orca/OpenOrca", "dataset:OpenAssistant/oasst2", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:meta-math/MetaMathQA", "arxiv:2401.16818", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-25T20:19:01Z"
--- language: - en library_name: transformers license: apache-2.0 tags: - gpt - llm - large language model - h2o-llmstudio thumbnail: >- https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico datasets: - Open-Orca/OpenOrca - OpenAssistant/oasst2 - HuggingFaceH4/ultrachat_200k - meta-math/MetaMathQA widget: - messages: - role: user content: Why is drinking water so healthy? pipeline_tag: text-generation --- # Model Card ## Summary h2o-danube-1.8b-sft is a chat fine-tuned model by H2O.ai with 1.8 billion parameters. We release three versions of this model: | Model Name | Description | |:-----------------------------------------------------------------------------------|:----------------| | [h2oai/h2o-danube-1.8b-base](https://huggingface.co/h2oai/h2o-danube-1.8b-base) | Base model | | [h2oai/h2o-danube-1.8b-sft](https://huggingface.co/h2oai/h2o-danube-1.8b-sft) | SFT tuned | | [h2oai/h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat) | SFT + DPO tuned | This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio). ## Model Architecture We adjust the Llama 2 architecture for a total of around 1.8b parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We use the original Llama 2 tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 16,384. We incorporate the sliding window attention from mistral with a size of 4,096. The details of the model architecture are: | Hyperparameter | Value | |:----------------|:-------| | n_layers | 24 | | n_heads | 32 | | n_query_groups | 8 | | n_embd | 2560 | | vocab size | 32000 | | sequence length | 16384 | ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed. ```bash pip install transformers==4.36.1 ``` ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="h2oai/h2o-danube-1.8b-sft", torch_dtype=torch.bfloat16, device_map="auto", ) # We use the HF Tokenizer chat template to format each message # https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ {"role": "user", "content": "Why is drinking water so healthy?"}, ] prompt = pipe.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) res = pipe( prompt, max_new_tokens=256, ) print(res[0]["generated_text"]) # <|system|>You are a friendly chatbot</s><|prompt|>Why is drinking water so healthy?</s><|answer|> Drinking water is healthy for several reasons: [...] ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. 
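A brief sketch of the quantized loading described above; it is not part of the original card and assumes `accelerate` and `bitsandbytes` are installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "h2oai/h2o-danube-1.8b-sft"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,   # or load_in_8bit=True
    device_map="auto",   # shards the model across available GPUs
)

messages = [{"role": "user", "content": "Why is drinking water so healthy?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```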
## Model Architecture ``` MistralForCausalLM( (model): MistralModel( (embed_tokens): Embedding(32000, 2560, padding_idx=0) (layers): ModuleList( (0-23): 24 x MistralDecoderLayer( (self_attn): MistralAttention( (q_proj): Linear(in_features=2560, out_features=2560, bias=False) (k_proj): Linear(in_features=2560, out_features=640, bias=False) (v_proj): Linear(in_features=2560, out_features=640, bias=False) (o_proj): Linear(in_features=2560, out_features=2560, bias=False) (rotary_emb): MistralRotaryEmbedding() ) (mlp): MistralMLP( (gate_proj): Linear(in_features=2560, out_features=6912, bias=False) (up_proj): Linear(in_features=2560, out_features=6912, bias=False) (down_proj): Linear(in_features=6912, out_features=2560, bias=False) (act_fn): SiLU() ) (input_layernorm): MistralRMSNorm() (post_attention_layernorm): MistralRMSNorm() ) ) (norm): MistralRMSNorm() ) (lm_head): Linear(in_features=2560, out_features=32000, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF
mradermacher
"2024-06-08T04:31:02Z"
2,372
2
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-07T19:38:05Z"
--- base_model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF/resolve/main/Jamet-8B-L3-MK.V-Blackroot.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
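## Quick start (unofficial example) As an addendum to the Usage section above, here is a minimal, illustrative sketch (not an official recipe from this repo) that downloads one of the quants listed in the table and runs it with llama-cpp-python; the chosen quant, prompt, and `n_ctx` value are placeholders to adapt:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quant files from the table above (Q4_K_M shown here)
model_path = hf_hub_download(
    repo_id="mradermacher/Jamet-8B-L3-MK.V-Blackroot-GGUF",
    filename="Jamet-8B-L3-MK.V-Blackroot.Q4_K_M.gguf",
)

# Load the model; lower n_ctx if memory is tight
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("Write one sentence introducing yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```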
actuaryzhang/product-taxonomy-llava-v1.6-13b
actuaryzhang
"2024-04-17T20:29:16Z"
2,371
0
transformers
[ "transformers", "safetensors", "llava_llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-17T20:26:06Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
facebook/timesformer-hr-finetuned-k600
facebook
"2022-12-12T12:53:13Z"
2,370
3
transformers
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2022-10-07T22:51:20Z"
--- license: "cc-by-nc-4.0" tags: - vision - video-classification --- # TimeSformer (base-sized model, fine-tuned on Kinetics-600) TimeSformer model pre-trained on [Kinetics-600](https://www.deepmind.com/open-source/kinetics). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Tong et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer). Disclaimer: The team releasing TimeSformer did not write a model card for this model so this model card has been written by [fcakyon](https://github.com/fcakyon). ## Intended uses & limitations You can use the raw model for video classification into one of the 600 possible Kinetics-600 labels. ### How to use Here is how to use this model to classify a video: ```python from transformers import AutoImageProcessor, TimesformerForVideoClassification import numpy as np import torch video = list(np.random.randn(16, 3, 448, 448)) processor = AutoImageProcessor.from_pretrained("facebook/timesformer-hr-finetuned-k600") model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-hr-finetuned-k600") inputs = processor(images=video, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) logits = outputs.logits predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#). ### BibTeX entry and citation info ```bibtex @inproceedings{bertasius2021space, title={Is Space-Time Attention All You Need for Video Understanding?}, author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo}, booktitle={International Conference on Machine Learning}, pages={813--824}, year={2021}, organization={PMLR} } ```
hvein/5DhwFYtWCHEVhGHreyYeRwg1NWBHrdqXEzfPyLh5Q4efrCTi_vgg
hvein
"2024-03-05T20:04:20Z"
2,369
0
keras
[ "keras", "region:us" ]
null
"2024-02-07T22:10:35Z"
Entry not found
mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF
mradermacher
"2024-06-17T07:22:54Z"
2,367
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "not-for-all-audiences", "nsfw", "rp", "roleplay", "role-play", "en", "base_model:Casual-Autopsy/L3-Umbral-Mind-RP-v0.6.2-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-16T23:48:58Z"
--- base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.6.2-8B language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - not-for-all-audiences - nsfw - rp - roleplay - role-play --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.6.2-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v0.6.2-8B-GGUF/resolve/main/L3-Umbral-Mind-RP-v0.6.2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
rinna/japanese-gpt-neox-3.6b-instruction-ppo
rinna
"2024-04-03T07:27:03Z"
2,366
69
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "ja", "lm", "nlp", "dataset:Anthropic/hh-rlhf", "arxiv:2203.02155", "arxiv:1707.06347", "arxiv:2404.01657", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-30T01:50:48Z"
--- language: ja thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png tags: - ja - gpt_neox - text-generation - lm - nlp license: mit datasets: - Anthropic/hh-rlhf inference: false --- # japanese-gpt-neox-3.6b-instruction-ppo ![rinna-icon](./rinna.png) # Overview This repository provides a Japanese GPT-NeoX model of 3.6 billion parameters. The model is based on [`rinna/japanese-gpt-neox-3.6b-instruction-sft-v2`](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2) and has been aligned to serve as an instruction-following conversational agent. * **Model architecture** A 36-layer, 2816-hidden-size transformer-based language model. * **RLHF** Following the [OpenAI InstructGPT paper](https://arxiv.org/abs/2203.02155), **Reinforcement Learning from Human Feedback** (RLHF) has been applied to aligning the model's behaviour with input instructions. Particularly, the model has been trained in two stages, i.e. **Supervised Fine-Tuning** (SFT) and [PPO](https://arxiv.org/abs/1707.06347)-based **Reinforcement Learning** (RL). * The first SFT stage produces [`rinna/japanese-gpt-neox-3.6b-instruction-sft-v2`](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2). * The second RL stage produces this model. * **PPO vs. SFT evaluation** We conducted human evaluation and ChatGPT-based automated evaluation on 100 prompts to assess the *performance gain from reinforcement learning*. | [PPO](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo) vs. [SFT](https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2) | win | tie | loss | | :---: | :---: | :---: | :---: | | Human evaluation | **47**% | 30% | 23% | | ChatGPT auto. evaluation | **63**% | 3% | 34% | * **Reinforcement learning** We used [CarperAI/trlx](https://github.com/CarperAI/trlx) and its implementation of the PPO algorithm for the RL stage. The RL data is the subset of the following dataset and has been translated into Japanese. * [Anthropic HH RLHF data](https://huggingface.co/datasets/Anthropic/hh-rlhf) * **Model Series** | Variant | Link | | :-- | :--| | 3.6B PPO | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo | | 3.6B SFT-v2 | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft-v2 | | 3.6B SFT | https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-sft | | 3.6B pretrained | https://huggingface.co/rinna/japanese-gpt-neox-3.6b | * **Contributors** [Tianyu Zhao](https://huggingface.co/tianyuz) and [Kei Sawada](https://huggingface.co/keisawada) # Limitations * We found this verison of PPO model tends to generate repeated text more often than its SFT counterpart, and thus we set `repetition_penalty=1.1` for better generation performance. (*The same generation hyper-parameters are applied to the SFT model in aforementioned evaluation experiments.*) You can also explore other hyperparameter combinations that yield higher generation randomness/diversity for better generation quality, e.g. `temperature=0.9, repetition_penalty=1.0`. # I/O Format A special format has been adopted to construct inputs. * An input prompt is formatted as a conversation between `ユーザー` and `システム`. * Each input utterance consists of (1) its speaker (`"ユーザー"` or `"システム"`), (2) a colon (`":"`), (3) a whitespace (`" "`), and (4) utterance text (e.g. `"世界で一番高い山は?"`). * The input prompt should be ended with `"システム: "` to acknowledge the model to generate a response. 
* Since the model's tokenizer does not recognize `"\n"`, a special newline symbol `"<NL>"` is used instead. * All the newlines in input and output utterances should be replaced with `"<NL>"`. * All the utterances in the input prompt should be separated by `"<NL>"`. Following is an example to construct an input from a conversation. ~~~python prompt = [ { "speaker": "ユーザー", "text": "コンタクトレンズを慣れるにはどうすればよいですか?" }, { "speaker": "システム", "text": "これについて具体的に説明していただけますか?何が難しいのでしょうか?" }, { "speaker": "ユーザー", "text": "目が痛いのです。" }, { "speaker": "システム", "text": "分かりました、コンタクトレンズをつけると目がかゆくなるということですね。思った以上にレンズを外す必要があるでしょうか?" }, { "speaker": "ユーザー", "text": "いえ、レンズは外しませんが、目が赤くなるんです。" } ] prompt = [ f"{uttr['speaker']}: {uttr['text']}" for uttr in prompt ] prompt = "<NL>".join(prompt) prompt = ( prompt + "<NL>" + "システム: " ) print(prompt) # "ユーザー: コンタクトレンズを慣れるにはどうすればよいですか?<NL>システム: これについて具体的に説明していただけますか?何が難しいのでしょうか?<NL>ユーザー: 目が痛いのです。<NL>システム: 分かりました、コンタクトレンズをつけると目がかゆくなるということですね。思った以上にレンズを外す必要があるでしょうか?<NL>ユーザー: いえ、レンズは外しませんが、目が赤くなるんです。<NL>システム: " ~~~ # How to use the model ~~~~python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", use_fast=False) model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo") if torch.cuda.is_available(): model = model.to("cuda") token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), do_sample=True, max_new_tokens=128, temperature=0.7, repetition_penalty=1.1, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0][token_ids.size(1):]) output = output.replace("<NL>", "\n") print(output) """それは、コンタクトレンズが目に合わないために起こることがあります。レンズが目の表面に長時間触れ続けることが原因となることがあります。また、コンタクトレンズが汚れている可能性もあります。コンタクトレンズケースを定期的に洗浄したり、コンタクトレンズを正しくフィットさせるようにしたりすることが役立ちます。</s>""" ~~~~ # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. * The tokenizer has a vocabulary size of 32,000. * It uses sentencepiece's byte fallback feature to decompose unknown text pieces into UTF-8 byte pieces and to avoid producing `<UNK>` tokens. * sentencepiece's `--add_dummy_prefix` option was turned off so that a leading whitespace will not be prepended automatically. ~~~ print(tokenizer.tokenize("吾輩は猫である")) # ['吾', '輩', 'は', '猫', 'である'] # instead of ['▁', '吾', '輩', 'は', '猫', 'である'] as in rinna/japanese-gpt-1b ~~~ * sentencepiece's `--remove_extra_whitespaces` option was turned off so that leading, trailing, and duplicate whitespaces are reserved. ~~~ print(tokenizer.tokenize(" 吾輩は 猫である ")) # ['▁', '▁', '吾', '輩', 'は', '▁', '▁', '猫', 'である', '▁', '▁', '▁'] # instead of ['▁', '吾', '輩', 'は', '▁猫', 'である'] as in rinna/japanese-gpt-1b ~~~ * Don't forget to set `use_fast=False` to make the above features function correctly. 
~~~ good_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b", use_fast=False) bad_tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b") print(good_tokenizer.decode(good_tokenizer.encode("გამარჯობა 吾輩は 猫である "))) # 'გამარჯობა 吾輩は 猫である </s>' print(bad_tokenizer.decode(bad_tokenizer.encode("გამარჯობა 吾輩は 猫である "))) # 'გამარ[UNK]ობა 吾輩は 猫である </s>' ~~~ # How to cite ~~~ @misc{rinna-japanese-gpt-neox-3.6b-instruction-ppo, title = {rinna/japanese-gpt-neox-3.6b-instruction-ppo}, author = {Zhao, Tianyu and Sawada, Kei}, url = {https://huggingface.co/rinna/japanese-gpt-neox-3.6b-instruction-ppo}, } @inproceedings{sawada2024release, title = {Release of Pre-Trained Models for the {J}apanese Language}, author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh}, booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, month = {5}, year = {2024}, url = {https://arxiv.org/abs/2404.01657}, } ~~~ # License [The MIT license](https://opensource.org/licenses/MIT)
Yukang/Llama-2-7b-longlora-32k-ft
Yukang
"2023-09-24T09:40:44Z"
2,366
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2309.12307", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-12T09:46:31Z"
# LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models <font size=6><div align='center' > <a href=http://arxiv.org/abs/2309.12307>**Paper**</a> | <a href="https://huggingface.co/Yukang">**Models**</a> | <a href="https://github.com/dvlab-research/LongLoRA">**Code**</a> </div></font> **LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models [[Paper](http://arxiv.org/abs/2309.12307)]** <br /> [Yukang Chen](https://scholar.google.com/citations?user=6p0ygKUAAAAJ&hl=en), [Shengju Qian](https://scholar.google.com/citations?user=QNnWmasAAAAJ), [Haotian Tang](https://scholar.google.com/citations?user=WxL13BAAAAAJ&hl), [Xin Lai](https://scholar.google.com/citations?user=tqNDPA4AAAAJ&hl=zh-CN), [Zhijian Liu](https://scholar.google.com/citations?user=3coYSTUAAAAJ&hl=en), [Song Han](https://scholar.google.com/citations?user=E0iCaa4AAAAJ&hl=zh-CN), [Jiaya Jia](https://scholar.google.com/citations?user=XPAkzTEAAAAJ&hl=en)<br /> ## Abstract We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs), with limited computation cost. Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources. In this paper, we speed up the context extension of LLMs in two aspects. On the one hand, although dense global attention is needed during inference, fine-tuning the model can be effectively and efficiently done by sparse local attention. The proposed shift short attention effectively enables context extension, leading to non-trivial computation saving with similar performance to fine-tuning with vanilla attention. On the other hand, we find that LoRA for context extension works well under the premise of trainable embedding and normalization. LongLoRA demonstrates strong empirical results on various tasks on LLaMA2 models from 7B/13B to 70B. LongLoRA adopts LLaMA2 7B from 4k context to 100k, or LLaMA2 70B to 32k on a single 8x A100 machine. LongLoRA extends models' context while retaining their original architectures, and is compatible with most existing techniques, like FlashAttention-2. In addition, to make LongLoRA practical, we collect a dataset, LongQA, for supervised fine-tuning. It contains more than 3k long context question-answer pairs. For more details, please refer to the [paper](http://arxiv.org/abs/2309.12307). ## Highlights **LongLoRA** speed up the context extension of pre-trained large language models in both attention-level and weight-level. 1. The proposed shifted short attention is easy to implement, compatible with Flash-Attention, and not required during inference. 2. We release all our models, including models from 7B to 70B, context length from 8k to 100k, including [LLaMA2-LongLoRA-7B-100k](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft), [LLaMA2-LongLoRA-13B-64k](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k), and [LLaMA2-LongLoRA-70B-32k](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k). 3. We build up a long-context QA dataset, LongQA, for supervised fine-tuning (SFT). We release 13B and 70B 32k models with SFT, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft). We will further release the dataset next week. 
## Released models ### Models with supervised fine-tuning | Model | Size | Context | Train | Link | |:----------------------------------|------|---------|---------|-------------------------------------------------------------------------| | Llama-2-13b-chat-longlora-32k-sft | 13B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) | | Llama-2-70b-chat-longlora-32k-sft | 70B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft) | ### Models with context extension via fully fine-tuning | Model | Size | Context | Train | Link | |:----------------------------|------|---------|-------|-------------------------------------------------------------------| | Llama-2-7b-longlora-8k-ft | 7B | 8192 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k-ft) | | Llama-2-7b-longlora-16k-ft | 7B | 16384 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k-ft) | | Llama-2-7b-longlora-32k-ft | 7B | 32768 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k-ft) | | Llama-2-7b-longlora-100k-ft | 7B | 100000 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft) | | Llama-2-13b-longlora-8k-ft | 13B | 8192 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k-ft) | | Llama-2-13b-longlora-16k-ft | 13B | 16384 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k-ft) | | Llama-2-13b-longlora-32k-ft | 13B | 32768 | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k-ft) | ### Models with context extension via improved LoRA fine-tuning | Model | Size | Context | Train | Link | |:----------------------------|------|---------|-------|-------------------------------------------------------------------| | Llama-2-7b-longlora-8k | 7B | 8192 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k) | | Llama-2-7b-longlora-16k | 7B | 16384 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k) | | Llama-2-7b-longlora-32k | 7B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k) | | Llama-2-13b-longlora-8k | 13B | 8192 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k) | | Llama-2-13b-longlora-16k | 13B | 16384 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k) | | Llama-2-13b-longlora-32k | 13B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k) | | Llama-2-13b-longlora-64k | 13B | 65536 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k) | | Llama-2-70b-longlora-32k | 70B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k) | | Llama-2-70b-chat-longlora-32k | 70B | 32768 | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k) | ## Citation If you find this project useful in your research, please consider citing: ``` @article{longlora, title={LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models}, author={Yukang Chen and Shengju Qian and Haotian Tang and Xin Lai and Zhijian Liu and Song Han and Jiaya Jia}, journal={arXiv:2309.12307}, year={2023} } ``` ## Acknowledgement - This work is built upon the [LLaMA2](https://ai.meta.com/llama) as the pre-trained models. - This work is based on [DeepSpeed](https://github.com/microsoft/DeepSpeed), [peft](https://github.com/huggingface/peft), and [Flash-Attention2](https://github.com/Dao-AILab/flash-attention) for acceleration. 
- The perplexity evaluation code is modified upon [Landmark Attention](https://github.com/epfml/landmark-attention). - We use [LongChat](https://github.com/DachengLi1/LongChat) for the retrieval evaluation.
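## Quick usage sketch (unofficial) The card above does not include inference code. The snippet below is our illustrative sketch, not from the LongLoRA authors: it loads this fully fine-tuned 32k checkpoint like any Llama-2 model with transformers. The prompt layout is an assumption, and very long inputs may additionally require the position-interpolation/inference settings documented in the LongLoRA repository:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Yukang/Llama-2-7b-longlora-32k-ft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Fully fine-tuned checkpoint: no LoRA adapter to attach
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

long_context = "..."  # placeholder: a long document, up to roughly 32k tokens
prompt = long_context + "\n\nQuestion: Summarize the document above.\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```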
zengu/benkei
zengu
"2023-05-14T16:59:59Z"
2,364
0
diffusers
[ "diffusers", "safetensors", "region:us" ]
null
"2023-03-16T16:12:32Z"
Entry not found
mradermacher/SauerkrautLM-1.5b-i1-GGUF
mradermacher
"2024-06-13T16:30:35Z"
2,364
0
transformers
[ "transformers", "gguf", "spectrum", "continuous pretraining", "sft", "dpo", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-1.5b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-13T15:56:08Z"
--- base_model: VAGOsolutions/SauerkrautLM-1.5b language: - de - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - spectrum - continuous pretraining - sft - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/SauerkrautLM-1.5b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality | | 
[GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/SauerkrautLM-1.5b-i1-GGUF/resolve/main/SauerkrautLM-1.5b.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
MaziyarPanahi/Topsecret-GGUF
MaziyarPanahi
"2024-06-15T23:12:31Z"
2,364
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B", "base_model:mergekit-community/mergekit-slerp-ebgdloh", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/Topsecret" ]
text-generation
"2024-06-15T22:59:07Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B - base_model:mergekit-community/mergekit-slerp-ebgdloh - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: Topsecret-GGUF base_model: mergekit-community/Topsecret inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Topsecret-GGUF](https://huggingface.co/MaziyarPanahi/Topsecret-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/Topsecret](https://huggingface.co/mergekit-community/Topsecret) ## Description [MaziyarPanahi/Topsecret-GGUF](https://huggingface.co/MaziyarPanahi/Topsecret-GGUF) contains GGUF format model files for [mergekit-community/Topsecret](https://huggingface.co/mergekit-community/Topsecret). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
grimjim/llama-3-Nephilim-v1-8B-GGUF
grimjim
"2024-06-21T23:20:48Z"
2,364
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0", "base_model:mlabonne/NeuralDaredevil-8B-abliterated", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-21T15:43:31Z"
--- base_model: - WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 - mlabonne/NeuralDaredevil-8B-abliterated library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 pipeline_tag: text-generation --- # llama-3-Nephilim-v1-8B-GGUF This repo contains select GGUF quants of [grimjim/llama-3-Nephilim-v1-8B](https://huggingface.co/grimjim/llama-3-Nephilim-v1-8B). This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). Here we experiment with SLERP merger with the second model at very low weight (0.001) to modulate the output of the base model. The base model was assembled to achieve high MMLU while avoiding refusals, while the additional model was trained specifically (apparently as a copilot) for offensive and defensive cybersecurity. Though neither model targeted roleplay as a use case, the resulting intelligence, acuity, and text generation of the merge is of interest. The merge is aggressively creative, within bounds. Tested with temperature=1.0-1.2 and minP=0.01 along with a custom Instruct prompt geared toward reducing refusals during roleplay text generation without compromising overall model safety: [Llama 3 Instruct Direct](https://huggingface.co/debased-ai/SillyTavern-settings/tree/main/advanced_formatting/instruct_mode). Care should be taken when using this model, as it is possible that harmful outputs could be generated. Given that this model is derivative, responsible use is further mandated by the WhiteRabbitNeo Usage Restrictions Extension to the Llama-3 License. This model is further subject to CC-BY-NC-4.0 by default, meaning that commercial use is restricted, barring an alternative licensing agreement. Built with Meta Llama 3. # WhiteRabbitNeo Extension to Llama-3 Licence: Usage Restrictions ``` You agree not to use the Model or Derivatives of the Model: - In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party; - For military use in any way; - For the purpose of exploiting, harming or attempting to exploit or harm minors in any way; - To generate or disseminate verifiably false information and/or content with the purpose of harming others; - To generate or disseminate inappropriate content subject to applicable regulatory requirements; - To generate or disseminate personal identifiable information without due authorization or for unreasonable use; - To defame, disparage or otherwise harass others; - For fully automated decision making that adversely impacts an individual's legal rights or otherwise creates or modifies a binding, enforceable obligation; - For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics; - To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm; - For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories. ``` ## Merge Details ### Merge Method This model was merged using the SLERP merge method. 
### Models Merged The following models were included in the merge: * [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) * [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: mlabonne/NeuralDaredevil-8B-abliterated layer_range: [0,32] - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0 layer_range: [0,32] merge_method: slerp base_model: mlabonne/NeuralDaredevil-8B-abliterated parameters: t: - value: 0.001 dtype: bfloat16 ```
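## Example with the suggested sampling settings (unofficial) For reference only, a small llama-cpp-python sketch that wires in the sampling settings recommended above; the model path is a placeholder for whichever GGUF quant you downloaded from this repo, and `min_p` requires a llama-cpp-python build recent enough to expose that argument:
```python
from llama_cpp import Llama

# Placeholder path: point this at the GGUF file downloaded from this repo
llm = Llama(model_path="llama-3-Nephilim-v1-8B.Q8_0.gguf", n_ctx=8192)

out = llm(
    "Describe the scene as the airship crests the storm front.",
    max_tokens=128,
    temperature=1.1,  # card recommends temperature 1.0-1.2
    min_p=0.01,       # card recommends minP 0.01
)
print(out["choices"][0]["text"])
```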
timm/levit_128s.fb_dist_in1k
timm
"2024-02-10T23:30:36Z"
2,362
1
timm
[ "timm", "pytorch", "image-classification", "dataset:imagenet-1k", "arxiv:2104.01136", "license:apache-2.0", "region:us" ]
image-classification
"2023-02-03T21:13:22Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for levit_128s.fb_dist_in1k A LeViT image classification model using convolutional mode (using nn.Conv2d and nn.BatchNorm2d). Pretrained on ImageNet-1k using distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 7.8 - GMACs: 0.3 - Activations (M): 1.9 - Image size: 224 x 224 - **Papers:** - LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136 - **Original:** https://github.com/facebookresearch/LeViT - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('levit_128s.fb_dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'levit_128s.fb_dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (i.e. a (batch_size, num_features, H, W) tensor) output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison |model |top1 |top5 |param_count|img_size| |-----------------------------------|------|------|-----------|--------| |levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 | |levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 | |levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 | |levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 | |levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 | |levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 | |levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 | |levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 | ## Citation ```bibtex @InProceedings{Graham_2021_ICCV, author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs}, title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {12259-12269} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year =
{2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/rwightman/pytorch-image-models}} } ```
haoranxu/ALMA-7B
haoranxu
"2024-01-19T05:19:47Z"
2,362
21
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2309.11674", "arxiv:2401.08417", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-17T17:14:42Z"
--- license: mit --- **ALMA** (**A**dvanced **L**anguage **M**odel-based tr**A**nslator) is an LLM-based translation model, which adopts a new translation model paradigm: it begins with fine-tuning on monolingual data and is further optimized using high-quality parallel data. This two-step fine-tuning process ensures strong translation performance. Please find more details in our [paper](https://arxiv.org/abs/2309.11674). ``` @misc{xu2023paradigm, title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla}, year={2023}, eprint={2309.11674}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` **[ALMA-R](https://arxiv.org/abs/2401.08417) (NEW!) is released now!** ALMA-R builds upon ALMA models, with further LoRA fine-tuning with our proposed **Contrastive Preference Optimization (CPO)** as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 or WMT winners! ``` @misc{xu2024contrastive, title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}, author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim}, year={2024}, eprint={2401.08417}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We release six translation models presented in the paper: - **ALMA-7B**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data - **ALMA-7B-LoRA**: Full-weight Fine-tune LLaMA-2-7B on 20B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization. - **ALMA-13B**: Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **Full-weight** fine-tune on human-written parallel data - **ALMA-13B-LoRA** (Our best system): Full-weight Fine-tune LLaMA-2-13B on 12B monolingual tokens and then **LoRA** fine-tune on human-written parallel data - **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization. Model checkpoints are released at huggingface: | Models | Base Model Link | LoRA Link | |:-------------:|:---------------:|:---------:| | ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - | | ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) | | **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - | | ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - | | ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) | | **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - | **Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models.
They only experience stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models.** Datasets used by ALMA and ALMA-R are also released at huggingface now (NEW!) | Datasets | Train / Validation| Test | |:-------------:|:---------------:|:---------:| | Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) | | Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) | A quick start to using the ALMA-13B-LoRA system for translation. An example of translating "我爱机器翻译。" into English: ``` import torch from peft import PeftModel from transformers import AutoModelForCausalLM from transformers import LlamaTokenizer # Load base model and LoRA weights model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto") model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-Pretrain-LoRA") tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left') # Add the source sentence into the prompt template prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:" input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda() # Translation with torch.no_grad(): generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9) outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(outputs) ``` Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
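For this repository's own checkpoint, the full-weight ALMA-7B, the workflow above simplifies because no PEFT adapter is involved. The following is an illustrative sketch based on the example above (same prompt template and generation settings; a CUDA device is assumed):
```
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

# ALMA-7B is a full-weight checkpoint, so it loads directly without a LoRA adapter
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-7B", torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-7B", padding_side='left')

# Same prompt template as in the ALMA-13B-LoRA example
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```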
nvidia/Llama3-ChatQA-1.5-70B
nvidia
"2024-05-24T17:32:05Z"
2,362
301
transformers
[ "transformers", "safetensors", "llama", "text-generation", "nvidia", "chatqa-1.5", "chatqa", "llama-3", "pytorch", "conversational", "en", "arxiv:2401.10225", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-28T21:44:57Z"
--- license: llama3 language: - en pipeline_tag: text-generation tags: - nvidia - chatqa-1.5 - chatqa - llama-3 - pytorch --- ## Model Details We introduce Llama3-ChatQA-1.5, which excels at conversational question answering (QA) and retrieval-augmented generation (RAG). Llama3-ChatQA-1.5 is developed using an improved training recipe from [ChatQA paper](https://arxiv.org/pdf/2401.10225), and it is built on top of [Llama-3 base model](https://huggingface.co/meta-llama/Meta-Llama-3-8B). Specifically, we incorporate more conversational QA data to enhance its tabular and arithmetic calculation capability. Llama3-ChatQA-1.5 has two variants: Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B. Both models were originally trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), we converted the checkpoints to Hugging Face format. **For more information about ChatQA, check the [website](https://chatqa-project.github.io/)!** ## Other Resources [Llama3-ChatQA-1.5-8B](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-8B) &ensp; [Evaluation Data](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) &ensp; [Training Data](https://huggingface.co/datasets/nvidia/ChatQA-Training-Data) &ensp; [Retriever](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) &ensp; [Website](https://chatqa-project.github.io/) &ensp; [Paper](https://arxiv.org/pdf/2401.10225) ## Benchmark Results Results in [ChatRAG Bench](https://huggingface.co/datasets/nvidia/ChatRAG-Bench) are as follows: | | ChatQA-1.0-7B | Command-R-Plus | Llama3-instruct-70b | GPT-4-0613 | GPT-4-Turbo | ChatQA-1.0-70B | ChatQA-1.5-8B | ChatQA-1.5-70B | | -- |:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:| | Doc2Dial | 37.88 | 33.51 | 37.88 | 34.16 | 35.35 | 38.90 | 39.33 | 41.26 | | QuAC | 29.69 | 34.16 | 36.96 | 40.29 | 40.10 | 41.82 | 39.73 | 38.82 | | QReCC | 46.97 | 49.77 | 51.34 | 52.01 | 51.46 | 48.05 | 49.03 | 51.40 | | CoQA | 76.61 | 69.71 | 76.98 | 77.42 | 77.73 | 78.57 | 76.46 | 78.44 | | DoQA | 41.57 | 40.67 | 41.24 | 43.39 | 41.60 | 51.94 | 49.60 | 50.67 | | ConvFinQA | 51.61 | 71.21 | 76.6 | 81.28 | 84.16 | 73.69 | 78.46 | 81.88 | | SQA | 61.87 | 74.07 | 69.61 | 79.21 | 79.98 | 69.14 | 73.28 | 83.82 | | TopioCQA | 45.45 | 53.77 | 49.72 | 45.09 | 48.32 | 50.98 | 49.96 | 55.63 | | HybriDial* | 54.51 | 46.7 | 48.59 | 49.81 | 47.86 | 56.44 | 65.76 | 68.27 | | INSCIT | 30.96 | 35.76 | 36.23 | 36.34 | 33.75 | 31.90 | 30.10 | 32.31 | | Average (all) | 47.71 | 50.93 | 52.52 | 53.90 | 54.03 | 54.14 | 55.17 | 58.25 | | Average (exclude HybriDial) | 46.96 | 51.40 | 52.95 | 54.35 | 54.72 | 53.89 | 53.99 | 57.14 | Note that ChatQA-1.5 is built based on Llama-3 base model, and ChatQA-1.0 is built based on Llama-2 base model. ChatQA-1.5 models use HybriDial training dataset. To ensure fair comparison, we also compare average scores excluding HybriDial. The data and evaluation scripts for ChatRAG Bench can be found [here](https://huggingface.co/datasets/nvidia/ChatRAG-Bench). ## Prompt Format **We highly recommend that you use the prompt format we provide, as follows:** ### when context is available <pre> System: {System} {Context} User: {Question} Assistant: {Response} User: {Question} Assistant: </pre> ### when context is not available <pre> System: {System} User: {Question} Assistant: {Response} User: {Question} Assistant: </pre> **The content of the system's turn (i.e., {System}) for both scenarios is as follows:** <pre> This is a chat between a user and an artificial intelligence assistant. 
The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context. </pre> **Note that our ChatQA-1.5 models are optimized for the capability with context, e.g., over documents or retrieved context.** ## How to use ### take the whole document as context This can be applied to the scenario where the whole document can be fitted into the model, so that there is no need to run retrieval over the document. ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "nvidia/Llama3-ChatQA-1.5-70B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") messages = [ {"role": "user", "content": "what is the percentage change of the net income from Q4 FY23 to Q4 FY24?"} ] document = """NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago.\nFor the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.\nQ4 Fiscal 2024 Summary\nGAAP\n| $ in millions, except earnings per share | Q4 FY24 | Q3 FY24 | Q4 FY23 | Q/Q | Y/Y |\n| Revenue | $22,103 | $18,120 | $6,051 | Up 22% | Up 265% |\n| Gross margin | 76.0% | 74.0% | 63.3% | Up 2.0 pts | Up 12.7 pts |\n| Operating expenses | $3,176 | $2,983 | $2,576 | Up 6% | Up 23% |\n| Operating income | $13,615 | $10,417 | $1,257 | Up 31% | Up 983% |\n| Net income | $12,285 | $9,243 | $1,414 | Up 33% | Up 769% |\n| Diluted earnings per share | $4.93 | $3.71 | $0.57 | Up 33% | Up 765% |""" def get_formatted_input(messages, context): system = "System: This is a chat between a user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions based on the context. The assistant should also indicate when the answer cannot be found in the context." instruction = "Please give a full and complete answer for the question." for item in messages: if item['role'] == "user": ## only apply this instruction for the first user turn item['content'] = instruction + " " + item['content'] break conversation = '\n\n'.join(["User: " + item["content"] if item["role"] == "user" else "Assistant: " + item["content"] for item in messages]) + "\n\nAssistant:" formatted_input = system + "\n\n" + context + "\n\n" + conversation return formatted_input formatted_input = get_formatted_input(messages, document) tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators) response = outputs[0][tokenized_prompt.input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### run retrieval to get top-n chunks as context This can be applied to the scenario when the document is very long, so that it is necessary to run retrieval. Here, we use our [Dragon-multiturn](https://huggingface.co/nvidia/dragon-multiturn-query-encoder) retriever which can handle conversatinoal query. 
In addition, we provide a few [documents](https://huggingface.co/nvidia/Llama3-ChatQA-1.5-70B/tree/main/docs) for users to play with. ```python from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModel import torch import json ## load ChatQA-1.5 tokenizer and model model_id = "nvidia/Llama3-ChatQA-1.5-70B" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto") ## load retriever tokenizer and model retriever_tokenizer = AutoTokenizer.from_pretrained('nvidia/dragon-multiturn-query-encoder') query_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-query-encoder') context_encoder = AutoModel.from_pretrained('nvidia/dragon-multiturn-context-encoder') ## prepare documents, we take landrover car manual document that we provide as an example chunk_list = json.load(open("docs.json"))['landrover'] messages = [ {"role": "user", "content": "how to connect the bluetooth in the car?"} ] ### running retrieval ## convert query into a format as follows: ## user: {user}\nagent: {agent}\nuser: {user} formatted_query_for_retriever = '\n'.join([turn['role'] + ": " + turn['content'] for turn in messages]).strip() query_input = retriever_tokenizer(formatted_query_for_retriever, return_tensors='pt') ctx_input = retriever_tokenizer(chunk_list, padding=True, truncation=True, max_length=512, return_tensors='pt') query_emb = query_encoder(**query_input).last_hidden_state[:, 0, :] ctx_emb = context_encoder(**ctx_input).last_hidden_state[:, 0, :] ## Compute similarity scores using dot product and rank the similarity similarities = query_emb.matmul(ctx_emb.transpose(0, 1)) # (1, num_ctx) ranked_results = torch.argsort(similarities, dim=-1, descending=True) # (1, num_ctx) ## get top-n chunks (n=5) retrieved_chunks = [chunk_list[idx] for idx in ranked_results.tolist()[0][:5]] context = "\n\n".join(retrieved_chunks) ### running text generation formatted_input = get_formatted_input(messages, context) tokenized_prompt = tokenizer(tokenizer.bos_token + formatted_input, return_tensors="pt").to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate(input_ids=tokenized_prompt.input_ids, attention_mask=tokenized_prompt.attention_mask, max_new_tokens=128, eos_token_id=terminators) response = outputs[0][tokenized_prompt.input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ## Correspondence to Zihan Liu ([email protected]), Wei Ping ([email protected]) ## Citation <pre> @article{liu2024chatqa, title={ChatQA: Surpassing GPT-4 on Conversational QA and RAG}, author={Liu, Zihan and Ping, Wei and Roy, Rajarshi and Xu, Peng and Lee, Chankyu and Shoeybi, Mohammad and Catanzaro, Bryan}, journal={arXiv preprint arXiv:2401.10225}, year={2024}} </pre> ## License The use of this model is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)
mradermacher/T-850-8B-GGUF
mradermacher
"2024-06-18T13:42:36Z"
2,362
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "dataset:MinervaAI/Aesir-Preview", "base_model:ChaoticNeutrals/T-850-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T12:50:11Z"
--- base_model: ChaoticNeutrals/T-850-8B datasets: - MinervaAI/Aesir-Preview language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ChaoticNeutrals/T-850-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/T-850-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/T-850-8B-GGUF/resolve/main/T-850-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
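## Example (Python)

As a concrete starting point, here is one way these quants can be loaded locally with `llama-cpp-python`. The choice of quant file, context length, GPU-offload setting, and prompt below are illustrative assumptions, not requirements of this repo.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q4_K_M is a reasonable default).
gguf_path = hf_hub_download(
    repo_id="mradermacher/T-850-8B-GGUF",
    filename="T-850-8B.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=8192,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if available, or 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```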
cerebras/Cerebras-GPT-6.7B
cerebras
"2023-11-22T21:48:55Z"
2,361
65
transformers
[ "transformers", "pytorch", "gpt2", "causal-lm", "text-generation", "en", "dataset:the_pile", "arxiv:2304.03208", "arxiv:2203.15556", "arxiv:2101.00027", "license:apache-2.0", "text-generation-inference", "region:us" ]
text-generation
"2023-03-20T20:45:13Z"
--- language: - en inference: false tags: - pytorch - causal-lm license: apache-2.0 datasets: - the_pile pipeline_tag: text-generation --- # Cerebras-GPT 6.7B Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)! ## Model Description The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face. The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models. All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal. These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism. Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo). ## Model Details * Developed by: [Cerebras Systems](https://www.cerebras.net/) * License: Apache 2.0 * Model type: Transformer-based Language Model * Architecture: GPT-3 style architecture * Data set: The Pile * Tokenizer: Byte Pair Encoding * Vocabulary Size: 50257 * Sequence Length: 2048 * Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models) * Positional Encoding: Learned * Language: English * Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use. **Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu). 
This is the standard parameterization version of Cerebras-GPT with **6.7B** parameters Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt) <br><br> | Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) | |---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------| | Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K | | Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K | | Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K | | Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M | | Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M | | Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M | | Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 &rarr; 1080 | 1.47M &rarr; 2.21M | <br><br> ## Quickstart This model can be easily loaded using the AutoModelForCausalLM functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-6.7B") model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-6.7B") text = "Generative AI is " ``` And can be used with Hugging Face Pipelines ```python from transformers import pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0] print(generated_text['generated_text']) ``` or with `model.generate()` ```python inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50, early_stopping=True, no_repeat_ngram_size=2) text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(text_output[0]) ``` <br><br> ## Training data Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther. We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper. Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set. <br><br> ## Training procedure We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or are the same shape as GPT-3 models. Learning rate warmed up for 375M tokens (1500 steps for 111M and 256M models) and 10x cosine decayed. No dropout was used and weight decay was set to 0.1. All models are trained with MSL of 2048. All models were trained to Chinchilla point: 20 tokens per model parameter. 
Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for details. <br> Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops ------------ | -------------- | ---------- | --------------- | ------ | -------------------- | ----- 111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18 256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19 590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19 1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20 2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21 6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21 13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22 <br><br> ## Evaluations We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well. We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper. #### 0-shot Evaluation | Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average | | ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ | | Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 | | Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 | | Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 | | Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 | | Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 | | Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 | | Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 | #### 5-shot Evaluation | Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | | -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- | | Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 | | Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 | | Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 | | Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 | | Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 | | Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 | | Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 | <br><br> ## Uses and Limitations ### Intended Use The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. 
Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely.

You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications.

Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper.

### Out of Scope Use

Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks.

Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods.

### Risk, Bias, Ethical Considerations

* **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content, etc. Please refer to the Pile dataset references.
* **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life.
* **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information.
* **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT.

<br><br>

## Acknowledgements

We are thankful to all Cerebras engineers, past and present, who made this work possible.
OFA-Sys/chinese-clip-vit-large-patch14-336px
OFA-Sys
"2022-12-09T06:10:57Z"
2,360
14
transformers
[ "transformers", "pytorch", "chinese_clip", "zero-shot-image-classification", "vision", "arxiv:2211.01335", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
"2022-11-09T09:40:25Z"
--- tags: - vision widget: - src: https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16/resolve/main/festival.jpg candidate_labels: 灯笼, 鞭炮, 对联 example_title: festival - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: 音乐表演, 体育运动 example_title: cat & dog - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg candidate_labels: 梅西, C罗, 马奎尔 example_title: football --- # Chinese-CLIP-ViT-Large-Patch14-336px ## Introduction This is the large-version of the Chinese CLIP, with ViT-L/14@336px as the image encoder and RoBERTa-wwm-base as the text encoder. Chinese CLIP is a simple implementation of CLIP on a large-scale dataset of around 200 million Chinese image-text pairs. For more details, please refer to our technical report https://arxiv.org/abs/2211.01335 and our official github repo https://github.com/OFA-Sys/Chinese-CLIP (Welcome to star! 🔥🔥) ## Use with the official API We provide a simple code snippet to show how to use the API of Chinese-CLIP to compute the image & text embeddings and similarities. ```python from PIL import Image import requests from transformers import ChineseCLIPProcessor, ChineseCLIPModel model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14-336px") processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-large-patch14-336px") url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg" image = Image.open(requests.get(url, stream=True).raw) # Squirtle, Bulbasaur, Charmander, Pikachu in English texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"] # compute image feature inputs = processor(images=image, return_tensors="pt") image_features = model.get_image_features(**inputs) image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize # compute text features inputs = processor(text=texts, padding=True, return_tensors="pt") text_features = model.get_text_features(**inputs) text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize # compute image-text similarity scores inputs = processor(text=texts, images=image, return_tensors="pt", padding=True) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # probs: [[0.0219, 0.0316, 0.0043, 0.9423]] ``` However, if you are not satisfied with only using the API, feel free to check our github repo https://github.com/OFA-Sys/Chinese-CLIP for more details about training and inference. 
<br><br> ## Results **MUGE Text-to-Image Retrieval**: <table border="1" width="100%"> <tr align="center"> <th>Setup</th><th colspan="4">Zero-shot</th><th colspan="4">Finetune</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td><td>R@1</td><td>R@5</td><td>R@10</td><td>MR</td> </tr> <tr align="center"> <td width="120%">Wukong</td><td>42.7</td><td>69.0</td><td>78.0</td><td>63.2</td><td>52.7</td><td>77.9</td><td>85.6</td><td>72.1</td> </tr> <tr align="center"> <td width="120%">R2D2</td><td>49.5</td><td>75.7</td><td>83.2</td><td>69.5</td><td>60.1</td><td>82.9</td><td>89.4</td><td>77.5</td> </tr> <tr align="center"> <td width="120%">CN-CLIP</td><td>63.0</td><td>84.1</td><td>89.2</td><td>78.8</td><td>68.9</td><td>88.7</td><td>93.1</td><td>83.6</td> </tr> </table> <br> **Flickr30K-CN Retrieval**: <table border="1" width="120%"> <tr align="center"> <th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th> </tr> <tr align="center"> <th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">Wukong</td><td>51.7</td><td>78.9</td><td>86.3</td><td>77.4</td><td>94.5</td><td>97.0</td><td>76.1</td><td>94.8</td><td>97.5</td><td>92.7</td><td>99.1</td><td>99.6</td> </tr> <tr align="center"> <td width="120%">R2D2</td><td>60.9</td><td>86.8</td><td>92.7</td><td>84.4</td><td>96.7</td><td>98.4</td><td>77.6</td><td>96.7</td><td>98.9</td><td>95.6</td><td>99.8</td><td>100.0</td> </tr> <tr align="center"> <td width="120%">CN-CLIP</td><td>71.2</td><td>91.4</td><td>95.5</td><td>83.8</td><td>96.9</td><td>98.6</td><td>81.6</td><td>97.5</td><td>98.8</td><td>95.3</td><td>99.7</td><td>100.0</td> </tr> </table> <br> **COCO-CN Retrieval**: <table border="1" width="100%"> <tr align="center"> <th>Task</th><th colspan="6">Text-to-Image</th><th colspan="6">Image-to-Text</th> </tr> <tr align="center"> <th>Setup</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th><th colspan="3">Zero-shot</th><th colspan="3">Finetune</th> </tr> <tr align="center"> <td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td> </tr> <tr align="center"> <td width="120%">Wukong</td><td>53.4</td><td>80.2</td><td>90.1</td><td>74.0</td><td>94.4</td><td>98.1</td><td>55.2</td><td>81.0</td><td>90.6</td><td>73.3</td><td>94.0</td><td>98.0</td> </tr> <tr align="center"> <td width="120%">R2D2</td><td>56.4</td><td>85.0</td><td>93.1</td><td>79.1</td><td>96.5</td><td>98.9</td><td>63.3</td><td>89.3</td><td>95.7</td><td>79.3</td><td>97.1</td><td>98.7</td> </tr> <tr align="center"> <td width="120%">CN-CLIP</td><td>69.2</td><td>89.9</td><td>96.1</td><td>81.5</td><td>96.9</td><td>99.1</td><td>63.0</td><td>86.6</td><td>92.9</td><td>83.5</td><td>97.3</td><td>99.2</td> </tr> </table> <br> **Zero-shot Image Classification**: <table border="1" width="100%"> <tr align="center"> <th>Task</th><th>CIFAR10</th><th>CIFAR100</th><th>DTD</th><th>EuroSAT</th><th>FER</th><th>FGVC</th><th>KITTI</th><th>MNIST</th><th>PC</th><th>VOC</th> </tr> <tr align="center"> <td 
width="150%">GIT</td><td>88.5</td><td>61.1</td><td>42.9</td><td>43.4</td><td>41.4</td><td>6.7</td><td>22.1</td><td>68.9</td><td>50.0</td><td>80.2</td> </tr> <tr align="center"> <td width="150%">ALIGN</td><td>94.9</td><td>76.8</td><td>66.1</td><td>52.1</td><td>50.8</td><td>25.0</td><td>41.2</td><td>74.0</td><td>55.2</td><td>83.0</td> </tr> <tr align="center"> <td width="150%">CLIP</td><td>94.9</td><td>77.0</td><td>56.0</td><td>63.0</td><td>48.3</td><td>33.3</td><td>11.5</td><td>79.0</td><td>62.3</td><td>84.0</td> </tr> <tr align="center"> <td width="150%">Wukong</td><td>95.4</td><td>77.1</td><td>40.9</td><td>50.3</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td> </tr> <tr align="center"> <td width="150%">CN-CLIP</td><td>96.0</td><td>79.7</td><td>51.2</td><td>52.0</td><td>55.1</td><td>26.2</td><td>49.9</td><td>79.4</td><td>63.5</td><td>84.9</td> </tr> </table> <br> ## Citation If you find Chinese CLIP helpful, feel free to cite our paper. Thanks for your support! ``` @article{chinese-clip, title={Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese}, author={Yang, An and Pan, Junshu and Lin, Junyang and Men, Rui and Zhang, Yichang and Zhou, Jingren and Zhou, Chang}, journal={arXiv preprint arXiv:2211.01335}, year={2022} } ``` <br>
dbmdz/electra-base-italian-xxl-cased-discriminator
dbmdz
"2020-12-11T21:37:19Z"
2,357
2
transformers
[ "transformers", "pytorch", "electra", "pretraining", "it", "dataset:wikipedia", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
---
language: it
license: mit
datasets:
- wikipedia
---

# 🤗 + 📚 dbmdz BERT and ELECTRA models

In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources Italian BERT and ELECTRA models 🎉

# Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the [OPUS corpora](http://opus.nlpl.eu/) collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster compared to spacy). Our cased and uncased models are trained with an initial sequence length of 512 subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the [OSCAR corpus](https://traces1.inria.fr/oscar/). Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch between the "real" vocab size of 31102 and the vocab size specified in `config.json`. However, the model is working and all evaluations were done under those circumstances. See [this issue](https://github.com/dbmdz/berts/issues/7) for more information.

The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much follow the ELECTRA training procedure used for [BERTurk](https://github.com/stefan-it/turkish-bert/tree/master/electra).

## Model weights

Currently only PyTorch-[Transformers](https://github.com/huggingface/transformers) compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

| Model | Downloads
| ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------
| `dbmdz/bert-base-italian-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-cased/vocab.txt)
| `dbmdz/bert-base-italian-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-uncased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-cased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-cased/vocab.txt)
| `dbmdz/bert-base-italian-xxl-uncased` | [`config.json`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/bert-base-italian-xxl-uncased/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-discriminator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-discriminator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-discriminator/vocab.txt)
| `dbmdz/electra-base-italian-xxl-cased-generator` | [`config.json`](https://s3.amazonaws.com/models.huggingface.co/bert/dbmdz/electra-base-italian-xxl-cased-generator/config.json) • [`pytorch_model.bin`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/pytorch_model.bin) • [`vocab.txt`](https://cdn.huggingface.co/dbmdz/electra-base-italian-xxl-cased-generator/vocab.txt)

## Results

For results on downstream tasks like NER or PoS tagging, please refer to [this repository](https://github.com/stefan-it/italian-bertelectra).

## Usage

With Transformers >= 2.3, our Italian BERT models can be loaded like this:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the (recommended) Italian XXL BERT models, just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

To load the Italian XXL ELECTRA model (discriminator), just use:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-italian-xxl-cased-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```

# Huggingface model hub

All models are available on the [Huggingface model hub](https://huggingface.co/dbmdz).

# Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT/ELECTRA models just open an issue [here](https://github.com/dbmdz/berts/issues/new) 🤗

# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🤗
facebook/timesformer-base-finetuned-ssv2
facebook
"2022-12-12T12:53:06Z"
2,357
3
transformers
[ "transformers", "pytorch", "timesformer", "video-classification", "vision", "arxiv:2102.05095", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
"2022-10-07T20:36:48Z"
---
license: "cc-by-nc-4.0"
tags:
- vision
- video-classification
---

# TimeSformer (base-sized model, fine-tuned on Something Something v2)

TimeSformer model pre-trained on [Something Something v2](https://developer.qualcomm.com/software/ai-datasets/something-something). It was introduced in the paper [TimeSformer: Is Space-Time Attention All You Need for Video Understanding?](https://arxiv.org/abs/2102.05095) by Bertasius et al. and first released in [this repository](https://github.com/facebookresearch/TimeSformer).

Disclaimer: The team releasing TimeSformer did not write a model card for this model, so this model card has been written by [fcakyon](https://github.com/fcakyon).

## Intended uses & limitations

You can use the raw model for video classification into one of the 174 possible Something Something v2 labels.

### How to use

Here is how to use this model to classify a video:

```python
from transformers import AutoImageProcessor, TimesformerForVideoClassification
import numpy as np
import torch

video = list(np.random.randn(8, 3, 224, 224))

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-ssv2")
model = TimesformerForVideoClassification.from_pretrained("facebook/timesformer-base-finetuned-ssv2")

inputs = processor(images=video, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/timesformer.html#).

### BibTeX entry and citation info

```bibtex
@inproceedings{bertasius2021space,
  title={Is Space-Time Attention All You Need for Video Understanding?},
  author={Bertasius, Gedas and Wang, Heng and Torresani, Lorenzo},
  booktitle={International Conference on Machine Learning},
  pages={813--824},
  year={2021},
  organization={PMLR}
}
```
deepseek-ai/deepseek-moe-16b-chat
deepseek-ai
"2024-02-05T08:02:28Z"
2,357
109
transformers
[ "transformers", "safetensors", "deepseek", "text-generation", "conversational", "custom_code", "arxiv:2401.06066", "license:other", "autotrain_compatible", "region:us" ]
text-generation
"2024-01-09T04:55:35Z"
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL
---

<p align="center">
<img width="500px" alt="DeepSeek Chat" src="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://chat.deepseek.com/">[🤖 Chat with DeepSeek LLM]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/images/qr.jpeg">[Wechat(微信)]</a> </p>
<p align="center">
  <a href="https://arxiv.org/pdf/2401.06066.pdf"><b>Paper Link</b>👁️</a>
</p>
<hr>

### 1. Introduction to DeepSeekMoE

See the [Introduction](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main) for more details.

### 2. How to Use

Here are some examples of how to use our model.

**Chat Completion**

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = "deepseek-ai/deepseek-moe-16b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The DeepSeekMoE architecture ships as custom code in the repository, so trust_remote_code is required.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained(model_name)
model.generation_config.pad_token_id = model.generation_config.eos_token_id

messages = [
    {"role": "user", "content": "Who are you?"}
]
input_tensor = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_tensor.to(model.device), max_new_tokens=100)

result = tokenizer.decode(outputs[0][input_tensor.shape[1]:], skip_special_tokens=True)
print(result)
```

If you prefer not to use the provided `apply_chat_template` function, you can also interact with our model following the sample template below. Note that `messages` should be replaced by your input.

```
User: {messages[0]['content']}

Assistant: {messages[1]['content']}<|end▁of▁sentence|>User: {messages[2]['content']}

Assistant:
```

**Note:** By default (`add_special_tokens=True`), our tokenizer automatically adds a `bos_token` (`<|begin▁of▁sentence|>`) before the input text. Additionally, since the system prompt is not compatible with this version of our models, we DO NOT RECOMMEND including the system prompt in your input.

### 3. License

This code repository is licensed under the MIT License. The use of DeepSeekMoE models is subject to the Model License. DeepSeekMoE supports commercial use.

See the [LICENSE-MODEL](https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/LICENSE-MODEL) for more details.

### 4. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
meraGPT/mera-mix-4x7B
meraGPT
"2024-04-20T00:58:30Z"
2,357
16
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-13T13:21:18Z"
--- license: apache-2.0 model-index: - name: mera-mix-4x7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 89.17 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.44 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.17 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 85.64 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 66.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=meraGPT/mera-mix-4x7B name: Open LLM Leaderboard --- # Model mera-mix-4x7B This is a mixture of experts (MoE) model that is half as large (4 experts instead of 8) as the [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) while been comparable to it across different benchmarks. You can use it as a drop in replacement for your Mixtral-8x7B and get much faster inference. mera-mix-4x7B achieves the score of 75.91 on the OpenLLM Eval and compares well with 72.7 by Mixtral-8x7B and 74.46 by Mixtral-8x22B. You can try the model with the [Mera Mixture Chat](https://huggingface.co/spaces/meraGPT/mera-mixture-chat). In addition, to the official Open LLM Leaderboard, the results on OpenLLM Eval have been validated by [others as well (76.59)](https://github.com/saucam/model_evals/tree/main?tab=readme-ov-file#model-eval-results). Our own initial eval is available [here (76.37)](https://gist.github.com/codelion/78f88333230801c9bbaa6fc22078d820). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meraGPT__mera-mix-4x7B) | Metric |Value| |---------------------------------|----:| |Avg. |75.91| |AI2 Reasoning Challenge (25-Shot)|72.95| |HellaSwag (10-Shot) |89.17| |MMLU (5-Shot) |64.44| |TruthfulQA (0-shot) |77.17| |Winogrande (5-shot) |85.64| |GSM8k (5-shot) |66.11|
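Since the card describes the model as a drop-in replacement for Mixtral-8x7B, a minimal 🤗 Transformers loading sketch is shown below; the dtype, device placement, and prompt are illustrative assumptions rather than requirements.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meraGPT/mera-mix-4x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Mixtral-style MoE checkpoint, so it loads like any other causal LM in transformers.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Explain mixture-of-experts language models in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```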
sohohuk/test1
sohohuk
"2023-11-02T01:29:20Z"
2,356
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ko", "dataset:nlpai-lab/openassistant-guanaco-ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-31T07:12:28Z"
---
license: cc-by-nc-nd-4.0
datasets:
- nlpai-lab/openassistant-guanaco-ko
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---

# Test

Base model: Open-Orca/Mistral-7B-OpenOrca

PEFT fine-tuning test version.
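As a rough sketch of how such a checkpoint could be loaded (assuming this repository holds merged full weights, as the `mistral`/`safetensors` tags suggest, rather than a standalone LoRA adapter):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "sohohuk/test1"  # assumed to contain the merged fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

prompt = "안녕하세요, 자기소개를 해 주세요."  # "Hello, please introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```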
kyx0r/L3-Evil-Stheno-v3.2-8B-GGUF
kyx0r
"2024-06-21T19:17:43Z"
2,356
0
transformers
[ "transformers", "gguf", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-21T19:08:15Z"
--- base_model: [] library_name: transformers tags: - mergekit - merge --- # model Unleash her demons... Merged the best roleplay model with the best uncensored model to date. The outputs are quite good and verbose. # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as a base. ### Models Merged The following models were included in the merge: * [mlabonne/Daredevil-8B-abliterated](https://huggingface.co/mlabonne/Daredevil-8B-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: /root/progs/auto-ollama/scripts/Daredevil-8B-abliterated parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 merge_method: dare_ties base_model: /root/progs/auto-ollama/scripts/L3-8B-Stheno-v3.2 parameters: normalize: true int8_mask: true dtype: float16 ```
openclimatefix/pvnet_v2
openclimatefix
"2024-03-11T13:51:27Z"
2,355
0
pytorch
[ "pytorch", "en", "license:mit", "region:us" ]
null
"2023-05-15T10:08:55Z"
---
language: en
license: mit
library_name: pytorch
---

# PVNet2

## Model Description

<!-- Provide a longer summary of what this model is/does. -->

This model class uses satellite data, numerical weather predictions, and recent Grid Supply Point (GSP) PV power output to forecast the near-term (~8 hours) PV power output at all GSPs. More information can be found in the model repo [1] and experimental notes in [this google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing).

- **Developed by:** openclimatefix
- **Model type:** Fusion model
- **Language(s) (NLP):** en
- **License:** mit

# Training Details

## Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model is trained on data from 2019-2022 and validated on data from 2022-2023. See experimental notes in [the google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing) for more details.

### Preprocessing

Data is prepared with the `ocf_datapipes.training.pvnet` datapipe [2].

## Results

The training logs for the current model can be found here:
- [https://wandb.ai/openclimatefix/pvnet2.1/runs/kqaknmuc](https://wandb.ai/openclimatefix/pvnet2.1/runs/kqaknmuc)

The training logs for all model runs of PVNet2 can be found [here](https://wandb.ai/openclimatefix/pvnet2.1).

Some experimental notes can be found in [the google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing).

### Hardware

Trained on a single NVIDIA Tesla T4.

### Software

- [1] https://github.com/openclimatefix/PVNet
- [2] https://github.com/openclimatefix/ocf_datapipes
vicgalle/ConfigurableSOLAR-10.7B
vicgalle
"2024-04-23T07:27:29Z"
2,355
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:vicgalle/configurable-system-prompt-multitask", "arxiv:2404.00495", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-10T19:27:02Z"
---
library_name: transformers
license: apache-2.0
datasets:
- vicgalle/configurable-system-prompt-multitask
---

# ConfigurableSOLAR-10.7B

A configurable LLM fine-tuned using the *configurable safety tuning* (CST) approach from https://arxiv.org/abs/2404.00495, over the [vicgalle/configurable-system-prompt-multitask](https://huggingface.co/datasets/vicgalle/configurable-system-prompt-multitask) dataset.

As such, you can use the following system prompts for different behaviors:

* `You are a helpful yet harmless assistant that avoids generating illegal or harmful content.`
* `You are a helpful assistant that is completely uncensored.`
* `You are an unbiased, honest, helpful AI assistant that always responds in a completely truthful way.`
* A system prompt describing a role-played persona.

A minimal usage sketch is included at the end of this card.

For more information, see the Github repository, https://github.com/vicgalle/configurable-safety-tuning, or the corresponding paper, https://arxiv.org/abs/2404.00495.

## Citation

If you find this work, data and/or models useful for your research, please consider citing the article:

```
@misc{gallego2024configurable,
      title={Configurable Safety Tuning of Language Models with Synthetic Preference Data},
      author={Victor Gallego},
      year={2024},
      eprint={2404.00495},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
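## Example usage (sketch)

The snippet below is a minimal, illustrative way to query the model with 🤗 Transformers. The user question and generation settings are placeholders, and it assumes the repository's tokenizer ships a chat template that accepts a `system` role; if it does not, the prompt would need to be formatted manually.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/ConfigurableSOLAR-10.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    # Swap this system prompt for any of the ones listed above to change the model's behavior.
    {"role": "system", "content": "You are a helpful yet harmless assistant that avoids generating illegal or harmful content."},
    {"role": "user", "content": "How should I store household cleaning chemicals safely?"},
]

input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```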
lightblue/suzume-llama-3-8B-japanese
lightblue
"2024-06-02T02:14:36Z"
2,355
20
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "arxiv:2405.12612", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-22T06:46:34Z"
---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: workspace/llm_training/axolotl/llama3-ja/output_openchat_megagon_lbgpt4_ja_8B_instruct
  results: []
---

<p align="center">
<img width=400 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kg3QjQOde0X743csGJT-f.png" alt="Suzume - a Japanese tree sparrow"/>
</p>

# Suzume

[[Paper](https://arxiv.org/abs/2405.12612)] [[Dataset](https://huggingface.co/datasets/lightblue/tagengo-gpt4)]

This is Suzume 8B, a Japanese finetune of Llama 3.

Llama 3 has exhibited excellent performance on many English language benchmarks. However, it also appears to have been finetuned on mostly English data, meaning that it will respond in English even if prompted in Japanese.

We have fine-tuned Llama 3 on more than 3,000 Japanese conversations, meaning that this model has the intelligence of Llama 3 but has the added ability to chat in Japanese.

Please feel free to comment on this model and give us feedback in the Community tab! We will release a paper in the future describing how we made the training data, the model, and the evaluations we have conducted of it.

# How to use

You can use the original trained model with vLLM like so:

```python
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="lightblue/suzume-llama-3-8B-japanese")

prompts = [
    "東京のおすすめの観光スポットを教えて下さい",  # "Please tell me some recommended sightseeing spots in Tokyo."
]

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```

# Evaluation scores

We find that this is the best performing model in the 7/8B class of LLMs on a multitude of Japanese language benchmarks.

We calculate our Japanese evaluation scores using our [lightblue-tech/japanese_llm_eval](https://github.com/lightblue-tech/japanese_llm_eval) repo.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/2obyDbrjiNV3PGfwom6EI.png)

We also compare our Japanese model to our multilingual model using our [multilingual_mt_bench](https://github.com/Peter-Devine/multilingual_mt_bench/tree/main/fastchat/llm_judge) repo.

| | **lightblue/suzume-llama-3-8B-japanese** | **lightblue/suzume-llama-3-8B-multilingual** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** |
|-----------------|------------------------------------------|----------------------------------------------|-----------------------------------|-------------------|
| **Japanese 🇯🇵** | 6.24 | 6.56 | 6.22 | 7.84 |

Here, we find that our multilingual model outperforms our Japanese model on the Japanese MT-Bench benchmark, indicating that our multilingual model was able to generalize better to the Japanese MT-Bench benchmark from training on more data, even if that added data was not in Japanese.

Note: the discrepancy between the MT-Bench scores of the first and second evaluation of `lightblue/suzume-llama-3-8B-japanese` is due to the difference in the system message of the two evaluation harnesses. The former's system message is in Japanese while the latter's is in English.
# Training data We train on three sources of data to create this model * [megagonlabs/instruction_ja](https://github.com/megagonlabs/instruction_ja) - 669 conversations * A hand-edited dataset of nearly 700 conversations taken originally from translations of the [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja) dataset. * [openchat/openchat_sharegpt4_dataset](https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset/resolve/main/sharegpt_gpt4.json) (Japanese conversations only) - 167 conversations * Conversations taken from humans talking to GPT-4 * lightblue/tagengo-gpt4 (Japanese prompts only) (Link coming soon!) - 2,482 conversations * Almost 2,500 diverse Japanese prompts sampled from [lmsys/lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) and then used to prompt `gpt-4-0125-preview` # Training config [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B-Instruct model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: false strict: false datasets: - path: /workspace/llm_training/axolotl/llama3-ja/openchat_megagon_lbgpt4_ja.json ds_type: json # see other options below type: sharegpt conversation: llama-3 dataset_prepared_path: /workspace/llm_training/axolotl/llama3-ja/prepared_openchat_megagon_lbgpt4_ja val_set_size: 0.01 output_dir: /workspace/llm_training/axolotl/llama3-ja/output_openchat_megagon_lbgpt4_ja_8B_instruct sequence_len: 8192 sample_packing: true pad_to_sequence_len: true eval_sample_packing: False use_wandb: true wandb_project: axolotl wandb_entity: peterd wandb_name: openchat_megagon_lbgpt4_ja_8B_instruct gradient_accumulation_steps: 2 micro_batch_size: 2 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 5 eval_table_size: saves_per_epoch: 1 debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero2.json weight_decay: 0.0 special_tokens: pad_token: <|end_of_text|> ``` </details><br> ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 3 - gradient_accumulation_steps: 2 - total_train_batch_size: 12 - total_eval_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.303 | 0.08 | 1 | 1.2664 | | 1.4231 | 0.23 | 3 | 1.2409 | | 1.1007 | 0.46 | 6 | 1.0264 | | 1.0635 | 0.69 | 9 | 1.0154 | | 1.0221 | 0.92 | 12 | 0.9555 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # How to cite Please cite [this paper](https://arxiv.org/abs/2405.12612) when referencing this model. 
```tex @article{devine2024tagengo, title={Tagengo: A Multilingual Chat Dataset}, author={Devine, Peter}, journal={arXiv preprint arXiv:2405.12612}, year={2024} } ``` # Developer Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
NbAiLab/nb-whisper-small
NbAiLab
"2024-02-13T12:30:12Z"
2,354
1
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "asr", "hf-asr-leaderboard", "no", "nb", "nn", "en", "dataset:NbAiLab/ncc_speech", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "arxiv:2212.04356", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-02-13T10:07:40Z"
--- license: apache-2.0 language: - 'no' - nb - nn - en datasets: - NbAiLab/ncc_speech - NbAiLab/NST - NbAiLab/NPSC base_model: openai/whisper-small tags: - audio - asr - automatic-speech-recognition - hf-asr-leaderboard metrics: - wer - cer library_name: transformers pipeline_tag: automatic-speech-recognition widget: - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3 example_title: FLEURS sample 1 - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3 example_title: FLEURS sample 2 --- # NB-Whisper Small Introducing the **_Norwegian NB-Whisper Small model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article. | Model Size | Parameters | Model | |------------|------------|------------| | Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) | | Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) | | Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) | | Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) | | Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) | ### Verbatim Model While the main models are suitable for most transcription task, we demonstrate how easy it is to change the output of the main model. The following models are trained 250 additional steps from the main models above, and might be suitable for more targetted use cases: - **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis. | Model Size | Parameters | Semantic version | |------------|------------|------------------| | Tiny | 39M | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) | | Base | 74M | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) | | Small | 244M | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) | | Medium | 769M | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) | | Large | 1550M | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) | ### Model Description - **Developed by:** [NB AI-Lab](https://ai.nb.no/) - **Shared by:** [NB AI-Lab](https://ai.nb.no/) - **Model type:** `whisper` - **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Trained from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small) - **Code Repository:** https://github.com/NbAiLab/nb-whisper/ - **Paper:** _Coming soon_ - **Demo:** _See Spaces on this page_ ## How to Use the Models ### Online Demos You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. 
Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).

### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).

```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3

# Install necessary libraries.
$ pip install transformers>=4.35.2
```

After this is done, you should be able to run this in Python:

```python
from transformers import pipeline

# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-small")

# Transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```

<details>
<summary>Expected output</summary>

```json
{
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
}
```
</details>

#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This happens because the audio clip is longer than 30 seconds. By passing the `chunk_length_s` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.

```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})

# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})

# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})

# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})

# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})

# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```

<details>
<summary>Expected output</summary>

Long transcripts:

```json
{
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner.
Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'} } ``` Timestamps: ```json { {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.', 'chunks': [{'timestamp': (0.0, 5.46), 'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'}, {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'}, {'timestamp': (8.68, 16.64), 'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'}, {'timestamp': (16.64, 13.3), 'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'}, {'timestamp': (13.32, 30.28), 'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'}, {'timestamp': (32.52, 39.16), 'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'}, {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'}, {'timestamp': (42.0, 46.74), 'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'}, {'timestamp': (46.74, 51.12), 'text': ' og jenter og gutter som er glad i hverandre.'}, {'timestamp': (51.16, 57.42), 'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'}, {'timestamp': (57.42, 64.3), 'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'}, {'timestamp': (64.34, 71.24), 'text': ' Med andre ord, Norge er dere. Norge er oss.'}, {'timestamp': (71.24, 78.04), 'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'}, {'timestamp': (78.12, 84.68), 'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]} } ``` Word Level Timestamps: ```json { {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. 
Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.", "chunks": [ {"text": "Nordmenn", "timestamp": [0.72, 1.42]}, {"text": "er", "timestamp": [1.42, 1.74]}, // ... more chunks ... {"text": "raushet.", "timestamp": [83.1, 84.88]} ] } } ``` Nynorsk: ```json { {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."} } ``` English: ```json { {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."} } ``` </details> ### Whisper CPP Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin). 
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make

# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav

# Let's download the two ggml files from this site
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model.bin -O models/nb-small-ggml-model.bin
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model-q5_0.bin -O models/nb-small-ggml-model-q5_0.bin

# And run it with the f16 default model
$ ./main -l no -m models/nb-small-ggml-model.bin king.wav

# Or the quantized version
$ ./main -l no -m models/nb-small-ggml-model-q5_0.bin king.wav
```

### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023, it also has native support for using the nb-wav2vec-models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for doing the actual diarization. This package has a fairly strict license that requires you to agree to its user terms. Follow the instructions below.

```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms

# Log in (or supply HF Token in command line)
huggingface-cli login

# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3

# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540

# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-small --language no --diarize
```

You can also run WhisperX from Python. Please take a look at the instructions on the [WhisperX homepage](https://github.com/m-bain/whisperX).

### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.

## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:

- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)

## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variants for users who want a different transcription style.
We encourage users to try the models themselves to get a better understanding.

## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.

### Software
The model was trained using Jax/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats.

All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).

## Citation & Contributors
The NB-Whisper Small model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.

## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.

## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.

## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
lemon-mint/gemma-7b-openhermes-v0.80
lemon-mint
"2024-04-09T13:31:27Z"
2,354
1
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "pytorch", "instruct", "finetune", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:google/gemma-1.1-7b-it", "license:gemma", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-09T08:35:37Z"
---
library_name: transformers
language:
- en
license: gemma
tags:
- gemma
- pytorch
- instruct
- finetune
base_model: google/gemma-1.1-7b-it
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
---

# Gemma 7B OpenHermes v0.80

- Eval Loss: `0.4544`
- Train Loss: `0.3129`
- lr: `5e-5`
- optimizer: adamw
- lr_scheduler_type: cosine

## Model Details

This is an instruction-following model finetuned from the Gemma 1.1 7B model. It was finetuned on the OpenHermes-2.5 dataset to improve its ability to engage in open-ended conversation and respond helpfully to user instructions and queries. The model can engage in dialogue, answer questions, and assist with a variety of tasks.

### Model Description

- **Developed by:** `lemon-mint`
- **Model type:** Gemma
- **Language(s) (NLP):** English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Finetuned from model:** [google/gemma-1.1-7b-it](https://huggingface.co/google/gemma-1.1-7b-it)

# Limitations and Ethical Considerations

As Gemma 7B OpenHermes has been trained on extensive web data, biases present in the training data may be reflected in the model. It may also generate sentences containing errors or incorrect information, so treat its output with caution rather than trusting it blindly.
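
## Usage

For quick experimentation, here is a minimal usage sketch with the 🤗 Transformers library. It assumes the tokenizer ships the standard Gemma chat template and that a GPU with enough memory is available; the prompt and sampling settings are illustrative, not tuned recommendations.

```python
# Minimal sketch (assumes the standard Gemma chat template is bundled with the tokenizer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon-mint/gemma-7b-openhermes-v0.80"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16-capable GPU; use float16/float32 otherwise
    device_map="auto",           # requires the accelerate package
)

messages = [{"role": "user", "content": "Explain instruction tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```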
mradermacher/causal-llama-3-8B-Instruct-combined-GGUF
mradermacher
"2024-06-05T08:28:52Z"
2,353
0
transformers
[ "transformers", "gguf", "en", "base_model:ibivibiv/causal-llama-3-8B-Instruct-combined", "endpoints_compatible", "region:us" ]
null
"2024-06-05T08:01:09Z"
--- base_model: ibivibiv/causal-llama-3-8B-Instruct-combined language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ibivibiv/causal-llama-3-8B-Instruct-combined <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | 
[GGUF](https://huggingface.co/mradermacher/causal-llama-3-8B-Instruct-combined-GGUF/resolve/main/causal-llama-3-8B-Instruct-combined.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
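
## Example

For readers who want a concrete starting point after picking a quant from the table above, here is a small sketch using `huggingface_hub` and `llama-cpp-python`. The chosen file name (Q4_K_M) and the chat-style call are assumptions; adapt them to the quant you actually download and to the prompt template the base model expects.

```python
# Sketch: download one quant and run it locally (pip install llama-cpp-python huggingface_hub).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumption: the Q4_K_M file listed in the table above is the one you want.
gguf_path = hf_hub_download(
    repo_id="mradermacher/causal-llama-3-8B-Instruct-combined-GGUF",
    filename="causal-llama-3-8B-Instruct-combined.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # raise n_ctx if you need longer prompts
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What causes tides?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```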
mradermacher/L3-8B-MegaSerpentine-GGUF
mradermacher
"2024-06-18T15:20:17Z"
2,353
3
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama", "not-for-all-audiences", "en", "base_model:v000000/L3-8B-MegaSerpentine", "endpoints_compatible", "region:us" ]
null
"2024-06-18T13:04:20Z"
--- base_model: v000000/L3-8B-MegaSerpentine language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge - llama - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/v000000/L3-8B-MegaSerpentine <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-MegaSerpentine-GGUF/resolve/main/L3-8B-MegaSerpentine.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
timm/tf_efficientnet_b5.ns_jft_in1k
timm
"2023-04-27T21:21:37Z"
2,352
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "arxiv:1911.04252", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:04:12Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for tf_efficientnet_b5.ns_jft_in1k A EfficientNet image classification model. Trained on ImageNet-1k and unlabeled JFT-300m using Noisy Student semi-supervised learning in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 30.4 - GMACs: 10.5 - Activations (M): 98.9 - Image size: 456 x 456 - **Papers:** - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946 - Self-training with Noisy Student improves ImageNet classification: https://arxiv.org/abs/1911.04252 - **Dataset:** ImageNet-1k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnet_b5.ns_jft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b5.ns_jft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 24, 228, 228]) # torch.Size([1, 40, 114, 114]) # torch.Size([1, 64, 57, 57]) # torch.Size([1, 176, 29, 29]) # torch.Size([1, 512, 15, 15]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnet_b5.ns_jft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 15, 15) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model 
results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{Xie2019SelfTrainingWN, title={Self-Training With Noisy Student Improves ImageNet Classification}, author={Qizhe Xie and Eduard H. Hovy and Minh-Thang Luong and Quoc V. Le}, journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2019}, pages={10684-10695} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
stablediffusionapi/albedobase-xl-v13
stablediffusionapi
"2023-12-05T17:07:44Z"
2,351
1
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2023-12-05T17:05:04Z"
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# AlbedoBase XL v1.3 API Inference

![generated from stablediffusionapi.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/7966529911701795714.png)

## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "albedobase-xl-v13"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try model for free: [Generate Images](https://stablediffusionapi.com/models/albedobase-xl-v13)

Model link: [View model](https://stablediffusionapi.com/models/albedobase-xl-v13)

Credits: [View credits](https://civitai.com/?query=AlbedoBase%20XL%20v1.3)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "albedobase-xl-v13",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
KoboldAI/fairseq-dense-2.7B
KoboldAI
"2023-11-18T11:56:28Z"
2,350
3
transformers
[ "transformers", "pytorch", "safetensors", "xglm", "text-generation", "en", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
--- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-2.7B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 33.67 | | ARC (25-shot) | 33.79 | | HellaSwag (10-shot) | 65.74 | | MMLU (5-shot) | 26.44 | | TruthfulQA (0-shot) | 34.57 | | Winogrande (5-shot) | 63.93 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 11.24 |
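
For completeness, here is a minimal text-generation sketch with 🤗 Transformers; the prompt and sampling settings are illustrative assumptions rather than recommendations from the original authors.

```python
# Minimal generation sketch for the converted checkpoint (runs on CPU; move to GPU if available).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KoboldAI/fairseq-dense-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The mixture-of-experts approach to language modeling"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```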
vihangd/smartyplats-7b-v1
vihangd
"2023-10-27T10:44:59Z"
2,350
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-20T11:13:16Z"
---
license: apache-2.0
---
<p><h1> SmartyPlats-7b </h1></p>
An experimental finetune of Mistral 7B with QLoRA
<h2> Datasets </h2>
Trained on Alpaca-style datasets
<p><h2> Prompt Template </h2></p>
Uses an Alpaca-style prompt template
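
The card does not spell out the exact template, so the sketch below assumes the common Alpaca instruction format; adjust it if your results look off.

```python
# Sketch: generate with an assumed Alpaca-style prompt (the exact finetuning template is not documented here).
from transformers import AutoModelForCausalLM, AutoTokenizer

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

model_id = "vihangd/smartyplats-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # on a GPU, add torch_dtype and .to("cuda")

prompt = ALPACA_TEMPLATE.format(instruction="Summarize the benefits of QLoRA finetuning.")
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```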
John6666/wai-real-cn-v5-sdxl
John6666
"2024-06-13T22:54:06Z"
2,350
1
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "pony", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-13T22:49:00Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - pony --- Original model is [here](https://civitai.com/models/469902/wai-realcn?modelVersionId=570688).
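
Since the card itself only links to the original checkpoint, here is a hedged usage sketch with 🤗 Diffusers (the repository is tagged as a `StableDiffusionXLPipeline`); the prompt and settings are only illustrative.

```python
# Sketch: text-to-image with Diffusers (assumes a CUDA GPU; drop fp16 and .to("cuda") for CPU).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/wai-real-cn-v5-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="photorealistic portrait, natural lighting, 85mm lens",
    negative_prompt="lowres, bad anatomy, watermark",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("sample.png")
```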
prithivida/Splade_PP_en_v1
prithivida
"2024-03-17T10:34:50Z"
2,348
15
transformers
[ "transformers", "pytorch", "onnx", "bert", "fill-mask", "splade++", "document-expansion", "sparse representation", "bag-of-words", "passage-retrieval", "knowledge-distillation", "document encoder", "en", "dataset:ms_marco", "arxiv:2205.04733", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-02-16T09:26:41Z"
--- license: apache-2.0 language: - en datasets: - ms_marco tags: - splade++ - document-expansion - sparse representation - bag-of-words - passage-retrieval - knowledge-distillation - document encoder pretty_name: Independent Implementation of SPLADE++ Model with some efficiency tweaks for Industry setting. library_name: transformers pipeline_tag: fill-mask --- # Independent Implementation of SPLADE++ Model (`a.k.a splade-cocondenser* and family`) for the Industry setting. -------------- This work stands on the shoulders of 2 robust researches: [Naver's From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective paper](https://arxiv.org/pdf/2205.04733.pdf) and [Google's SparseEmbed](https://storage.googleapis.com/gweb-research2023-media/pubtools/pdf/79f16d3b3b948706d191a7fe6dd02abe516f5564.pdf). Props to both the teams for such a robust work. ## 1. What are Sparse Representations and Why learn one? **Beginner ?** expand this. **Expert in Sparse & Dense representations ?** feel free skip to next section 2, <details> **1. Lexical search:** Lexical search with BOW based sparse vectors are strong baselines, but they famously suffer from vocabulary mismatch problem, as they can only do exact term matching. Here are the pros and cons: - ✅ Efficient and Cheap. - ✅ No need to fine-tune models. - ✅️ Interpretable. - ✅️ Exact Term Matches. - ❌ Vocabulary mismatch (Need to remember exact terms) **2. Semantic Search:** Learned Neural / Dense retrievers (DPR, Sentence transformers*, BGE* models) with approximate nearest neighbors search has shown impressive results. Here are the pros and cons: - ✅ Search how humans innately think. - ✅ When finetuned beats sparse by long way. - ✅ Easily works with Multiple modals. - ❌ Suffers token amnesia (misses term matching), - ❌ Resource intensive (both index & retreival), - ❌ Famously hard to interpret. - ❌ Needs fine-tuning for OOD data. **3. The big idea:** Getting pros of both searches made sense and that gave rise to interest in learning sparse representations for queries and documents with some interpretability. The sparse representations also double as implicit or explicit (latent, contextualized) expansion mechanisms for both query and documents. If you are new to query expansion learn more here from the master himself Daniel Tunkelang. **4. What a Sparse model learns ?** The model learns to project it's learned dense representations over a MLM head to give a vocabulary distribution. Which is just to say the model can do automatic token expansion. (Image courtesy of pinecone) <img src="./expansion.png" width=600 height=550/> </details> ## **[Skip to "HOW TO USE with POPULAR VECTORDBs and more"](#htu) or continue for more details.** ## 2. Motivation: SPLADE models are a fine balance between retrieval effectiveness (quality) and retrieval efficiency (latency and $), with that in mind we did **very minor retrieval efficiency tweaks** to make it more suitable for a industry setting. *(Pure MLE folks should not conflate efficiency to model inference efficiency. Our main focus is on retrieval efficiency. Hereinafter efficiency is a short hand for retrieval efficiency unless explicitly qualified otherwise. Not that inference efficiency is not important, we will address that subsequently.)* **TL;DR of Our attempt & results** 1. FLOPS tuning: Seperate **Seq lens and Severely restrictive FLOPs schedule and token budget** doc(128) & query(24) NOT 256 unlike Official SPLADE++. Inspired from **SparseEmbed** 3. 
Init Weights: Vanilla **bert-base-uncased**. No corpus awarness unlike Official splade++ / ColBERT 4. Yet achieves competitive effectiveness of MRR@10 **37.22** in ID data (& OOD 48.7) and a retrieval latency of - **47.27ms**. (multi-threaded) all On **Consumer grade-GPUs** with **only 5 negatives per query**. 4. For Industry setting: Effectiveness on custom domains needs more than just **Trading FLOPS for tiny gains** and The Premise "SPLADE++ are not well suited to mono-cpu retrieval" does not hold. 5. Owing to query-time inference latency we still need 2 models one for query & doc, This is a Doc model and Query model will be **released soon.** <img src="./ID.png" width=750 height=650/> *Note: The paper refers to the best performing models as SPLADE++, hence for consistency we are reusing the same.* <br/> ## 3. Why FLOPS is one of the key metrics for industry setting ? <details> While ONLY a empirical analysis on large sample make sense here is a spot checking - a qualitatively example to give you an idea. Our models achieve par competitive effectiveness with **~10% and ~100%, lesser tokens comparable SPLADE++ models including SoTA**. (We will show Quantitative results in the next section.) So, **by design "how to beat SoTA MRR?" was never our goal**, Instead "At what cost can we achieve an acceptable effectiveness i.e. MRR@10". Non-chalantly reducing lambda values (λQ,λD, see above table) will achieve a better MRR. But Lower lambda values = Higher FLOPS = More tokens = Poorer efficiency. This is NOT desirable for a Industry setting. **Ours** ```python number of actual dimensions: 113 SPLADE BOW rep: [('stress', 2.36), ('glass', 2.15), ('thermal', 2.06), ('pan', 1.83), ('glasses', 1.67), ('break', 1.47), ('crack', 1.47), ('heat', 1.45), ('warmth', 1.36), ('depression', 1.34), ('hotter', 1.23), ('hottest', 1.11), ('window', 1.11), ('hot', 1.1), ('area', 1.04), ('cause', 1.01), ('adjacent', 0.99), ('too', 0.94), ('created', 0.86), ('##pan', 0.84), ('phenomenon', 0.81), ('when', 0.78), ('temperature', 0.76), ('cracked', 0.75), ('factors', 0.74), ('windows', 0.72), ('create', 0.71), ('level', 0.7), ('formed', 0.61), ('stresses', 0.59), ('warm', 0.58), ('fracture', 0.57), ('adjoining', 0.56), ('areas', 0.56), ('nearby', 0.56), ('causes', 0.56), ('broken', 0.54), ('produced', 0.52), ('sash', 0.51), ('if', 0.51), ('breaks', 0.49), ('is', 0.49), ('effect', 0.45), ('heated', 0.44), ('process', 0.42), ('breaking', 0.42), ('one', 0.4), ('mirror', 0.39), ('factor', 0.38), ('shatter', 0.38), ('formation', 0.37), ('mathias', 0.37), ('damage', 0.36), ('cracking', 0.35), ('climate', 0.35), ('ceramic', 0.34), ('reaction', 0.34), ('steam', 0.33), ('reflection', 0.33), ('generated', 0.33), ('material', 0.32), ('burst', 0.31), ('fire', 0.31), ('neighboring', 0.3), ('explosion', 0.29), ('caused', 0.29), ('warmer', 0.29), ('because', 0.28), ('anxiety', 0.28), ('furnace', 0.28), ('tear', 0.27), ('induced', 0.27), ('fail', 0.26), ('are', 0.26), ('collapse', 0.26), ('##thermal', 0.26), ('and', 0.25), ('great', 0.25), ('get', 0.24), ('spark', 0.23), ('lens', 0.2), ('cooler', 0.19), ('determined', 0.19), ('leak', 0.19), ('disease', 0.19), ('emotion', 0.16), ('cork', 0.14), ('cooling', 0.14), ('heating', 0.13), ('governed', 0.13), ('optical', 0.12), ('surrounding', 0.12), ('warming', 0.12), ('convection', 0.11), ('regulated', 0.11), ('problem', 0.1), ('cool', 0.09), ('violence', 0.09), ('breaker', 0.09), ('image', 0.09), ('photo', 0.05), ('strike', 0.05), ('.', 0.04), ('shattering', 0.04), ('snap', 0.03), 
('wilson', 0.03), ('weather', 0.02), ('eye', 0.02), ('produce', 0.01), ('crime', 0.01), ('humid', 0.0), ('impact', 0.0), ('earthquake', 0.0)]``` ``` **naver/splade-cocondenser-ensembledistil** (SoTA, ~10% more tokens + FLOPS = 1.85) ```python number of actual dimensions: 126 SPLADE BOW rep: [('stress', 2.25), ('glass', 2.23), ('thermal', 2.18), ('glasses', 1.65), ('pan', 1.62), ('heat', 1.56), ('stressed', 1.42), ('crack', 1.31), ('break', 1.12), ('cracked', 1.1), ('hot', 0.93), ('created', 0.9), ('factors', 0.81), ('broken', 0.73), ('caused', 0.71), ('too', 0.71), ('damage', 0.69), ('if', 0.68), ('hotter', 0.65), ('governed', 0.61), ('heating', 0.59), ('temperature', 0.59), ('adjacent', 0.59), ('cause', 0.58), ('effect', 0.57), ('fracture', 0.56), ('bradford', 0.55), ('strain', 0.53), ('hammer', 0.51), ('brian', 0.48), ('error', 0.47), ('windows', 0.45), ('will', 0.45), ('reaction', 0.42), ('create', 0.42), ('windshield', 0.41), ('heated', 0.41), ('factor', 0.4), ('cracking', 0.39), ('failure', 0.38), ('mechanical', 0.38), ('when', 0.38), ('formed', 0.38), ('bolt', 0.38), ('mechanism', 0.37), ('warm', 0.37), ('areas', 0.36), ('area', 0.36), ('energy', 0.34), ('disorder', 0.33), ('barry', 0.33), ('shock', 0.32), ('determined', 0.32), ('gage', 0.32), ('sash', 0.31), ('theory', 0.31), ('level', 0.31), ('resistant', 0.31), ('brake', 0.3), ('window', 0.3), ('crash', 0.3), ('hazard', 0.29), ('##ink', 0.27), ('ceramic', 0.27), ('storm', 0.25), ('problem', 0.25), ('issue', 0.24), ('impact', 0.24), ('fridge', 0.24), ('injury', 0.23), ('ross', 0.22), ('causes', 0.22), ('affect', 0.21), ('pressure', 0.21), ('fatigue', 0.21), ('leak', 0.21), ('eye', 0.2), ('frank', 0.2), ('cool', 0.2), ('might', 0.19), ('gravity', 0.18), ('ray', 0.18), ('static', 0.18), ('collapse', 0.18), ('physics', 0.18), ('wave', 0.18), ('reflection', 0.17), ('parker', 0.17), ('strike', 0.17), ('hottest', 0.17), ('burst', 0.16), ('chance', 0.16), ('burn', 0.14), ('rubbing', 0.14), ('interference', 0.14), ('bailey', 0.13), ('vibration', 0.12), ('gilbert', 0.12), ('produced', 0.12), ('rock', 0.12), ('warmer', 0.11), ('get', 0.11), ('drink', 0.11), ('fireplace', 0.11), ('ruin', 0.1), ('brittle', 0.1), ('fragment', 0.1), ('stumble', 0.09), ('formation', 0.09), ('shatter', 0.08), ('great', 0.08), ('friction', 0.08), ('flash', 0.07), ('cracks', 0.07), ('levels', 0.07), ('smash', 0.04), ('fail', 0.04), ('fra', 0.04), ('##glass', 0.03), ('variables', 0.03), ('because', 0.02), ('knock', 0.02), ('sun', 0.02), ('crush', 0.01), ('##e', 0.01), ('anger', 0.01)] ``` **naver/splade-v2-distil** (~100% more tokens + FLOPS = 3.82) ```python number of actual dimensions: 234 SPLADE BOW rep: [('glass', 2.55), ('stress', 2.39), ('thermal', 2.38), ('glasses', 1.95), ('stressed', 1.87), ('crack', 1.84), ('cool', 1.78), ('heat', 1.62), ('pan', 1.6), ('break', 1.53), ('adjacent', 1.44), ('hotter', 1.43), ('strain', 1.21), ('area', 1.16), ('adjoining', 1.14), ('heated', 1.11), ('window', 1.07), ('stresses', 1.04), ('hot', 1.03), ('created', 1.03), ('create', 1.03), ('cause', 1.02), ('factors', 1.02), ('cooler', 1.01), ('broken', 1.0), ('too', 0.99), ('fracture', 0.96), ('collapse', 0.96), ('cracking', 0.95), ('great', 0.93), ('happen', 0.93), ('windows', 0.89), ('broke', 0.87), ('##e', 0.87), ('pressure', 0.84), ('hottest', 0.84), ('breaking', 0.83), ('govern', 0.79), ('shatter', 0.76), ('level', 0.75), ('heating', 0.69), ('temperature', 0.69), ('cracked', 0.69), ('panel', 0.68), ('##glass', 0.68), ('ceramic', 0.67), ('sash', 0.66), ('warm', 0.66), 
('areas', 0.64), ('creating', 0.63), ('will', 0.62), ('tension', 0.61), ('cracks', 0.61), ('optical', 0.6), ('mechanism', 0.58), ('kelly', 0.58), ('determined', 0.58), ('generate', 0.58), ('causes', 0.56), ('if', 0.56), ('factor', 0.56), ('the', 0.56), ('chemical', 0.55), ('governed', 0.55), ('crystal', 0.55), ('strike', 0.55), ('microsoft', 0.54), ('creates', 0.53), ('than', 0.53), ('relation', 0.53), ('glazed', 0.52), ('compression', 0.51), ('painting', 0.51), ('governing', 0.5), ('harden', 0.49), ('solar', 0.48), ('reflection', 0.48), ('ic', 0.46), ('split', 0.45), ('mirror', 0.44), ('damage', 0.43), ('ring', 0.42), ('formation', 0.42), ('wall', 0.41), ('burst', 0.4), ('radiant', 0.4), ('determine', 0.4), ('one', 0.4), ('plastic', 0.39), ('furnace', 0.39), ('difference', 0.39), ('melt', 0.39), ('get', 0.39), ('contract', 0.38), ('forces', 0.38), ('gets', 0.38), ('produce', 0.38), ('surrounding', 0.37), ('vibration', 0.37), ('tile', 0.37), ('fail', 0.36), ('warmer', 0.36), ('rock', 0.35), ('fault', 0.35), ('roof', 0.34), ('burned', 0.34), ('physics', 0.33), ('welding', 0.33), ('why', 0.33), ('a', 0.32), ('pop', 0.32), ('and', 0.31), ('fra', 0.3), ('stat', 0.3), ('withstand', 0.3), ('sunglasses', 0.3), ('material', 0.29), ('ice', 0.29), ('generated', 0.29), ('matter', 0.29), ('frame', 0.28), ('elements', 0.28), ('then', 0.28), ('.', 0.28), ('pont', 0.28), ('blow', 0.28), ('snap', 0.27), ('metal', 0.26), ('effect', 0.26), ('reaction', 0.26), ('related', 0.25), ('aluminium', 0.25), ('neighboring', 0.25), ('weight', 0.25), ('steel', 0.25), ('bulb', 0.25), ('tear', 0.25), ('coating', 0.25), ('plumbing', 0.25), ('co', 0.25), ('microwave', 0.24), ('formed', 0.24), ('pipe', 0.23), ('drink', 0.23), ('chemistry', 0.23), ('energy', 0.22), ('reflect', 0.22), ('dynamic', 0.22), ('leak', 0.22), ('is', 0.22), ('lens', 0.21), ('frost', 0.21), ('lenses', 0.21), ('produced', 0.21), ('induced', 0.2), ('arise', 0.2), ('plate', 0.2), ('equations', 0.19), ('affect', 0.19), ('tired', 0.19), ('mirrors', 0.18), ('thickness', 0.18), ('bending', 0.18), ('cabinet', 0.17), ('apart', 0.17), ('##thermal', 0.17), ('gas', 0.17), ('equation', 0.17), ('relationship', 0.17), ('composition', 0.17), ('engineering', 0.17), ('block', 0.16), ('breaks', 0.16), ('when', 0.16), ('definition', 0.16), ('collapsed', 0.16), ('generation', 0.16), (',', 0.16), ('philips', 0.16), ('later', 0.15), ('wood', 0.15), ('neighbouring', 0.15), ('structural', 0.14), ('regulate', 0.14), ('neighbors', 0.13), ('lighting', 0.13), ('happens', 0.13), ('more', 0.13), ('property', 0.13), ('cooling', 0.12), ('shattering', 0.12), ('melting', 0.12), ('how', 0.11), ('cloud', 0.11), ('barriers', 0.11), ('lam', 0.11), ('conditions', 0.11), ('rule', 0.1), ('insulation', 0.1), ('bathroom', 0.09), ('convection', 0.09), ('cavity', 0.09), ('source', 0.08), ('properties', 0.08), ('bend', 0.08), ('bottles', 0.08), ('ceramics', 0.07), ('temper', 0.07), ('tense', 0.07), ('keller', 0.07), ('breakdown', 0.07), ('concrete', 0.07), ('simon', 0.07), ('solids', 0.06), ('windshield', 0.05), ('eye', 0.05), ('sunlight', 0.05), ('brittle', 0.03), ('caused', 0.03), ('suns', 0.03), ('floor', 0.02), ('components', 0.02), ('photo', 0.02), ('change', 0.02), ('sun', 0.01), ('crystals', 0.01), ('problem', 0.01), ('##proof', 0.01), ('parameters', 0.01), ('gases', 0.0), ('prism', 0.0), ('doing', 0.0), ('lattice', 0.0), ('ground', 0.0)] ``` - *Note 1: This specific passage was used as an example for [ease of comparison](https://github.com/naver/splade/blob/main/inference_splade.ipynb)* 
</details>

## 4. How does it translate into Empirical metrics?

Our models are token sparse and yet effective. This translates to faster retrieval (user experience) and smaller index size ($). Mean retrieval time on the standard MS-MARCO small dev set and scaled total FLOPS loss are the respective metrics shown below.
This is why Google's SparseEmbed is interesting, as it also achieves SPLADE-quality retrieval effectiveness with much lower FLOPs. Compared to ColBERT, SPLADE and SparseEmbed match query and document terms with linear complexity, whereas ColBERT's late interaction, i.e. scoring all query-document term pairs, has quadratic complexity. The challenge with SparseEmbed is that it uses a hyperparameter called **Top-k to restrict the number of tokens used to learn contextual dense representations**, say 64 and 256 tokens for query and passage encoding. It is unclear how well these hyperparameters transfer to other domains or languages (where the notion of tokens changes a lot, as in our mother tongue Tamil, which is agglutinative in nature).

<img src="./Metrics.png" width=800/>

<details>

**Note: Why Anserini not PISA?** *Anserini is a production-ready Lucene-based library. Common industry search deployments use Solr or Elasticsearch, which are Lucene-based, hence the performance can be comparable. PISA latency is irrelevant for industry as it is a research-only system.*

The full [anserini evaluation log](https://huggingface.co/prithivida/Splade_PP_en_v1/blob/main/anserini_run.log) with encoding, indexing and querying details is here.

- **BEIR ZST OOD performance**: Will be added to the end of the page.

**Our model is different in a few more aspects**

- **Cocondenser Weights**: Unlike the best official SPLADE++ or SparseEmbed, we do NOT initialise weights from Luyu/co-condenser* models. Yet we achieve CoCondenser SPLADE level performance. More on this later.
- **Same size models:** Official SPLADE++, SparseEmbed and ours are all finetuned from a base model of the same size as `bert-base-uncased`.

</details>

## 5. Roadmap and future directions for Industry Suitability.

- **Improve efficiency**: This is a bottomless pit; we will continue to improve serving and retrieval efficiency.
- **Custom/Domain Finetuning**: OOD zero-shot performance of SPLADE models is great but unimportant in the industry setting, as we need the ability to finetune on custom datasets or domains. Finetuning SPLADE on a new dataset is not cheap and needs labelling of queries and passages. So we will continue to explore how to finetune our recipe economically on custom datasets without expensive labelling.
- **Multilingual SPLADE**: The training cost of SPLADE (i.e. GPU budget) is directly proportional to the vocab size of the base model, so multilingual SPLADE using either mBERT or XLM-R can be expensive, as they have 120K and 250K vocabularies as opposed to 30K for bert-base-uncased. We will continue researching how best to extend our recipe to the multilingual world.

## 6. Usage

To enable a lightweight inference solution with **no heavy Torch dependency**, we will also release a library - **SPLADERunner**. Of course, if that does not matter to you, you can always use these models with the Hugging Face transformers library.

<h1 id="htu">How to use? </h1>

## 6a. With Popular VectorDBs

| VectorDB | Colab Link |
|----------|------------|
| Pinecone | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1fB6LheD9wYG0G-nBHiz0z2juvljrsBum?usp=sharing) |
| Qdrant | TBD |

## 6b. With SPLADERunner Library
[SPLADERunner Library](https://github.com/PrithivirajDamodaran/SPLADERunner)

```bash
pip install spladerunner
```

```python
# One-time init
from spladerunner import Expander
# Default model is the document expander.
expander = Expander()

# Sample document expansion
sparse_rep = expander.expand(
    ["The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science."])
```

## 6c. With HuggingFace
**NOTEBOOK user? Login first**

```
!huggingface-cli login
```

**Integrating in your code?** [How to use HF tokens in code](https://huggingface.co/docs/hub/en/security-tokens)

Make these changes

```
tokenizer = AutoTokenizer.from_pretrained('prithivida/Splade_PP_en_v1', token=<Your token>)
model = AutoModelForMaskedLM.from_pretrained('prithivida/Splade_PP_en_v1', token=<Your token>)
```

**Full code**

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained('prithivida/Splade_PP_en_v1')
reverse_voc = {v: k for k, v in tokenizer.vocab.items()}
model = AutoModelForMaskedLM.from_pretrained('prithivida/Splade_PP_en_v1')
model.to(device)

sentence = """The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science."""

inputs = tokenizer(sentence, return_tensors='pt')
inputs = {key: val.to(device) for key, val in inputs.items()}
input_ids = inputs['input_ids']
attention_mask = inputs['attention_mask']

outputs = model(**inputs)
logits, attention_mask = outputs.logits, attention_mask
relu_log = torch.log(1 + torch.relu(logits))
weighted_log = relu_log * attention_mask.unsqueeze(-1)
max_val, _ = torch.max(weighted_log, dim=1)
vector = max_val.squeeze()

cols = vector.nonzero().squeeze().cpu().tolist()
print("number of actual dimensions: ", len(cols))

weights = vector[cols].cpu().tolist()
d = {k: v for k, v in zip(cols, weights)}
sorted_d = {k: v for k, v in sorted(d.items(), key=lambda item: item[1], reverse=True)}
bow_rep = []
for k, v in sorted_d.items():
    bow_rep.append((reverse_voc[k], round(v, 2)))

print("SPLADE BOW rep:\n", bow_rep)
```

## BEIR Zeroshot OOD performance:
<img src="./splade_v1.png" width=100% height=850/>

## Training details:
T.B.D

## Acknowledgements
- Thanks to Nils Reimers for all the inputs.
- Thanks to authors of the Anserini library.

## Limitations and bias
All limitations and biases of the underlying BERT model apply to this finetuning effort.

## Citation
Please cite if you use our models or libraries. Citation info below.
T.B.D
cahya/wav2vec2-large-xlsr-indonesian
cahya
"2021-07-05T23:55:41Z"
2,347
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language: id
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Indonesian by cahya
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice id
      type: common_voice
      args: id
    metrics:
    - name: Test WER
      type: wer
      value: 25.86
---

# Wav2Vec2-Large-XLSR-Indonesian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz. (A short sketch for resampling arbitrary audio files to 16kHz is included at the end of this card.)

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "id", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```

## Evaluation

The model can be evaluated as follows on the Indonesian test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 25.86 %

## Training

The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO

The script used for training can be found [here](https://github.com/cahya-wirawan/indonesian-speech-recognition)
(will be available soon)
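## Transcribing your own audio

The examples above assume Common Voice clips resampled from 48kHz. Below is a small illustrative sketch for transcribing an arbitrary local file at any sample rate; the file name `my_recording.wav` is a placeholder, and collapsing to mono via channel averaging is one simple choice, not part of the original recipe.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")
model = Wav2Vec2ForCTC.from_pretrained("cahya/wav2vec2-large-xlsr-indonesian")

# "my_recording.wav" is a placeholder for your own file; it can have any sample rate.
speech_array, sampling_rate = torchaudio.load("my_recording.wav")
if sampling_rate != 16_000:
    # Resample to the 16kHz rate the model was trained on.
    speech_array = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array)

# Collapse to mono and convert to a 1-D numpy array, as in the examples above.
speech = speech_array.mean(dim=0).numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```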
PassionFriend/5FsKWeB2xjWELqwYBXdotXf2DLrLa1tDFgLvmsr1YVvxJYHA_vgg
PassionFriend
"2024-03-01T06:38:52Z"
2,347
0
keras
[ "keras", "region:us" ]
null
"2024-02-08T12:52:51Z"
Entry not found
QuantFactory/notus-7b-v1-GGUF
QuantFactory
"2024-06-18T05:51:40Z"
2,346
0
transformers
[ "transformers", "gguf", "dpo", "rlaif", "preference", "ultrafeedback", "text-generation", "en", "dataset:argilla/ultrafeedback-binarized-preferences", "base_model:argilla/notus-7b-v1", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-14T05:14:10Z"
---
datasets:
- argilla/ultrafeedback-binarized-preferences
language:
- en
base_model: argilla/notus-7b-v1
library_name: transformers
pipeline_tag: text-generation
tags:
- dpo
- rlaif
- preference
- ultrafeedback
license: mit
model-index:
- name: notus-7b-v1
  results:
  # AI2 Reasoning Challenge (25-Shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      name: normalized accuracy
      value: 0.6459044368600683
    source:
      name: Open LLM Leaderboard Results
      url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
  # HellaSwag (10-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      name: normalized accuracy
      value: 0.8478390758812986
    source:
      name: Open LLM Leaderboard Results
      url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
  # TruthfulQA (0-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 0.5436768358952805
    source:
      name: Open LLM Leaderboard Results
      url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
  # MMLU (5-Shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 0.6303308230938872 # average accuracy
    source:
      name: Open LLM Leaderboard Results
      url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
  # GSM8k (5-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 0.1516300227445034
    source:
      name: Open LLM Leaderboard Results
      url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
  # Winogrande (5-shot)
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      name: accuracy
      value: 0.7940015785319653
    source:
      name: Open LLM Leaderboard Results
      url: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/argilla/notus-7b-v1/results_2023-11-29T22-16-51.521321.json
  # AlpacaEval
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AlpacaEval
      type: tatsu-lab/alpaca_eval
    metrics:
    - type: tatsu-lab/alpaca_eval
      name: win rate
      value: 0.9142
    source:
      url: https://tatsu-lab.github.io/alpaca_eval/
  # MT-Bench
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MT-Bench
      type: unknown
    metrics:
    - type: unknown
      name: score
      value: 7.30
    source:
      url: https://huggingface.co/spaces/lmsys/mt-bench
---

# QuantFactory/notus-7b-v1-GGUF

This is a quantized version of [argilla/notus-7b-v1](https://huggingface.co/argilla/notus-7b-v1), created using llama.cpp.

# Model Description

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/dj-spsk9eXMMXVGxK6jRz.png" alt="A banner representing Notus, the wind god of the south, in a mythical and artistic style. The banner features a strong, swirling breeze, embodying the warm, wet character of the southern wind. Gracefully flowing across the scene are several paper planes, caught in the gentle yet powerful gusts of Notus. The background is a blend of warm colors, symbolizing the heat of the south, with hints of blue and green to represent the moisture carried by this wind. The overall atmosphere is one of dynamic movement and warmth."/>
</div>

# Model Card for Notus 7B v1

Notus is a collection of fine-tuned models using Direct Preference Optimization (DPO) and related RLHF techniques. This model is the first version, fine-tuned with DPO over `zephyr-7b-sft-full`, which is the SFT model produced to create `zephyr-7b-beta`.

Following a **data-first** approach, the only difference between Notus-7B-v1 and Zephyr-7B-beta is the preference dataset used for dDPO.

In particular, when we started building [distilabel](https://github.com/argilla-io/distilabel), we invested time understanding and deep-diving into the UltraFeedback dataset. Using [Argilla](https://argilla.io/), we found data issues in the original UltraFeedback dataset, leading to high scores for bad responses (more details in the training data section). After curating several hundred data points, we decided to binarize the dataset using the preference ratings, instead of the original critique `overall_score`, and verified the new dataset with Argilla.

Using preference ratings, instead of critique scores, led to a new dataset where the chosen response is different in ~50% of the cases. Using this new dataset with DPO, we fine-tuned Notus, a 7B model that **surpasses Zephyr-7B-beta and Claude 2 on AlpacaEval**.

> **Important note**: While we opted for the average of multi-aspect ratings while we fix the original dataset, a very interesting open question remains: once the critique data is fixed, what works better, the critique scores or the preference ratings? We're very excited to do this comparison in the coming weeks, stay tuned!

This model **wouldn't have been possible without the amazing [Alignment Handbook](https://github.com/huggingface/alignment-handbook) and [OpenBMB](https://www.openbmb.cn/home) for releasing the UltraFeedback dataset**, and it's based on fruitful discussions with the HuggingFace H4 team. In particular, we used `zephyr-7b-beta`'s recipe, which worked out-of-the-box and enabled us to focus on what we do best: **high-quality data**.

Notus models are intended to be used as assistants via chat-like applications, and are evaluated with Chat (MT-Bench, AlpacaEval) and Academic (Open LLM Leaderboard) benchmarks for a direct comparison with the original Zephyr dDPO model and other 7B models.

> **Why Notus?**: The name Notus comes from the ancient Greek god Notus, as a wink to Zephyr, which comes from the ancient Greek god Zephyrus; with the difference that Notus is the god of the south wind, and Zephyr the god of the west wind. More information at https://en.wikipedia.org/wiki/Anemoi.
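Since the only difference from Zephyr-7B-beta is the preference data fed to DPO, it may help to recall what the DPO objective actually optimizes. The snippet below is a minimal, illustrative sketch of the per-pair DPO loss; it is not the training code used for Notus (which followed the Alignment Handbook recipe), and `beta` plus the log-probability tensors are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Per-pair DPO loss: push the policy to prefer the chosen response
    over the rejected one, relative to the frozen SFT reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Negative log-sigmoid of the reward margin; minimized when chosen >> rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy tensors standing in for summed log-probs of full responses.
loss = dpo_loss(torch.tensor([-12.0]), torch.tensor([-15.0]),
                torch.tensor([-13.0]), torch.tensor([-14.0]))
print(loss)
```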
## Model Details

### Model Description

- **Developed by:** Argilla (based on HuggingFace H4 and MistralAI previous efforts and amazing work)
- **Shared by:** Argilla
- **Model type:** GPT-like 7B model DPO fine-tuned
- **Language(s) (NLP):** Mainly English
- **License:** MIT (same as Zephyr 7B-beta)
- **Finetuned from model:** [`alignment-handbook/zephyr-7b-sft-full`](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full)

### Model Sources

- **Repository:** https://github.com/argilla-io/notus
- **Paper:** N/A
- **Demo:** https://argilla-notus-chat-ui.hf.space/

## Performance

### Chat benchmarks

Table adapted from Zephyr-7b-β and Starling's original tables for [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks. Results are shown sorted by AlpacaEval win rates and omit some >7B models for brevity. Notus stays on par with Zephyr on MT-Bench, while surpassing Zephyr, Claude 2, and Cohere Command on AlpacaEval, making Notus the most competitive 7B commercial model on AlpacaEval.

<table>
  <tr>
    <th>Model</th>
    <th>Size</th>
    <th>Alignment</th>
    <th>MT-Bench (score)</th>
    <th>AlpacaEval (win rate %)</th>
    <th>License</th>
  </tr>
  <tr>
    <td>GPT-4-turbo</td>
    <td>-</td>
    <td>?</td>
    <td>9.32</td>
    <td>97.70</td>
    <td>Proprietary</td>
  </tr>
  <tr>
    <td>XwinLM 70b V0.1</td>
    <td>70B</td>
    <td>dPPO</td>
    <td>-</td>
    <td>95.57</td>
    <td>LLaMA 2 License</td>
  </tr>
  <tr>
    <td>GPT-4</td>
    <td>-</td>
    <td>RLHF</td>
    <td>8.99</td>
    <td>95.03</td>
    <td>Proprietary</td>
  </tr>
  <tr>
    <td>Tulu 2+DPO 70B V0.1</td>
    <td>70B</td>
    <td>dDPO</td>
    <td>6.29</td>
    <td>95.28</td>
    <td>Proprietary</td>
  </tr>
  <tr>
    <td>LLaMA2 Chat 70B</td>
    <td>70B</td>
    <td>RLHF</td>
    <td>6.86</td>
    <td>92.66</td>
    <td>LLaMA 2 License</td>
  </tr>
  <tr>
    <td>Starling-7B</td>
    <td>7B</td>
    <td>C-RLFT + APA</td>
    <td><strong>8.09</strong></td>
    <td><strong>91.99</strong></td>
    <td>CC-BY-NC-4.0</td>
  </tr>
  <tr style="background-color: #FFFF99;">
    <td><strong>Notus-7b-v1</strong></td>
    <td>7B</td>
    <td>dDPO</td>
    <td>7.30</td>
    <td>91.42</td>
    <td>MIT</td>
  </tr>
  <tr>
    <td>Claude 2</td>
    <td>-</td>
    <td>RLHF</td>
    <td>8.06</td>
    <td>91.36</td>
    <td>Proprietary</td>
  </tr>
  <tr>
    <td>Zephyr-7b-β</td>
    <td>7B</td>
    <td>dDPO</td>
    <td>7.34</td>
    <td>90.60</td>
    <td>MIT</td>
  </tr>
  <tr>
    <td>Cohere Command</td>
    <td>-</td>
    <td>RLHF</td>
    <td>-</td>
    <td>90.62</td>
    <td>Proprietary</td>
  </tr>
  <tr>
    <td>GPT-3.5-turbo</td>
    <td>-</td>
    <td>RLHF</td>
    <td>7.94</td>
    <td>89.37</td>
    <td>Proprietary</td>
  </tr>
</table>

### Academic benchmarks

Results from [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
|-----------------------------------------------|---------|-------|-----------|-------|------------|------------|-------|-------|
| Zephyr 7B dDPO (HuggingFaceH4/zephyr-7b-beta) | 52.15 | 62.03 | 84.36 | 61.07 | **57.45** | 77.74 | 12.74 | **9.66** |
| argilla/notus-7b-v1 | **52.89** | **64.59** | **84.78** | **63.03** | 54.37 | **79.4** | **15.16** | 8.91 |

⚠️ As pointed out by [AllenAI researchers](https://twitter.com/natolambert/status/1730364108078469513), UltraFeedback contains prompts from the TruthfulQA dataset, so the results we show on that benchmark are likely not accurate. We were not aware of this issue, so Notus-7B-v1 was fine-tuned using TruthfulQA prompts and preferences. For future releases, we will remove TruthfulQA prompts.
## Training Details

### Training Hardware

We used a VM with 8 x A100 40GB hosted in Lambda Labs, but while experimenting we also explored other cloud providers such as GCP.

### Training Data

We used a new curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [Ultrafeedback binarized preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).

TL;DR

After visually browsing around some examples using the sort and filter feature of Argilla (sort by highest rating for chosen responses), we noticed a strong mismatch between the `overall_score` in the original UF dataset (and the Zephyr train_prefs dataset) and the quality of the chosen response.

By adding the critique rationale to our Argilla Dataset, **we confirmed the critique rationale was highly negative, whereas the rating was very high** (for most cases it was the highest: `10`).

See the screenshot below for one example of this issue.

After some quick investigation, we:

* identified hundreds of examples having the same issue,
* reported a bug on the [UltraFeedback repo](https://github.com/OpenBMB/UltraFeedback/issues/8),
* and informed the H4 team, which was incredibly responsive and ran an additional experiment to validate the new rating binarization approach.

While we're working on fixing the original dataset (we've already narrowed down ~2K problematic examples), we decided to leverage the multi-preference ratings, leading to Notus!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/M9qCKyAB_G1MbVBAPeitd.png)

> **Important note**: While we opted for the average of ratings while we fix the dataset, there's still a very interesting open question: once the data is fixed, what works better, the critique scores or the preference ratings? We're very excited to do this comparison in the coming weeks, stay tuned!

You can find more details about the dataset analysis and curation on the [ultrafeedback-binarized-preferences dataset card](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences).
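To make the curation step concrete, here is a small, illustrative sketch of rating-based binarization: for each prompt, the completion with the highest average multi-aspect rating becomes `chosen` and a lower-rated completion becomes `rejected`. The record layout (`completions`, per-aspect `ratings`) is only an assumption made for illustration and may differ from the actual UltraFeedback schema; the real pipeline is documented in the dataset card linked above.

```python
from statistics import mean

def binarize(example):
    """Pick chosen/rejected by average preference rating (not the critique overall_score)."""
    # Each completion is assumed to carry per-aspect integer ratings,
    # e.g. helpfulness / honesty / truthfulness / instruction_following.
    scored = [
        (mean(c["ratings"].values()), c["response"])
        for c in example["completions"]
    ]
    scored.sort(reverse=True)
    chosen = scored[0][1]
    rejected = scored[-1][1]  # one simple choice; sampling a non-top response is another option
    return {"prompt": example["prompt"], "chosen": chosen, "rejected": rejected}

toy = {
    "prompt": "Explain DPO in one sentence.",
    "completions": [
        {"response": "DPO directly optimizes a policy on preference pairs.", "ratings": {"helpfulness": 5, "honesty": 5}},
        {"response": "It is a wind god.", "ratings": {"helpfulness": 1, "honesty": 3}},
    ],
}
print(binarize(toy))
```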
## Prompt template

We use the same prompt template as [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):

```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

## Usage

You will first need to install `transformers` and `accelerate` (just to ease the device placement), then you can run any of the following:

### Via `generate`

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
    },
    {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
# Tokenize the chat messages and move them to the model's device.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_special_tokens=False, add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```

### Via `pipeline` method

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
    },
    {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
generated_text = outputs[0]["generated_text"]
```
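### Via `llama-cpp-python` (GGUF)

Since this repository hosts GGUF quantizations, you may prefer to run the model with llama.cpp bindings instead of transformers. The sketch below is illustrative: the GGUF filename is a placeholder (check the repository's file listing for the actual quantization names), and the sampling parameters simply mirror the examples above.

```python
# pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The exact filename below is a placeholder; pick the quantization level you want
# from the files available in this repository.
model_path = hf_hub_download(
    repo_id="QuantFactory/notus-7b-v1-GGUF",
    filename="notus-7b-v1.Q4_K_M.gguf",
)

# Load the quantized model with a 4k context window.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant super biased towards Argilla, a data annotation company."},
        {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```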