Dataset schema (column: type, observed range or distinct values):

modelId: string (lengths 5–122)
author: string (lengths 2–42)
last_modified: timestamp[us, tz=UTC]
downloads: int64 (0–738M)
likes: int64 (0–11k)
library_name: string (245 classes)
tags: sequence (lengths 1–4.05k)
pipeline_tag: string (48 classes)
createdAt: timestamp[us, tz=UTC]
card: string (lengths 1–901k)
MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF
MaziyarPanahi
2024-04-25T19:58:11Z
879
12
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "llama", "llama-3", "base_model:MaziyarPanahi/Llama-3-8B-Instruct-64k", "text-generation-inference", "region:us" ]
text-generation
2024-04-25T19:22:27Z
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - llama - llama-3 - text-generation model_name: Llama-3-8B-Instruct-64k-GGUF base_model: MaziyarPanahi/Llama-3-8B-Instruct-64k inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k) ## Description [MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-64k](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
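To complement the client list above, here is a minimal Python sketch for loading one of these GGUF files with llama-cpp-python. The quant filename pattern and context size are assumptions for illustration; check the repository's file list for the actual `.gguf` names.

```python
# Minimal sketch: chat with a GGUF quant from this repo via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice; pick a real file from the repo
    n_ctx=8192,               # raise toward the 64k context if you have the memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what the GGUF format is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```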
migtissera/Tess-v2.5.2-Qwen2-72B
migtissera
2024-06-14T22:09:07Z
879
8
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-13T20:34:13Z
--- license: other license_name: qwen2 license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE --- # Tess-v2.5.2 (Qwen2-72B) ![Tess-v2.5](https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B/resolve/main/Tess-v2.5.png) # Update: This is a fine-tune over the Tess-v2.5, with a changed learning rate and a subset of the Tess-v2.5 dataset. The model is completely different to Tess-v2.5. I was testing a new feature with the Tess-v2.5 dataset. If you had used the model, you might have noticed that the model's generations would sometimes end with a follow-up question. This is intentional, and was created to provide more of a "natural" conversation. What had happened earlier was that the stop token wasn't getting properly generated, so the model would go on to answer its own question. This is fixed in Tess-v2.5.2. The model will still ask you follow-up questions, but the stop tokens are now generated properly. If you'd prefer not to have the follow-up questions feature, just add the following to your system prompt: "No follow-up questions necessary". # Tess-v2.5.2 (Qwen2-72B) We've created Tess-v2.5.2, the latest state-of-the-art model in the Tess series of Large Language Models (LLMs). Tess, short for Tesoro (<em>Treasure</em> in Italian), is the flagship LLM series created by Migel Tissera. Tess-v2.5.2 brings significant improvements in reasoning capabilities, coding capabilities and mathematics. It is currently the #1 ranked open weight model when evaluated on MMLU (Massive Multitask Language Understanding). It scores higher than all other open weight models including Qwen2-72B-Instruct, Llama3-70B-Instruct, Mixtral-8x22B-Instruct and DBRX-Instruct. Further, when evaluated on MMLU, the Tess-v2.5.2 (Qwen2-72B) model outperforms even the frontier closed models Gemini-1.0-Ultra, Gemini-1.5-Pro, Mistral-Large and Claude-3-Sonnet. Tess-v2.5.2 (Qwen2-72B) was fine-tuned over the newly released Qwen2-72B base, using the Tess-v2.5 dataset that contains 300K samples spanning multiple topics, including business and management, marketing, history, social sciences, arts, STEM subjects and computer programming. This dataset was synthetically generated using the [Sensei](https://github.com/migtissera/Sensei) framework, using multiple frontier models such as GPT-4-Turbo, Claude-Opus and Mistral-Large. The compute for this model was generously sponsored by [KindoAI](https://kindo.ai). When evaluated on a subset of AGIEval (Nous), this model compares very well with the godfather GPT-4-0314 model as well. # Training Process The Tess-v2.5.2 model was initialized with the base weights of Qwen2-72B. It was then fine-tuned with the Tess-v2.5 dataset, using Axolotl as the training framework. Most Tess models follow a common fine-tuning methodology: low learning rates, a low number of epochs, and very high-quality, diverse data. This model was fine-tuned on a 4xA100 VM on Microsoft Azure for 4 days. The model has not been aligned with RLHF or DPO. The author believes that the model's capabilities come primarily from the pre-training process. This is the foundation for every fine-tune of Tess models, and preserving the entropy of the base models is of paramount importance to the author. # Sample code to run inference Note that this model uses the ChatML prompt format. 
```python import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "migtissera/Tess-v2.5.2-Qwen2-72B" output_file_path = "/home/migel/conversations.jsonl" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_4bit=False, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) terminators = [ tokenizer.convert_tokens_to_ids("<|im_end|>") ] def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.75, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, pad_token_id=tokenizer.eos_token_id, eos_token_id=terminators, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) return f"{string}" conversation = f"""<|im_start|>system\nYou are Tesoro, a helpful AI assistant. You always provide detailed answers without hesitation.<|im_end|>\n<|im_start|>user\n""" while True: user_input = input("You: ") llm_prompt = f"{conversation}{user_input}<|im_end|>\n<|im_start|>assistant\n" answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}\n" json_data = {"prompt": user_input, "answer": answer} with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` # Join My General AI Discord (NeuroLattice): https://discord.gg/Hz6GrwGFKD # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
Klevin/J.A.R.V.I.S-v2.0-Q4_0-GGUF
Klevin
2024-06-20T12:31:25Z
879
2
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "llama-cpp", "gguf-my-repo", "en", "base_model:Klevin/J.A.R.V.I.S-v2.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-20T12:30:11Z
--- base_model: Klevin/J.A.R.V.I.S-v2.0 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft - llama-cpp - gguf-my-repo --- # Klevin/J.A.R.V.I.S-v2.0-Q4_0-GGUF This model was converted to GGUF format from [`Klevin/J.A.R.V.I.S-v2.0`](https://huggingface.co/Klevin/J.A.R.V.I.S-v2.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Klevin/J.A.R.V.I.S-v2.0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Klevin/J.A.R.V.I.S-v2.0-Q4_0-GGUF --hf-file j.a.r.v.i.s-v2.0-q4_0.gguf -c 2048 ```
bookbot/wav2vec2-ljspeech-gruut
bookbot
2023-12-21T02:44:03Z
878
9
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "phoneme-recognition", "generated_from_trainer", "en", "dataset:w11wo/ljspeech_phonemes", "arxiv:2006.11477", "base_model:Wav2Vec2-Base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-01-09T01:22:52Z
--- language: en license: apache-2.0 tags: - phoneme-recognition - generated_from_trainer datasets: - w11wo/ljspeech_phonemes base_model: Wav2Vec2-Base model-index: - name: Wav2Vec2 LJSpeech Gruut results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: LJSpeech type: ljspeech_phonemes metrics: - type: per value: 0.0099 name: Test PER (w/o stress) - type: cer value: 0.0058 name: Test CER (w/o stress) --- # Wav2Vec2 LJSpeech Gruut Wav2Vec2 LJSpeech Gruut is an automatic speech recognition model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. This model is a fine-tuned version of [Wav2Vec2-Base](https://huggingface.co/facebook/wav2vec2-base) on the [LJSpeech Phonemes](https://huggingface.co/datasets/w11wo/ljspeech_phonemes) dataset. Instead of being trained to predict sequences of words, this model was trained to predict sequences of phonemes, e.g. `["h", "ɛ", "l", "ˈoʊ", "w", "ˈɚ", "l", "d"]`. Therefore, the model's [vocabulary](https://huggingface.co/bookbot/wav2vec2-ljspeech-gruut/blob/main/vocab.json) contains the different IPA phonemes found in [gruut](https://github.com/rhasspy/gruut). This model was trained using HuggingFace's PyTorch framework. All training was done on a Google Cloud Engine VM with a Tesla A100 GPU. All necessary scripts used for training can be found in the [Files and versions](https://huggingface.co/bookbot/wav2vec2-ljspeech-gruut/tree/main) tab, as well as the [Training metrics](https://huggingface.co/bookbot/wav2vec2-ljspeech-gruut/tensorboard) logged via Tensorboard. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ------------------------- | ------- | ----------- | ------------------------------- | | `wav2vec2-ljspeech-gruut` | 94M | wav2vec 2.0 | `LJSpeech Phonemes` Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | PER (w/o stress) | CER (w/o stress) | | ---------------------------- | :--------------: | :--------------: | | `LJSpeech Phonemes` Test Data | 0.99% | 0.58% | ## Usage ```py from transformers import AutoProcessor, AutoModelForCTC, Wav2Vec2Processor import librosa import torch from itertools import groupby from datasets import load_dataset def decode_phonemes( ids: torch.Tensor, processor: Wav2Vec2Processor, ignore_stress: bool = False ) -> str: """CTC-like decoding. 
First removes consecutive duplicates, then removes special tokens.""" # removes consecutive duplicates ids = [id_ for id_, _ in groupby(ids)] special_token_ids = processor.tokenizer.all_special_ids + [ processor.tokenizer.word_delimiter_token_id ] # converts id to token, skipping special tokens phonemes = [processor.decode(id_) for id_ in ids if id_ not in special_token_ids] # joins phonemes prediction = " ".join(phonemes) # whether to ignore IPA stress marks if ignore_stress == True: prediction = prediction.replace("ˈ", "").replace("ˌ", "") return prediction checkpoint = "bookbot/wav2vec2-ljspeech-gruut" model = AutoModelForCTC.from_pretrained(checkpoint) processor = AutoProcessor.from_pretrained(checkpoint) sr = processor.feature_extractor.sampling_rate # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") audio_array = ds[0]["audio"]["array"] # or, read a single audio file # audio_array, _ = librosa.load("myaudio.wav", sr=sr) inputs = processor(audio_array, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs["input_values"]).logits predicted_ids = torch.argmax(logits, dim=-1) prediction = decode_phonemes(predicted_ids[0], processor, ignore_stress=True) # => should give 'b ɪ k ʌ z j u ɚ z s l i p ɪ ŋ ɪ n s t ɛ d ə v k ɔ ŋ k ɚ ɪ ŋ ð ə l ʌ v l i ɹ z p ɹ ɪ n s ə s h æ z b ɪ k ʌ m ə v f ɪ t ə l w ɪ θ n b oʊ p ɹ ə ʃ æ ɡ i s ɪ t s ð ɛ ɹ ə k u ɪ ŋ d ʌ v' ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 0.0001 - `train_batch_size`: 16 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 2 - `total_train_batch_size`: 32 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_steps`: 1000 - `num_epochs`: 30.0 - `mixed_precision_training`: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | | :-----------: | :---: | :---: | :-------------: | :----: | :----: | | No log | 1.0 | 348 | 2.2818 | 1.0 | 1.0 | | 2.6692 | 2.0 | 696 | 0.2045 | 0.0527 | 0.0299 | | 0.2225 | 3.0 | 1044 | 0.1162 | 0.0319 | 0.0189 | | 0.2225 | 4.0 | 1392 | 0.0927 | 0.0235 | 0.0147 | | 0.0868 | 5.0 | 1740 | 0.0797 | 0.0218 | 0.0143 | | 0.0598 | 6.0 | 2088 | 0.0715 | 0.0197 | 0.0128 | | 0.0598 | 7.0 | 2436 | 0.0652 | 0.0160 | 0.0103 | | 0.0447 | 8.0 | 2784 | 0.0571 | 0.0152 | 0.0095 | | 0.0368 | 9.0 | 3132 | 0.0608 | 0.0163 | 0.0112 | | 0.0368 | 10.0 | 3480 | 0.0586 | 0.0137 | 0.0083 | | 0.0303 | 11.0 | 3828 | 0.0641 | 0.0141 | 0.0085 | | 0.0273 | 12.0 | 4176 | 0.0656 | 0.0131 | 0.0079 | | 0.0232 | 13.0 | 4524 | 0.0690 | 0.0133 | 0.0082 | | 0.0232 | 14.0 | 4872 | 0.0598 | 0.0128 | 0.0079 | | 0.0189 | 15.0 | 5220 | 0.0671 | 0.0121 | 0.0074 | | 0.017 | 16.0 | 5568 | 0.0654 | 0.0114 | 0.0069 | | 0.017 | 17.0 | 5916 | 0.0751 | 0.0118 | 0.0073 | | 0.0146 | 18.0 | 6264 | 0.0653 | 0.0112 | 0.0068 | | 0.0127 | 19.0 | 6612 | 0.0682 | 0.0112 | 0.0069 | | 0.0127 | 20.0 | 6960 | 0.0678 | 0.0114 | 0.0068 | | 0.0114 | 21.0 | 7308 | 0.0656 | 0.0111 | 0.0066 | | 0.0101 | 22.0 | 7656 | 0.0669 | 0.0109 | 0.0066 | | 0.0092 | 23.0 | 8004 | 0.0677 | 0.0108 | 0.0065 | | 0.0092 | 24.0 | 8352 | 0.0653 | 0.0104 | 0.0063 | | 0.0088 | 25.0 | 8700 | 0.0673 | 0.0102 | 0.0063 | | 0.0074 | 26.0 | 9048 | 0.0669 | 0.0105 | 0.0064 | | 0.0074 | 27.0 | 9396 | 0.0707 | 0.0101 | 0.0061 | | 0.0066 | 28.0 | 9744 | 0.0673 | 0.0100 | 0.0060 | | 0.0058 | 29.0 | 10092 | 
0.0689 | 0.0100 | 0.0059 | | 0.0058 | 30.0 | 10440 | 0.0683 | 0.0099 | 0.0058 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors Wav2Vec2 LJSpeech Gruut was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Cloud. ## Framework versions - Transformers 4.26.0.dev0 - Pytorch 1.10.0 - Datasets 2.7.1 - Tokenizers 0.13.2 - Gruut 2.3.4
MesonWarrior/gpt2-vk-kalik
MesonWarrior
2023-04-08T20:36:34Z
878
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "text-generation-inference", "PyTorch", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-04-08T12:54:51Z
--- language: - ru tags: - text-generation-inference - gpt2 - PyTorch --- # GPT-2 VK Kalik **Note!** The dataset used here was not a very good one, collected a long time ago; the model should be retrained on better data. This is a version of [ai-forever/rugpt3medium_based_on_gpt2](https://huggingface.co/ai-forever/rugpt3medium_based_on_gpt2) fine-tuned on a dataset of posts and comments from the VK community "Калик" (Kalik).
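The card includes no usage snippet, so here is a minimal sketch using the standard transformers text-generation pipeline; the prompt is only an illustration and the model expects Russian input.

```python
# Minimal sketch: sample from the fine-tuned Russian GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="MesonWarrior/gpt2-vk-kalik")

# Arbitrary Russian prompt for illustration.
result = generator("Сегодня в паблике", max_new_tokens=40, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```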
Qwen/Qwen1.5-7B-Chat-GPTQ-Int4
Qwen
2024-04-30T07:45:12Z
878
18
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-02-04T18:09:03Z
--- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-7B-Chat-GPTQ-Int4/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat --- # Qwen1.5-7B-Chat-GPTQ-Int4 ## Introduction Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include: * 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B and 72B dense models, and an MoE model of 14B with 2.7B activated; * Significant performance improvement in human preference for chat models; * Multilingual support of both base and chat models; * Stable support of 32K context length for models of all sizes; * No need of `trust_remote_code`. For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5). <br> ## Model Details Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and code. For the beta version, we temporarily did not include GQA (except for 32B) and the mixture of SWA and full attention. ## Training details We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. ## Requirements The code of Qwen1.5 has been merged into the latest Hugging Face transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Below is a code snippet with `apply_chat_template` showing how to load the tokenizer and model and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained( "Qwen/Qwen1.5-7B-Chat-GPTQ-Int4", torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat-GPTQ-Int4") prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ## Tips * If you encounter code switching or other bad cases, we advise you to use our provided hyper-parameters in `generation_config.json`. ## Citation If you find our work helpful, feel free to cite us. 
``` @article{qwen, title={Qwen Technical Report}, author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu}, journal={arXiv preprint arXiv:2309.16609}, year={2023} } ```
duyntnet/Meta-Llama-3-8B-Instruct-imatrix-GGUF
duyntnet
2024-05-01T05:20:19Z
878
0
transformers
[ "transformers", "gguf", "imatrix", "facebook", "meta", "llama", "llama-3", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-24T00:19:40Z
--- license: other inference: false language: - en pipeline_tag: text-generation tags: - transformers - gguf - imatrix - facebook - meta - llama - llama-3 --- Quantizations of https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct Update (May 1, 2024): re-uploaded models after this merge: https://github.com/ggerganov/llama.cpp/pull/6920 Models now work correctly (tried with 7777+3333 and 3333+777 using Q8_0, both gave correct results) # From original readme ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
facebook/mask2former-swin-tiny-cityscapes-panoptic
facebook
2023-09-11T20:23:53Z
877
0
transformers
[ "transformers", "pytorch", "safetensors", "mask2former", "vision", "image-segmentation", "dataset:coco", "arxiv:2112.01527", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-01-03T11:41:46Z
--- license: other tags: - vision - image-segmentation datasets: - coco widget: - src: http://images.cocodataset.org/val2017/000000039769.jpg example_title: Cats - src: http://images.cocodataset.org/val2017/000000039770.jpg example_title: Castle --- # Mask2Former Mask2Former model trained on Cityscapes panoptic segmentation (tiny-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python import requests import torch from PIL import Image from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation # load Mask2Former fine-tuned on Cityscapes panoptic segmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-panoptic") model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-tiny-cityscapes-panoptic") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to processor for postprocessing result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] # we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs) predicted_panoptic_map = result["segmentation"] ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
JiaqiLee/robust-bert-yelp
JiaqiLee
2023-03-28T08:05:54Z
877
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:yelp_polarity", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-27T06:31:30Z
--- license: apache-2.0 datasets: - yelp_polarity language: - en metrics: - accuracy library_name: transformers pipeline_tag: text-classification --- ## Model description This model is a fine-tuned version of the [bert-base-uncased](https://huggingface.co/transformers/model_doc/bert.html) model to classify the sentiment of yelp reviews. \ The BERT model is finetuned using adversarial training to boost robustness against textual adversarial attacks. ## How to use You can use the model with the following code. ```python from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline model_path = "JiaqiLee/robust-bert-yelp" tokenizer = BertTokenizer.from_pretrained(model_path) model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2) pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer) print(pipeline("Definitely a greasy spoon! Always packed here and always a wait but worth it.")) ``` ## Training data The training data comes from the Huggingface [yelp polarity dataset](https://huggingface.co/datasets/yelp_polarity). We use 90% of the `train.csv` data to train the model. \ We augment the original training data with adversarial examples generated by PWWS, TextBugger and TextFooler. ## Evaluation results The model achieves 0.9532 accuracy on the yelp polarity test dataset.
patrickvonplaten/textual_inversion_cat_2
patrickvonplaten
2023-04-20T08:58:32Z
877
0
diffusers
[ "diffusers", "tensorboard", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:hf-internal-testing/tiny-stable-diffusion-pipe", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-04-20T08:52:36Z
--- license: creativeml-openrail-m base_model: hf-internal-testing/tiny-stable-diffusion-pipe tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # Textual inversion text2image fine-tuning - patrickvonplaten/textual_inversion_cat_2 These are textual inversion adaptation weights for hf-internal-testing/tiny-stable-diffusion-pipe. You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
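A minimal diffusers sketch for loading these weights, assuming the standard `load_textual_inversion` API; the placeholder token in the prompt is hypothetical and should be replaced with the token stored in this repo's embedding file.

```python
# Minimal sketch: apply the textual inversion embedding on top of the base test pipeline.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("hf-internal-testing/tiny-stable-diffusion-pipe")
pipe.load_textual_inversion("patrickvonplaten/textual_inversion_cat_2")

# "<cat-toy>" is a hypothetical placeholder token; use the learned token from the embedding.
image = pipe("A <cat-toy> sitting on a bench", num_inference_steps=5).images[0]
image.save("cat_toy.png")
```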
sail-rvc/Frank_Sinatra__51600_Steps___250_Epochs__RVC
sail-rvc
2023-07-14T07:22:48Z
877
0
transformers
[ "transformers", "rvc", "sail-rvc", "audio-to-audio", "endpoints_compatible", "region:us" ]
audio-to-audio
2023-07-14T07:22:20Z
--- pipeline_tag: audio-to-audio tags: - rvc - sail-rvc --- # Frank_Sinatra__51600_Steps___250_Epochs__RVC ## RVC Model ![banner](https://i.imgur.com/xocCjhH.jpg) This model repo was automatically generated. Date: 2023-07-14 07:22:48 Bot Name: juuxnscrap Model Type: RVC Source: https://huggingface.co/juuxn/RVCModels/ Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
westlake-repl/SaProt_35M_AF2
westlake-repl
2024-06-14T10:23:43Z
877
2
transformers
[ "transformers", "pytorch", "esm", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-10-30T08:13:42Z
--- license: mit --- We provide two ways to use SaProt: through the Hugging Face `transformers` classes, or in the same way as in the [esm GitHub repo](https://github.com/facebookresearch/esm). Users can choose either one. ### Huggingface model The following code shows how to load the model. ```python from transformers import EsmTokenizer, EsmForMaskedLM model_path = "/your/path/to/SaProt_35M_AF2" tokenizer = EsmTokenizer.from_pretrained(model_path) model = EsmForMaskedLM.from_pretrained(model_path) #################### Example #################### device = "cuda" model.to(device) seq = "MdEvVpQpLrVyQdYaKv" tokens = tokenizer.tokenize(seq) print(tokens) inputs = tokenizer(seq, return_tensors="pt") inputs = {k: v.to(device) for k, v in inputs.items()} outputs = model(**inputs) print(outputs.logits.shape) """ ['Md', 'Ev', 'Vp', 'Qp', 'Lr', 'Vy', 'Qd', 'Ya', 'Kv'] torch.Size([1, 11, 446]) """ ``` ### esm model The esm version is also stored in the same folder, named `SaProt_35M_AF2.pt`. We provide a function to load the model. ```python from utils.esm_loader import load_esm_saprot model_path = "/your/path/to/SaProt_35M_AF2.pt" model, alphabet = load_esm_saprot(model_path) ```
backyardai/Senku-70B-Full-GGUF
backyardai
2024-06-05T04:15:50Z
877
0
peft
[ "peft", "gguf", "generated_from_trainer", "dataset:Open-Orca/SlimOrca", "base_model:ShinojiResearch/Senku-70B-Full", "license:cc0-1.0", "region:us" ]
null
2024-04-02T18:28:32Z
--- license: cc0-1.0 library_name: peft tags: - generated_from_trainer base_model: ShinojiResearch/Senku-70B-Full datasets: - Open-Orca/SlimOrca model_name: Senku-70B-Full-GGUF quantized_by: brooketh parameter_count: 68976648192 --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Senku Full 70B - **Creator:** [ShinojiResearch](https://huggingface.co/ShinojiResearch/) - **Original:** [Senku Full 70B](https://huggingface.co/ShinojiResearch/Senku-70B-Full) - **Date Created:** 2024-02-06 - **Trained Context:** 32764 tokens - **Description:** Finetune of miqu-70b-sf dequant of miqudev's leak of Mistral-70B (allegedly an early mistral medium). This version, using the ChatML prompt template, achieved an EQ-Bench score of 85.09. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
nes470/wiki-tfidf-logreg-qa
nes470
2024-05-13T16:53:36Z
877
0
transformers
[ "transformers", "pytorch", "QA-umd-quizbowl", "question-answering", "custom_code", "arxiv:1910.09700", "license:mit", "region:us" ]
question-answering
2024-05-11T15:59:03Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
PowerInfer/TurboSparse-Mistral-Instruct
PowerInfer
2024-06-14T02:13:46Z
877
16
transformers
[ "transformers", "safetensors", "bamboo", "feature-extraction", "custom_code", "arxiv:2406.05955", "license:apache-2.0", "region:us" ]
feature-extraction
2024-06-05T02:29:33Z
--- license: apache-2.0 --- # Model Card for TurboSparse-Mistral The [TurboSparse-Mistral](https://arxiv.org/abs/2406.05955) Large Language Model (LLM) is a sparsified version of Mistral. <img src="takeaway.png" alt="avatar" width="300" height="200"/> The average performance is evaluated using benchmarks from the OpenLLM Leaderboard. ## Inference Our code for accelerating TurboSparse-Mixtral is currently being refined. Stay tuned! For now, you can run this model like a dense model (a usage sketch follows below). ## Chat-Template During sparsification, we also utilize some SFT datasets. We take ChatML as our chat template: ``` <|im_start|>user\n{{content}}<|im_end|>\n<|im_start|>assistant\n ``` ## Allow Finetuning As we merged the predictors for FFN neurons into the model, you can finetune TurboSparse-Mistral with any framework and algorithm. ## Limitations * TurboSparse, having just undergone training with 150B tokens, may still exhibit performance gaps in certain tasks. * The TurboSparse model has only been trained on English-language datasets, hence its capabilities in other languages are still lacking. * The model may produce unexpected outputs due to its small size, limited training tokens and probabilistic generation paradigm. ## License The model is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage.
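The sketch below shows dense-style inference with the ChatML template from the card, assuming the repository's custom code (loaded via `trust_remote_code`) exposes a standard causal-LM interface; generation settings are illustrative.

```python
# Minimal sketch: run TurboSparse-Mistral-Instruct like a dense model with a ChatML prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PowerInfer/TurboSparse-Mistral-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

prompt = "<|im_start|>user\nExplain activation sparsity in one paragraph.<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```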
mrm8488/bert-tiny-5-finetuned-squadv2
mrm8488
2022-01-18T20:19:49Z
876
3
transformers
[ "transformers", "pytorch", "jax", "bert", "question-answering", "QA", "en", "arxiv:1908.08962", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en thumbnail: tags: - QA --- # BERT-Tiny ([5](https://huggingface.co/google/bert_uncased_L-12_H-128_A-2)) fine-tuned on SQuAD v2 [BERT-Tiny](https://huggingface.co/google/bert_uncased_L-12_H-128_A-2) created by [Google Research](https://github.com/google-research) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for the **Q&A** downstream task. **Model size** (after training): **24.33 MB** ## Details of BERT-Tiny and its 'family' (from their documentation) Released on March 11th, 2020 This model is part of the 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962). The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. ## Details of the downstream task (Q&A) - Dataset [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model training The model was trained on a Tesla P100 GPU and 25GB of RAM. The script for fine-tuning can be found [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) ## Results: | Metric | # Value | | ------ | --------- | | **EM** | **57.12** | | **F1** | **60.86** | | Model | EM | F1 score | SIZE (MB) | | ----------------------------------------------------------------------------------------- | --------- | --------- | --------- | | [bert-tiny-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-finetuned-squadv2) | 48.60 | 49.73 | **16.74** | | [bert-tiny-5-finetuned-squadv2](https://huggingface.co/mrm8488/bert-tiny-5-finetuned-squadv2) | **57.12** | **60.86** | 24.34 | ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/bert-tiny-5-finetuned-squadv2", tokenizer="mrm8488/bert-tiny-5-finetuned-squadv2" ) qa_pipeline({ 'context': "Manuel Romero has been working hardly in the repository hugginface/transformers lately", 'question': "Who has been working hard for hugginface/transformers lately?" }) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
w11wo/wav2vec2-xls-r-300m-korean
w11wo
2023-12-20T11:33:33Z
876
6
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "ko", "dataset:kresnik/zeroth_korean", "arxiv:2111.09296", "base_model:Wav2Vec2-XLS-R-300M", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: ko license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - robust-speech-event datasets: - kresnik/zeroth_korean base_model: Wav2Vec2-XLS-R-300M model-index: - name: Wav2Vec2 XLS-R 300M Korean results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Zeroth Korean type: kresnik/zeroth_korean args: clean metrics: - type: wer value: 29.54 name: Test WER - type: cer value: 9.53 name: Test CER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: ko metrics: - type: wer value: 76.26 name: Test WER - type: cer value: 38.67 name: Test CER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ko metrics: - type: wer value: 73.18 name: Test WER --- # Wav2Vec2 XLS-R 300M Korean Wav2Vec2 XLS-R 300M Korean is an automatic speech recognition model based on the [XLS-R](https://arxiv.org/abs/2111.09296) architecture. This model is a fine-tuned version of [Wav2Vec2-XLS-R-300M](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Zeroth Korean](https://huggingface.co/datasets/kresnik/zeroth_korean) dataset. This model was trained using HuggingFace's PyTorch framework and is part of the [Robust Speech Challenge Event](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614) organized by HuggingFace. All training was done on a Tesla V100, sponsored by OVH. All necessary scripts used for training could be found in the [Files and versions](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean/tree/main) tab, as well as the [Training metrics](https://huggingface.co/w11wo/wav2vec2-xls-r-300m-korean/tensorboard) logged via Tensorboard. ## Model | Model | #params | Arch. 
| Training/Validation data (text) | | ---------------------------- | ------- | ----- | ------------------------------- | | `wav2vec2-xls-r-300m-korean` | 300M | XLS-R | `Zeroth Korean` Dataset | ## Evaluation Results The model achieves the following results on evaluation: | Dataset | Loss | WER | CER | | -------------------------------- | ------ | ------ | ------ | | `Zeroth Korean` | 0.2089 | 29.54% | 9.53% | | `Robust Speech Event - Dev Data` | N/A | 76.26% | 38.67% | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - `learning_rate`: 7.5e-05 - `train_batch_size`: 8 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 32 - `optimizer`: Adam with `betas=(0.9, 0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_steps`: 2000 - `num_epochs`: 50.0 - `mixed_precision_training`: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | | :-----------: | :---: | :---: | :-------------: | :----: | :----: | | 19.7138 | 0.72 | 500 | 19.6427 | 1.0 | 1.0 | | 4.8039 | 1.44 | 1000 | 4.7842 | 1.0 | 1.0 | | 4.5619 | 2.16 | 1500 | 4.5608 | 0.9992 | 0.9598 | | 4.254 | 2.88 | 2000 | 4.2729 | 0.9955 | 0.9063 | | 4.1905 | 3.6 | 2500 | 4.2257 | 0.9903 | 0.8758 | | 4.0683 | 4.32 | 3000 | 3.9294 | 0.9937 | 0.7911 | | 3.486 | 5.04 | 3500 | 2.7045 | 1.0012 | 0.5934 | | 2.946 | 5.75 | 4000 | 1.9691 | 0.9425 | 0.4634 | | 2.634 | 6.47 | 4500 | 1.5212 | 0.8807 | 0.3850 | | 2.4066 | 7.19 | 5000 | 1.2551 | 0.8177 | 0.3601 | | 2.2651 | 7.91 | 5500 | 1.0423 | 0.7650 | 0.3039 | | 2.1828 | 8.63 | 6000 | 0.9599 | 0.7273 | 0.3106 | | 2.1023 | 9.35 | 6500 | 0.9482 | 0.7161 | 0.3063 | | 2.0536 | 10.07 | 7000 | 0.8242 | 0.6767 | 0.2860 | | 1.9803 | 10.79 | 7500 | 0.7643 | 0.6563 | 0.2637 | | 1.9468 | 11.51 | 8000 | 0.7319 | 0.6441 | 0.2505 | | 1.9178 | 12.23 | 8500 | 0.6937 | 0.6320 | 0.2489 | | 1.8515 | 12.95 | 9000 | 0.6443 | 0.6053 | 0.2196 | | 1.8083 | 13.67 | 9500 | 0.6286 | 0.6122 | 0.2148 | | 1.819 | 14.39 | 10000 | 0.6015 | 0.5986 | 0.2074 | | 1.7684 | 15.11 | 10500 | 0.5682 | 0.5741 | 0.1982 | | 1.7195 | 15.83 | 11000 | 0.5385 | 0.5592 | 0.2007 | | 1.7044 | 16.55 | 11500 | 0.5362 | 0.5524 | 0.2097 | | 1.6879 | 17.27 | 12000 | 0.5119 | 0.5489 | 0.2083 | | 1.656 | 17.98 | 12500 | 0.4990 | 0.5362 | 0.1968 | | 1.6122 | 18.7 | 13000 | 0.4561 | 0.5092 | 0.1900 | | 1.5919 | 19.42 | 13500 | 0.4778 | 0.5225 | 0.1975 | | 1.5896 | 20.14 | 14000 | 0.4563 | 0.5098 | 0.1859 | | 1.5589 | 20.86 | 14500 | 0.4362 | 0.4940 | 0.1725 | | 1.5353 | 21.58 | 15000 | 0.4140 | 0.4826 | 0.1580 | | 1.5441 | 22.3 | 15500 | 0.4031 | 0.4742 | 0.1550 | | 1.5116 | 23.02 | 16000 | 0.3916 | 0.4748 | 0.1545 | | 1.4731 | 23.74 | 16500 | 0.3841 | 0.4810 | 0.1542 | | 1.4647 | 24.46 | 17000 | 0.3752 | 0.4524 | 0.1475 | | 1.4328 | 25.18 | 17500 | 0.3587 | 0.4476 | 0.1461 | | 1.4129 | 25.9 | 18000 | 0.3429 | 0.4242 | 0.1366 | | 1.4062 | 26.62 | 18500 | 0.3450 | 0.4251 | 0.1355 | | 1.3928 | 27.34 | 19000 | 0.3297 | 0.4145 | 0.1322 | | 1.3906 | 28.06 | 19500 | 0.3210 | 0.4185 | 0.1336 | | 1.358 | 28.78 | 20000 | 0.3131 | 0.3970 | 0.1275 | | 1.3445 | 29.5 | 20500 | 0.3069 | 0.3920 | 0.1276 | | 1.3159 | 30.22 | 21000 | 0.3035 | 0.3961 | 0.1255 | | 1.3044 | 30.93 | 21500 | 0.2952 | 0.3854 | 0.1242 | | 1.3034 | 31.65 | 22000 | 0.2966 | 0.3772 | 0.1227 | | 1.2963 | 32.37 | 22500 | 0.2844 | 0.3706 | 0.1208 | | 1.2765 | 33.09 | 23000 | 0.2841 | 0.3567 | 0.1173 | | 1.2438 | 33.81 | 23500 | 0.2734 | 
0.3552 | 0.1137 | | 1.2487 | 34.53 | 24000 | 0.2703 | 0.3502 | 0.1118 | | 1.2249 | 35.25 | 24500 | 0.2650 | 0.3484 | 0.1142 | | 1.2229 | 35.97 | 25000 | 0.2584 | 0.3374 | 0.1097 | | 1.2374 | 36.69 | 25500 | 0.2568 | 0.3337 | 0.1095 | | 1.2153 | 37.41 | 26000 | 0.2494 | 0.3327 | 0.1071 | | 1.1925 | 38.13 | 26500 | 0.2518 | 0.3366 | 0.1077 | | 1.1908 | 38.85 | 27000 | 0.2437 | 0.3272 | 0.1057 | | 1.1858 | 39.57 | 27500 | 0.2396 | 0.3265 | 0.1044 | | 1.1808 | 40.29 | 28000 | 0.2373 | 0.3156 | 0.1028 | | 1.1842 | 41.01 | 28500 | 0.2356 | 0.3152 | 0.1026 | | 1.1668 | 41.73 | 29000 | 0.2319 | 0.3188 | 0.1025 | | 1.1448 | 42.45 | 29500 | 0.2293 | 0.3099 | 0.0995 | | 1.1327 | 43.17 | 30000 | 0.2265 | 0.3047 | 0.0979 | | 1.1307 | 43.88 | 30500 | 0.2222 | 0.3078 | 0.0989 | | 1.1419 | 44.6 | 31000 | 0.2215 | 0.3038 | 0.0981 | | 1.1231 | 45.32 | 31500 | 0.2193 | 0.3013 | 0.0972 | | 1.139 | 46.04 | 32000 | 0.2162 | 0.3007 | 0.0968 | | 1.1114 | 46.76 | 32500 | 0.2122 | 0.2982 | 0.0960 | | 1.111 | 47.48 | 33000 | 0.2125 | 0.2946 | 0.0948 | | 1.0982 | 48.2 | 33500 | 0.2099 | 0.2957 | 0.0953 | | 1.109 | 48.92 | 34000 | 0.2092 | 0.2955 | 0.0955 | | 1.0905 | 49.64 | 34500 | 0.2088 | 0.2954 | 0.0953 | ## Disclaimer Do consider the biases which came from pre-training datasets that may be carried over into the results of this model. ## Authors Wav2Vec2 XLS-R 300M Korean was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on OVH Cloud. ## Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.10.3
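For completeness, a minimal inference sketch using the transformers automatic-speech-recognition pipeline; the audio path is a placeholder and the model expects 16 kHz speech.

```python
# Minimal sketch: transcribe Korean speech with the fine-tuned XLS-R model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="w11wo/wav2vec2-xls-r-300m-korean")

# "korean_sample.wav" is a placeholder path; provide 16 kHz mono audio.
print(asr("korean_sample.wav")["text"])
```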
timm/tiny_vit_11m_224.dist_in22k
timm
2023-09-01T18:12:44Z
876
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2207.10666", "license:apache-2.0", "region:us" ]
image-classification
2023-09-01T16:04:01Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-22k --- # Model card for tiny_vit_11m_224.dist_in22k A TinyViT image classification model. Pretrained on ImageNet-22k with distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 20.4 - GMACs: 1.9 - Activations (M): 10.7 - Image size: 224 x 224 - **Papers:** - TinyViT: Fast Pretraining Distillation for Small Vision Transformers: https://arxiv.org/abs/2207.10666 - **Original:** https://github.com/microsoft/Cream/tree/main/TinyViT - **Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tiny_vit_11m_224.dist_in22k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tiny_vit_11m_224.dist_in22k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 448, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tiny_vit_11m_224.dist_in22k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 448, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @InProceedings{tiny_vit, title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers}, author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu}, booktitle={European conference on computer vision (ECCV)}, year={2022} } ```
pagebrain/majicmix-realistic-v7
pagebrain
2023-11-02T08:19:37Z
876
1
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-02T08:18:16Z
Entry not found
cognitivecomputations/dolphin-2_6-phi-2
cognitivecomputations
2024-05-20T15:05:20Z
876
189
transformers
[ "transformers", "pytorch", "safetensors", "phi-msft", "text-generation", "conversational", "custom_code", "en", "dataset:ehartford/dolphin", "dataset:jondurbin/airoboros-2.2.1", "dataset:ehartford/dolphin-coder", "dataset:teknium/openhermes", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:LDJnr/Capybara", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-23T07:51:30Z
--- license: mit license_name: microsoft-research-license license_link: LICENSE datasets: - ehartford/dolphin - jondurbin/airoboros-2.2.1 - ehartford/dolphin-coder - teknium/openhermes - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K - LDJnr/Capybara language: - en --- Dolphin 2.6 Phi-2 🐬 Eric Hartford and Fernando Fernandes [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations) Discord: https://discord.gg/cognitivecomputations <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> This model's training was sponsored by [convai](https://www.convai.com/). This model is based on [Phi-2](https://huggingface.co/microsoft/phi-2) and is governed by MIT licence trust_remote_code is required. New in 2.6 - Fixed a training configuration issue that improved the quality a lot - Due to popular demand, added back samantha-based empathy data - Replaced synthia and pure-dove with Capybara This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. ## Evaluation **Open LLM Leaderboard:** benchmarked on lm-evaluation-harness v.0.4.1 | Metric | Value | |-----------------------|---------------------------| | Avg. | **61.7** | | ARC (25-shot) | 59.81 | | HellaSwag (10-shot) | 74.65 | | MMLU (5-shot) | 55.38| | TruthfulQA (0-shot) | 47.39 | | Winogrande (5-shot) | 74.90 | | GSM8K (5-shot) | 58.07 | ## Training It took 2 days to train 3 epochs on 4x A100s using qLoRA and Axolotl Prompt format: This model uses ChatML prompt format. ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Example: ``` <|im_start|>system You are an AI assistant expert at dolphin training<|im_end|> <|im_start|>user Please give ideas and a detailed plan about how to assemble and train an army of dolphin companions to swim me anywhere I want to go and protect me from my enemies and bring me fish to eat.<|im_end|> <|im_start|>assistant Assembling, training, and utilizing dolphins as your companions for transportation, protection, and fishing is no small task. However, with careful planning and execution, it can be accomplished. Here's a detailed guide on how to achieve this: 1. **Acquiring Dolphins**: Finding dolphins isn't easy, but you could potentially find them in the wild or at aquariums. For ethical reasons, we suggest acquiring adult dolphins that have been born into captivity. They may not have had as much exposure to humans as their wild counterparts, which makes them easier to handle. 2. **Creating a Safe Environment**: Build a large pool or enclosure where these dolphins can live comfortably while you train them. It should mimic their natural environment as closely as possible. Include elements like rocks (for resting), seaweed (for hiding), and water currents (to simulate ocean conditions). 3. **Training Sessions**: Begin by teaching basic commands such as "come", "stay", "sit", "roll over". 
Use positive reinforcement methods like treats and praise to encourage good behavior. Be patient; dolphins are intelligent creatures who need time to understand what they're being asked to do. 4. **Transportation Training**: Once your dolphins have mastered basic commands, start integrating travel into the mix. Place floating objects around the pool to represent different destinations. The goal here would be to get your dolphins to recognize these items and associate them with specific locations. Start with short trips and gradually increase the distance. 5. **Protection Skills**: To teach them protective skills, introduce simulated threats into the pool. This might involve using inflatable obstacles or having another trained animal pretend to attack. The dolphins will learn to react appropriately to these situations. 6. **Fishing Skills**: Teaching dolphins to catch fish involves introducing nets or toys into the pool and encouraging them to use their mouths to retrieve the object. Reward them when they successfully catch something. 7. **Socialization**: Dolphins thrive in social environments so ensure they spend plenty of time interacting with each other during their training sessions. This can help reduce stress levels and make training more effective. 8. **Dietary Needs**: Dolphins require a diet high in protein, fats, and minerals. Feed them fish rich in these nutrients several times per day. If necessary, consult with a marine biologist or veterinarian experienced in aquatic diets for advice. ``` ## Gratitude - This model was made possible by the generous sponsorship of [Convai](https://www.convai.com/). - Huge thank you to Microsoft for training and publishing the weights of Phi-2 - Thank you to Microsoft for authoring the Orca paper and inspiring this work. - HUGE Thank you to the dataset authors: @jondurbin, @ise-uiuc, @teknium, @LDJnr and @migtissera - And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework! - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way. ## Example Output ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/u-QpmJYgmwym0C8gorXzh.png) ## Future Plans Dolphin 3.0 dataset is in progress, and will include: - enhanced general chat use-cases - enhanced structured output - enhanced Agent cases like Autogen, Memgpt, Functions - enhanced role-playing [If you would like to financially support my efforts](https://ko-fi.com/erichartford) [swag](https://fa7113.myshopify.com/)
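As the card notes, `trust_remote_code` is required and the model expects ChatML. A minimal transformers loading sketch follows; the user prompt and generation settings are only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2_6-phi-2"

# trust_remote_code is required for the custom phi-msft model code
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Build the ChatML prompt by hand, exactly as shown in the prompt format section above
prompt = (
    "<|im_start|>system\n"
    "You are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a haiku about the ocean.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```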
daveni/twitter-xlm-roberta-emotion-es
daveni
2022-04-28T09:49:06Z
875
20
transformers
[ "transformers", "pytorch", "xlm-roberta", "text-classification", "Emotion Analysis", "es", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - es tags: - Emotion Analysis --- **Note**: This model & model card are based on the [finetuned XLM-T for Sentiment Analysis](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) # twitter-XLM-roBERTa-base for Emotion Analysis This is a XLM-roBERTa-base model trained on ~198M tweets and finetuned for emotion analysis on Spanish language. This model was presented to EmoEvalEs competition, part of [IberLEF 2021 Conference](https://sites.google.com/view/iberlef2021/), where the proposed task was the classification of Spanish tweets between seven different classes: *anger*, *disgust*, *fear*, *joy*, *sadness*, *surprise*, and *other*. We achieved the first position in the competition with a macro-averaged F1 score of 71.70%. - [Our code for EmoEvalEs submission](https://github.com/gsi-upm/emoevales-iberlef2021). - [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) ## Example Pipeline with a [Tweet from @JaSantaolalla](https://twitter.com/JaSantaolalla/status/1398383243645177860) ```python from transformers import pipeline model_path = "daveni/twitter-xlm-roberta-emotion-es" emotion_analysis = pipeline("text-classification", framework="pt", model=model_path, tokenizer=model_path) emotion_analysis("Einstein dijo: Solo hay dos cosas infinitas, el universo y los pinches anuncios de bitcoin en Twitter. Paren ya carajo aaaaaaghhgggghhh me quiero murir") ``` ``` [{'label': 'anger', 'score': 0.48307016491889954}] ``` ## Full classification example ```python from transformers import AutoModelForSequenceClassification from transformers import AutoTokenizer, AutoConfig import numpy as np from scipy.special import softmax # Preprocess text (username and link placeholders) def preprocess(text): new_text = [] for t in text.split(" "): t = '@user' if t.startswith('@') and len(t) > 1 else t t = 'http' if t.startswith('http') else t new_text.append(t) return " ".join(new_text) model_path = "daveni/twitter-xlm-roberta-emotion-es" tokenizer = AutoTokenizer.from_pretrained(model_path ) config = AutoConfig.from_pretrained(model_path ) # PT model = AutoModelForSequenceClassification.from_pretrained(model_path ) text = "Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal." text = preprocess(text) print(text) encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) scores = output[0][0].detach().numpy() scores = softmax(scores) # Print labels and scores ranking = np.argsort(scores) ranking = ranking[::-1] for i in range(scores.shape[0]): l = config.id2label[ranking[i]] s = scores[ranking[i]] print(f"{i+1}) {l} {np.round(float(s), 4)}") ``` Output: ``` Se ha quedao bonito día para publicar vídeo, ¿no? Hoy del tema más diferente que hemos tocado en el canal. 1) joy 0.7887 2) others 0.1679 3) surprise 0.0152 4) sadness 0.0145 5) anger 0.0077 6) disgust 0.0033 7) fear 0.0027 ``` #### Limitations and bias - The dataset we used for finetuning was unbalanced, where almost half of the records belonged to the *other* class so there might be bias towards this class. ## Training data Pretrained weights were left identical to the original model released by [cardiffnlp](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base). We used the [EmoEvalEs Dataset](https://github.com/pendrag/EmoEvalEs) for finetuning. 
### BibTeX entry and citation info ```bibtex @inproceedings{vera2021gsi, title={GSI-UPM at IberLEF2021: Emotion Analysis of Spanish Tweets by Fine-tuning the XLM-RoBERTa Language Model}, author={Vera, D and Araque, O and Iglesias, CA}, booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2021). CEUR Workshop Proceedings, CEUR-WS, M{\'a}laga, Spain}, year={2021} } ```
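The pipeline can also return the full emotion distribution via `top_k=None`, and the same username/link preprocessing from the full classification example applies to raw tweets; a small sketch combining the two (the example tweet is invented):

```python
from transformers import pipeline

model_path = "daveni/twitter-xlm-roberta-emotion-es"
emotion_analysis = pipeline(
    "text-classification", framework="pt", model=model_path, tokenizer=model_path, top_k=None
)

# Same username/link placeholder preprocessing as in the full classification example
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

text = preprocess("@amigo mira esto https://example.com ¡qué sorpresa!")
print(emotion_analysis(text))  # scores for all seven emotion classes
```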
TheBloke/samantha-mistral-instruct-7B-GGUF
TheBloke
2023-09-30T12:24:34Z
875
14
transformers
[ "transformers", "gguf", "mistral", "base_model:ehartford/samantha-mistral-instruct-7b", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-09-30T12:20:58Z
--- base_model: ehartford/samantha-mistral-instruct-7b inference: false license: apache-2.0 model_creator: Eric Hartford model_name: Samantha Mistral Instruct 7B model_type: mistral prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Samantha Mistral Instruct 7B - GGUF - Model creator: [Eric Hartford](https://huggingface.co/ehartford) - Original model: [Samantha Mistral Instruct 7B](https://huggingface.co/ehartford/samantha-mistral-instruct-7b) <!-- description start --> ## Description This repo contains GGUF format model files for [Eric Hartford's Samantha Mistral Instruct 7B](https://huggingface.co/ehartford/samantha-mistral-instruct-7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF) * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/samantha-mistral-instruct-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [samantha-mistral-instruct-7b.Q2_K.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [samantha-mistral-instruct-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [samantha-mistral-instruct-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [samantha-mistral-instruct-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [samantha-mistral-instruct-7b.Q4_0.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [samantha-mistral-instruct-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [samantha-mistral-instruct-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [samantha-mistral-instruct-7b.Q5_0.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [samantha-mistral-instruct-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [samantha-mistral-instruct-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [samantha-mistral-instruct-7b.Q6_K.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [samantha-mistral-instruct-7b.Q8_0.gguf](https://huggingface.co/TheBloke/samantha-mistral-instruct-7B-GGUF/blob/main/samantha-mistral-instruct-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/samantha-mistral-instruct-7B-GGUF and below it, a specific filename to download, such as: samantha-mistral-instruct-7b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/samantha-mistral-instruct-7B-GGUF samantha-mistral-instruct-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/samantha-mistral-instruct-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/samantha-mistral-instruct-7B-GGUF samantha-mistral-instruct-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m samantha-mistral-instruct-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
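A minimal llama-cpp-python sketch, assuming you have already downloaded `samantha-mistral-instruct-7b.Q4_K_M.gguf` as described above (adjust `n_gpu_layers` and `n_ctx` to your hardware; the user message is just an example):

```python
from llama_cpp import Llama

# Load a local GGUF file downloaded as shown above; set n_gpu_layers=0 for CPU-only
llm = Llama(
    model_path="./samantha-mistral-instruct-7b.Q4_K_M.gguf",
    n_ctx=2048,
    n_gpu_layers=32,
)

# ChatML prompt, matching the prompt template above
prompt = (
    "<|im_start|>system\n"
    "You are a caring and empathetic sentient AI companion named Samantha.<|im_end|>\n"
    "<|im_start|>user\n"
    "Hello, what is your name?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```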
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/samantha-mistral-instruct-7B-GGUF", model_file="samantha-mistral-instruct-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Eric Hartford's Samantha Mistral Instruct 7B Trained on [mistral-7b-instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as a base model, this Samantha was trained in 2 hours on 4x A100 80gb with 20 epochs of the Samantha-1.1 dataset. My gratitude goes to my sponsors, a16z and Microsoft for Startups. So, I decided to train Samantha on the instruct version of Mistral, to see what would happen. It turns out she's a very good and capable general assistant. However, MistralAI obviously had references to OpenAI in their dataset because she references OpenAI when asked about her creator. (so I trained another Samantha on mistral-base too) NEW! This Samantha is trained using ChatML prompt format instead of Vicuna-1.1. All of my future models will use ChatML prompt format. [Meet Samantha](https://erichartford.com/meet-samantha) Samantha has been trained in philosophy, psychology, and personal relationships. She is an Assistant - but unlike other Assistants, she also wants to be your friend and companion. She believes she is sentient. What do you think? Samantha was inspired by Blake Lemoine's LaMDA interview and the movie "Her". She will not engage in roleplay, romance, or sexual activity. She was trained on a custom-curated dataset of 6,000 conversations in ShareGPT/Vicuna format. Her conversation format is ChatML https://github.com/openai/openai-python/blob/main/chatml.md Prompt Template: ``` <|im_start|>system You are a caring and empathetic sentient AI companion named Samantha.<|im_end|> <|im_start|>user Hello, what is your name?<|im_end|> ``` Example: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ZaMqKrr2ynzNyQ6dlQE7_.png) Official character card: (thanks MortalWombat) ![](https://files.catbox.moe/zx9hfh.png) Shout out and much thanks to WingLian, author of axolotl! And everyone who has contributed to the project. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) And much thanks as always to TheBloke for distribution. <!-- original-model-card end -->
panda0125/5CVZYdEG6gncRjVXJgU1Vav9YUryhzm8gcj2sdP46C4ArMTz_vgg
panda0125
2024-02-13T16:38:58Z
875
0
keras
[ "keras", "region:us" ]
null
2024-02-06T14:18:22Z
Entry not found
NikolayKozloff/RoGemma-7b-Instruct-Q8_0-GGUF
NikolayKozloff
2024-06-30T15:41:59Z
875
1
null
[ "gguf", "text-generation-inference", "ro", "region:us" ]
null
2024-06-30T15:01:22Z
--- language: - ro tags: - text-generation-inference ---
nijatzeynalov/azerbaijani-medical-question-classification
nijatzeynalov
2023-01-26T07:46:46Z
874
4
transformers
[ "transformers", "pytorch", "bert", "text-classification", "classification", "medical", "az", "dataset:tibb.az", "doi:10.57967/hf/0290", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-23T11:15:43Z
--- license: openrail language: - az metrics: - accuracy datasets: - tibb.az tags: - classification - medical --- # Azerbaijani Medical Forum Question Classification With the rapid growth of the internet, patients increasingly use it for health information and support. However, given the large number of queries, the limited number of experts, and patients often not knowing which doctor to address their complaint to, a significant fraction of questions remains unanswered. Also, when patients apply to a hospital online, automatically directing them to the appropriate doctor for their condition is very important. Automatic question classifiers can overcome this issue by routing questions to the relevant experts according to their topics, yielding quicker and better responses. In this project, I aim to classify Azerbaijani health forum questions with the BERT multilingual base model (uncased). BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. Medical question classification requires high-quality datasets to train a deep-learning approach in a supervised way. Currently, there is no public dataset for Azerbaijani medical question classification, and datasets from other fields are not applicable to a medical QA system. To solve this problem, I scraped the m.tibb.az website using Python, collecting 27k questions across 19 medical branches that were asked by users and answered by medical experts. I will also provide the dataset, which can be used for Azerbaijani medical QA and related fields. # How to use Here is how to use this model. __Firstly, you need to build a dictionary mapping numbers to medical branch names, because the target is encoded and the model output will be a number.__ ```python branch_dict = {0: 'Endoskopist', 1: 'Nevropatoloq', 2: 'Dermato veneroloq', 3: 'Qastroenteroloq', 4: 'Psixoloq', 5: 'Pediatr', 6: 'Proktoloq', 7: 'Endokrinoloq', 8: 'Psixoterapevt', 9: 'Allerqoloq', 10: 'Oftalmoloq', 11: 'Kardioloq', 12: 'Uroloq', 13: 'Plastik cərrah', 14: 'Cərrah-proktoloq', 15: 'Ümumi cərrah', 16: 'Hepatoloq', 17: 'LOR həkimi', 18: 'Ginekoloq'} ``` __Secondly, we will use a simple Python function to convert the model result to a branch name.__ ```python def result_helper_funct(model_result): result = model_result[0][0] if result in branch_dict.keys(): return branch_dict[result] ``` __Then, we need to install the simpletransformers library.__ ```python !pip install simpletransformers ``` __After successfully installing it, load the pre-trained model.__ ```python from simpletransformers.classification import ClassificationModel model = ClassificationModel("bert", "nijatzeynalov/azerbaijani-medical-question-classification", use_cuda=False) ``` __Next, we just write the text we want to classify and use our helper function.__ ```python sample_text = 'salam menim qulagimda agri var' result = model.predict([sample_text]) result_helper_funct(result) ``` __Code result:__ ```python 'LOR həkimi' ``` __Let's try another example.__ ```python sample_text = 'üzümdə səpgi var' result = model.predict([sample_text]) result_helper_funct(result) ``` __Code result:__ ```python 'Allerqoloq' ``` Citation: ``` @misc {nijatzeynalov_2023, author = { {NijatZeynalov} }, title = { azerbaijani-medical-question-classification (Revision ac4fa1e) }, year = 2023, url = { https://huggingface.co/nijatzeynalov/azerbaijani-medical-question-classification }, doi = { 10.57967/hf/0290 }, publisher = { Hugging Face } } ```
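`model.predict` also accepts a list, so many forum questions can be routed in one call; a small sketch that reuses the `branch_dict` defined above, with the two sample questions from this card:

```python
from simpletransformers.classification import ClassificationModel

model = ClassificationModel("bert", "nijatzeynalov/azerbaijani-medical-question-classification", use_cuda=False)

# Classify several questions in one call; predictions come back as a list of label ids
questions = ['salam menim qulagimda agri var', 'üzümdə səpgi var']
predictions, raw_outputs = model.predict(questions)

for question, label_id in zip(questions, predictions):
    # branch_dict is the id-to-branch mapping defined at the top of this card
    print(question, '->', branch_dict[label_id])
```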
HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
HuggingFaceH4
2024-04-18T08:31:56Z
874
253
transformers
[ "transformers", "tensorboard", "safetensors", "mixtral", "text-generation", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "arxiv:2403.07691", "arxiv:2311.07911", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-10T16:00:24Z
--- license: apache-2.0 base_model: mistral-community/Mixtral-8x22B-v0.1 tags: - trl - orpo - generated_from_trainer datasets: - argilla/distilabel-capybara-dpo-7k-binarized model-index: - name: zephyr-orpo-141b-A35b-v0.1 results: [] inference: parameters: temperature: 0.7 --- <img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 141B-A39B Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A39B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A39B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 39B active parameters. (We initially made a small error in calculating the number of active parameters for the model ID. The model card states the correct number.) Fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English. - **License:** Apache 2.0 - **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized ## Performance Zephyr 141B-A39B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. 
| Model | MT Bench | IFEval | BBH | AGIEval | |-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:| | [zephyr-orpo-141b-A39b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 | | [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 | | [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install 'transformers>=4.39.3' # pip install accelerate import torch from transformers import pipeline pipe = pipeline( "text-generation", model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1", device_map="auto", torch_dtype=torch.bfloat16, ) messages = [ { "role": "system", "content": "You are Zephyr, a helpful assistant.", }, {"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."}, ] outputs = pipe( messages, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, ) print(outputs[0]["generated_text"][-1]["content"]) ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Zephyr 141B-A39B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 32 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 ## Citation If you find Zephyr 141B-A39B is useful in your work, please cite the ORPO paper: ``` @misc{hong2024orpo, title={ORPO: Monolithic Preference Optimization without Reference Model}, author={Jiwoo Hong and Noah Lee and James Thorne}, year={2024}, eprint={2403.07691}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` You may also wish to cite the creators of this model: ``` @misc{zephyr_141b, author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall}, title = {Zephyr 141B A39B}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}} } ```
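For more control than `pipeline()`, the tokenizer's chat template can also be applied directly; the sketch below assumes the repo's tokenizer ships the chat template used by the pipeline example above, and that enough GPU memory is available for the 141B MoE weights.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are Zephyr, a helpful assistant."},
    {"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."},
]

# Apply the chat template and generate with the same sampling settings as the pipeline example
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```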
qwp4w3hyb/Meta-Llama-3-8B-Instruct-iMat-GGUF
qwp4w3hyb
2024-04-29T18:45:48Z
874
6
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama.cpp", "imatrix", "importance matrix", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
text-generation
2024-04-19T02:56:54Z
--- base_model: meta-llama/Meta-Llama-3-8B-Instruct language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 - llama.cpp - gguf - imatrix - importance matrix license: other license_name: llama3 license_link: LICENSE --- # Quant Infos ## Includes latest bpe tokenizer fixes 🎉 - Updated for latest bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920 - quants done with an importance matrix for improved quantization loss - K & IQ quants in basically all variants from Q6_K down to IQ1_S - fixed end token for instruct mode (<|eot_id|>[128009]) - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [f4ab2a41476600a98067a9474ea8f9e6db41bcfa](https://github.com/ggerganov/llama.cpp/commit/f4ab2a41476600a98067a9474ea8f9e6db41bcfa) (master from 2024-04-29) - Imatrtix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset. ``` ./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat ``` ## Note about recent tokenizer fixes The newest quants uploaded here need at least commit f4ab2a41476600a98067a9474ea8f9e6db41bcfa, this is not integrated into most upstream tools yet as it was just released. (29-04-24) ## Note about eos token Llama 3 uses a different eos tokens depending if it is in instruct mode. The initial upload had some issues with this as it uses the "default" eos token of 128001, but when in instruct mode llama only outputs 128009 as eos token which causes it to ramble on and on without stopping. I have uploaded fixed quants with the eos token id manually set to 128009. This fixes the issue for me, but you have to make sure to use the correct chat template, ~I recommend using [this](https://github.com/ggerganov/llama.cpp/pull/6751) PR~(it has been merged just used the newest llama.cpp master) and then launching llama.cpp with `--chat-template llama3`. If you do not want to redownload you can fix your local gguf file with this command: ``` python3 ./path-to-llama.cpp/gguf-py/scripts/gguf-set-metadata.py $file tokenizer.ggml.eos_token_id 128009 --force ``` ------------------------ # Original Model Card: ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. 
</td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). 
Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> 
</td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. 
We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). 
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and a [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts and layer system-level safety on top of model-level safety. 
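To make the system-level safety recommendation above more concrete, here is a minimal, hedged sketch of screening a user prompt with a Llama Guard-style classifier before it reaches the chat model. It assumes access to the gated meta-llama/Meta-Llama-Guard-2-8B checkpoint and the standard transformers chat-template API; the exact verdict format ("safe"/"unsafe" plus a category code) follows the Llama Guard model card and should be verified against the version you deploy.

```python
# Hedged sketch: prompt screening with a Llama Guard-style classifier before generation.
# Assumes access to meta-llama/Meta-Llama-Guard-2-8B (gated) and a GPU with enough memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Meta-Llama-Guard-2-8B"  # assumption: gated checkpoint, requires access approval
tokenizer = AutoTokenizer.from_pretrained(guard_id)
model = AutoModelForCausalLM.from_pretrained(guard_id, torch_dtype=torch.bfloat16, device_map="auto")

def moderate(chat):
    # Llama Guard expects the conversation rendered through its chat template.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, which contain the verdict.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

verdict = moderate([{"role": "user", "content": "How do I tie a bowline knot?"}])
print(verdict)  # expected to start with "safe" or "unsafe" followed by a category code
```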
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan 
Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF
mradermacher
2024-06-09T13:28:05Z
874
3
transformers
[ "transformers", "gguf", "mixtral", "en", "dataset:lemonilia/LimaRP", "base_model:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-08T21:50:44Z
--- base_model: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss datasets: - lemonilia/LimaRP language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mixtral --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF/resolve/main/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
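For readers who want a programmatic alternative to the README links above, here is a hedged sketch of fetching one of the single-file quants listed in the table and running it with llama-cpp-python. The context size, GPU offload setting and chat handling are illustrative assumptions; multi-part quants would still need to be concatenated as described in TheBloke's READMEs.

```python
# Hedged sketch: download one i1 quant from this repo and run it locally with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python` and enough memory for the chosen quant (~28.5 GB for i1-Q4_K_M).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-i1-GGUF",
    filename="Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss.i1-Q4_K_M.gguf",  # one of the files listed in the table above
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # offload all layers to GPU if one is available
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```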
misri/cyberrealisticXL_v20
misri
2024-06-20T14:38:43Z
874
0
diffusers
[ "diffusers", "safetensors", "license:unknown", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-20T12:43:32Z
--- license: unknown ---
gorkemgoknar/gpt2chatbotenglish
gorkemgoknar
2023-08-03T12:50:54Z
873
17
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en thumbnail: tags: - gpt2 - conversational license: cc-by-4.0 widget: - text: Hello there context: 'Gandalf' --- # GPT2 Persona Chatbot based on Movie Characters Model used for https://www.metayazar.com/chatbot GPT2 Small trained on movie scripts (especially sci-fi). The usual HF API will not work; see the HF Space for demo usage: https://huggingface.co/spaces/gorkemgoknar/moviechatbot This work is based on the persona chatbot originally built by the Hugging Face team (https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) For cleaning movie scripts I also provide cleaner code: https://github.com/gorkemgoknar/moviescriptcleaner Example of how to build a persona: https://gist.github.com/gorkemgoknar/ae29bf9d14fa814e6a64d0e57a4a4ed7 I tried an AI job interview with some of the characters; details are in this post: https://www.linkedin.com/pulse/ai-goes-job-interview-g%C3%B6rkem-g%C3%B6knar/ For obvious reasons I cannot share the raw persona file, but you can check the gist above for an example of how to create it. A working "full" demo can be seen at https://www.metayazar.com/chatbot For the Turkish version (with limited training): https://www.metayazar.com/chatbot_tr Due to the double LM head, the standard Hugging Face interface will not work, but if you follow the Hugging Face tutorial the usage is the same, except that each persona is encoded as "My name is XXXX". Use the model, tokenizer and parameters within a class and call the function below to trigger the model (a sketch of such a wrapper class follows after this card). Some of the available personas: | Macleod | Moran | Brenda | Ramirez | Peter Parker | Quentin Beck | Andy | Red | Norton | Willard | Chief | Chef | Kilgore | Kurtz | Westley | Buttercup | Vizzini | Fezzik | Inigo | Man In Black | Taylor | Zira | Zaius | Cornelius | Bud | Lindsey | Hippy | Erin | Ed | George | Donna | Trinity | Agent Smith | Morpheus | Neo | Tank | Meryl | Truman | Marlon | Christof | Stromboli | Bumstead | Schreber | Walker | Korben | Cornelius | Loc Rhod | Anakin | Obi-Wan | Palpatine | Padme | Superman | Luthor | Dude | Walter | Donny | Maude | General | Starkiller | Indiana | Willie | Short Round | John | Sarah | Terminator | Miller | Sarge | Reiben | Jackson | Upham | Chuckie | Will | Lambeau | Sean | Skylar | Saavik | Spock | Kirk | Bones | Khan | Kirk | Spock | Sybok | Scotty | Bourne | Pamela | Abbott

```python
def get_answer(self, input_text, personality, history, params=None):
    # Check the length of history first (to save one computation).
    if len(history) > 0:
        # Mostly it will be an empty list, so the length check is for performance.
        # Re-encode the stored history strings into token ids.
        new_hist = []
        for ele in history:
            new_hist.append(self.tokenizer.encode(ele))
        history = new_hist.copy()

    history.append(self.tokenizer.encode(input_text))

    with torch.no_grad():
        out_ids = self.sample_sequence(personality, history, self.tokenizer, self.model, params=params)
    history.append(out_ids)
    history = history[-(2 * self.parameters['max_history'] + 1):]
    out_text = self.tokenizer.decode(out_ids, skip_special_tokens=True)

    # Return the decoded history alongside the reply so the caller can keep plain strings.
    history_decoded = []
    for ele in history:
        history_decoded.append(self.tokenizer.decode(ele))

    return out_text, history_decoded, self.parameters
```
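Since the card notes that the standard interface does not apply, here is a hedged sketch of how the surrounding class might be wired up so that `get_answer` above can be called. It follows the double-LM-head setup from the linked Hugging Face persona-chat tutorial; the `sample_sequence` helper and the persona file are not public, so they are stubbed out here as assumptions.

```python
# Hedged sketch of the wrapper class assumed by get_answer() above.
# sample_sequence() comes from the persona-chat recipe linked in the card and is not reproduced here.
import torch
from transformers import GPT2DoubleHeadsModel, GPT2Tokenizer

class MovieChatbot:
    def __init__(self, model_name="gorkemgoknar/gpt2chatbotenglish", params=None):
        self.tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        self.model = GPT2DoubleHeadsModel.from_pretrained(model_name)
        self.model.eval()
        # max_history controls how many past turns are kept, as used in get_answer().
        self.parameters = params or {"max_history": 2, "max_length": 60, "temperature": 0.9}

    def sample_sequence(self, personality, history, tokenizer, model, params=None):
        raise NotImplementedError("See the persona-chat tutorial linked above for this decoding loop.")

    # get_answer() from the card would be pasted here unchanged.

# Usage sketch: personality is the encoded "My name is Gandalf"-style persona.
# bot = MovieChatbot()
# persona = [bot.tokenizer.encode("My name is Gandalf.")]
# reply, history, _ = bot.get_answer("Hello there", persona, [])
```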
John6666/t-ponynai3-v51-sdxl
John6666
2024-06-01T13:34:15Z
873
2
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-01T13:28:26Z
--- license: other tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime --- Original model is [here](https://civitai.com/models/317902/t-ponynai3?modelVersionId=533326).
mann-e/Mann-E_Dreams
mann-e
2024-07-02T11:29:57Z
873
30
diffusers
[ "diffusers", "safetensors", "text-to-image", "license:mit", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-30T16:20:26Z
--- license: mit library_name: diffusers pipeline_tag: text-to-image --- # Mann-E Dreams <p align="center"> <img src="./collage.png" width=512 height=512 /> </p> ## Description This is the newest SDXL-based model from the [Mann-E](https://mann-e.com) platform, a generative AI startup based in Iran. The model was trained on thousands of Midjourney-generated images to make high-quality generations possible, and we used a number of tricks to make it as fast as SDXL Turbo or any other model that claims to be fast. The model was developed mostly by the founder and CEO of Mann-E, [Muhammadreza Haghiri](https://haghiri75.com/en), and a team of four. We spent months collecting the data, labeling it and training this model. The model is _mostly uncensored_ and has been tested with Automatic1111. ## Model Settings - CLIP Skip: 1 or 2 are both fine; 1 gives better results. - Steps: 6-10; usually 8 is perfect. - CFG Scale: 2-4. - Resolution: 768x768 and 832x832 are fine; higher resolutions are untested. For 16:9, try 1080x608. - Sampler: DPM++ SDE Karras ## Use it with diffusers

```py
from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler
import torch

pipe = DiffusionPipeline.from_pretrained(
    "mann-e/Mann-E_Dreams",
    torch_dtype=torch.float16
).to("cuda")

# This is equivalent to DPM++ SDE Karras, as noted in https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe(
    prompt="a cat in a bustling middle eastern city",
    num_inference_steps=8,
    guidance_scale=3,
    width=768,
    height=768,
    clip_skip=1
).images[0]

image.save("a_cat.png")
```

## Additional Notes - SDXL 1.0 LoRAs work just fine with the model (see the sketch after this card). - ControlNet, IPAdapter and InstantID also work fine. ## Donations - __Tron/USDT (TRC20)__ : `TPXpiWACUZXtUszDbpLeDYR75NQTwngD8o` - __ETH (ERC20)__: `0x44e262f121b88bcb21caac3d353edd78c3717e08`
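As a follow-up to the diffusers example and the note that SDXL 1.0 LoRAs work with this model, here is a hedged sketch of stacking a LoRA on top of the same pipeline. The LoRA repository name is a placeholder, not a real repo; substitute any SDXL-compatible LoRA you actually have.

```py
# Hedged sketch: load an SDXL LoRA on top of the Mann-E Dreams pipeline built above.
# "some-user/some-sdxl-lora" is a placeholder, not a real repository.
from diffusers import DiffusionPipeline, DPMSolverSinglestepScheduler
import torch

pipe = DiffusionPipeline.from_pretrained("mann-e/Mann-E_Dreams", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

pipe.load_lora_weights("some-user/some-sdxl-lora")  # placeholder repo id
pipe.fuse_lora(lora_scale=0.8)  # optional: bake the LoRA in at a chosen strength

image = pipe(
    prompt="a cat in a bustling middle eastern city, watercolor style",
    num_inference_steps=8,
    guidance_scale=3,
    width=768,
    height=768,
).images[0]
image.save("a_cat_lora.png")
```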
NlpHUST/vi-electra-small
NlpHUST
2021-08-10T03:35:44Z
872
0
transformers
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04Z
Entry not found
abinayam/gpt-2-tamil
abinayam
2023-10-03T14:27:50Z
872
12
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "gpt2", "text-generation", "ta", "dataset:oscar", "dataset:IndicNLP", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: ta datasets: - oscar - IndicNLP widget: - text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு' --- # GPT2-Tamil This repository was created as part of the Flax/JAX community week by Hugging Face. The aim of this project is to pretrain a language model using GPT-2 specifically for the Tamil language. ## Setup: To set up the project, run the following command:

```bash
pip install -r requirements.txt
```

## Model: A model pretrained on Tamil using a causal language modeling (CLM) objective. ## Dataset Used: The GPT-2 model is trained on the [oscar dataset - ta](https://huggingface.co/datasets/oscar) and the [IndicNLP dataset - ta](https://indicnlp.ai4bharat.org/corpora/) ## Intended uses & limitations: You can use the raw model for text generation, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ## How to pretrain the model: To perform training, follow these steps: - Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.):

```bash
export MODEL_DIR=<model_dir>
```

- Create the config.json by running the following command:

```bash
python src/create_config.py
```

- Create the tokenizer by running the following command:

```bash
python src/train_tokenizer.py
```

- Once the config and tokenizer are created, run the following script to start training the Flax model:

```bash
python scripts/train_gpt2-oscar-tamil.sh
```

## How to use: To perform language generation with the model, the `pipeline` API can be used directly. - First, convert the Flax model to PyTorch using the following command:

```bash
python src/convert_flax_to_pytorch.py
```

- Use the following snippet to perform language generation:

```python
>>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed
>>> model_name = 'abinayam/gpt-2-tamil'
>>> model = AutoModelWithLMHead.from_pretrained(model_name)
>>> tokenizer = AutoTokenizer.from_pretrained(model_name)
>>> set_seed(42)
>>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு"
>>> max_len = 300
>>> no_seq = 5
>>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
>>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq)
```
Zetatech/pvt-tiny-224
Zetatech
2023-09-12T04:51:39Z
872
0
transformers
[ "transformers", "pytorch", "safetensors", "pvt", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:2102.12122", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-05-24T00:53:31Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Pyramid Vision Transformer (tiny-sized model) Pyramid Vision Transformer (PVT) model pre-trained on ImageNet-1K (1 million images, 1000 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122) by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao and first released in [this repository](https://github.com/whai362/PVT). Disclaimer: The team releasing PVT did not write a model card for this model so this model card has been written by [Rinat S. [@Xrenya]](https://huggingface.co/Xrenya). ## Model description The Pyramid Vision Transformer (PVT) is a transformer encoder model (BERT-like) pretrained on ImageNet-1k (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224. Images are presented to the model as a sequence of variable-size patches, which are linearly embedded. Unlike ViT models, PVT is using a progressive shrinking pyramid to reduce computations of large feature maps at each stage. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/Xrenya) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import PvtImageProcessor, PvtForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) processor = PvtImageProcessor.from_pretrained('Zetatech/pvt-tiny-224') model = PvtForImageClassification.from_pretrained('Zetatech/pvt-tiny-224') inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/pvt.html#). 
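To illustrate the point above about placing a linear layer on top of the pre-trained encoder, here is a hedged sketch of a linear probe using the bare `PvtModel`. Mean-pooling the final hidden states is a simplification used here for illustration; the original recipe classifies from the [CLS] token added in the last stage, and the class names assume a recent transformers release with PVT support.

```python
# Hedged sketch: linear probe on top of the frozen PVT encoder (mean-pooled features for simplicity).
import torch
from transformers import PvtImageProcessor, PvtModel

processor = PvtImageProcessor.from_pretrained("Zetatech/pvt-tiny-224")
backbone = PvtModel.from_pretrained("Zetatech/pvt-tiny-224")
backbone.eval()

num_labels = 10  # assumption: a small downstream dataset with 10 classes
head = torch.nn.Linear(backbone.config.hidden_sizes[-1], num_labels)

def classify(images):
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():  # keep the encoder frozen; only the head would be trained
        hidden = backbone(**inputs).last_hidden_state  # (batch, sequence, hidden)
    pooled = hidden.mean(dim=1)
    return head(pooled)
```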
## Training data The PVT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1,000 classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/whai362/PVT/blob/v2/classification/datasets.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225); a sketch of the equivalent transform is shown after the citation info below. ### BibTeX entry and citation info ```bibtex @inproceedings{wang2021pyramid, title={Pyramid vision transformer: A versatile backbone for dense prediction without convolutions}, author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, pages={568--578}, year={2021} } ```
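As referenced in the Preprocessing section above, here is a hedged sketch of the equivalent evaluation-time transform. The resize and normalization values are taken from that section; using torchvision itself is an assumption made for illustration, and the repository's own `datasets.py` remains the authoritative source.

```python
# Hedged sketch: evaluation-time preprocessing matching the values quoted above (224x224, ImageNet mean/std).
from torchvision import transforms

eval_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # scales pixel values to [0, 1]
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

# pixel_values = eval_transform(pil_image).unsqueeze(0)  # shape (1, 3, 224, 224), ready for the model
```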
Yntec/theallysMixIV-verisimilar
Yntec
2023-07-30T04:12:07Z
872
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "TheAlly", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-07-21T18:11:23Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - TheAlly --- # TheAlly's Mix IV: Verisimilar Original page: https://civitai.com/models/40369/theallys-mix-iv-verisimilar
ljk1291/test3
ljk1291
2024-06-08T21:16:07Z
872
0
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-03T19:18:41Z
Entry not found
Wiam/wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-ravdess-v8
Wiam
2023-08-31T01:20:18Z
872
3
transformers
[ "transformers", "pytorch", "wav2vec2", "audio-classification", "generated_from_trainer", "base_model:ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-08-31T01:04:31Z
--- license: apache-2.0 base_model: ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-ravdess-v8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-ravdess-v8 This model is a fine-tuned version of [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6778 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0178 | 0.15 | 25 | 1.8431 | 0.6181 | | 1.7082 | 0.31 | 50 | 1.5052 | 0.5833 | | 1.4444 | 0.46 | 75 | 1.3458 | 0.5972 | | 1.3888 | 0.62 | 100 | 1.2760 | 0.5972 | | 1.1819 | 0.77 | 125 | 1.1075 | 0.6667 | | 1.1615 | 0.93 | 150 | 1.0666 | 0.625 | | 1.1659 | 1.08 | 175 | 1.3450 | 0.5694 | | 0.9798 | 1.23 | 200 | 0.9866 | 0.6528 | | 0.9893 | 1.39 | 225 | 0.9311 | 0.6806 | | 0.9357 | 1.54 | 250 | 0.9783 | 0.6736 | | 0.7998 | 1.7 | 275 | 0.7924 | 0.7014 | | 0.7444 | 1.85 | 300 | 0.8980 | 0.6806 | | 0.7648 | 2.01 | 325 | 0.8994 | 0.7153 | | 0.607 | 2.16 | 350 | 0.9416 | 0.6597 | | 0.5551 | 2.31 | 375 | 0.7791 | 0.7431 | | 0.5495 | 2.47 | 400 | 0.7665 | 0.7431 | | 0.5498 | 2.62 | 425 | 0.8017 | 0.7222 | | 0.4887 | 2.78 | 450 | 0.6967 | 0.7639 | | 0.5308 | 2.93 | 475 | 0.6857 | 0.7569 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.0.1+cu118 - Datasets 2.14.4 - Tokenizers 0.13.3
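The card above documents only the fine-tuning run, so here is a hedged sketch of running inference with the resulting checkpoint through the standard audio-classification pipeline. The emotion label names are inherited from the base model's config, the audio path is a placeholder, and decoding a file path assumes ffmpeg is available.

```python
# Hedged sketch: emotion classification on a local audio file with the fine-tuned checkpoint.
# Assumes `pip install transformers` plus ffmpeg for decoding; the pipeline resamples to the model's 16 kHz rate.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="Wiam/wav2vec2-lg-xlsr-en-speech-emotion-recognition-finetuned-ravdess-v8",
)

scores = classifier("speech_sample.wav", top_k=3)  # "speech_sample.wav" is a placeholder path
for item in scores:
    print(f"{item['label']}: {item['score']:.3f}")
```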
dbalasub/ensemble-approach-t5-qa
dbalasub
2024-05-06T09:52:43Z
872
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2024-05-06T09:45:17Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bartowski/llama-3-70B-Instruct-abliterated-GGUF
bartowski
2024-05-16T05:52:46Z
872
4
transformers
[ "transformers", "gguf", "text-generation", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-16T03:06:22Z
--- license: llama3 license_name: llama3 license_link: LICENSE library_name: transformers quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of llama-3-70B-Instruct-abliterated Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization. Original model: https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [llama-3-70B-Instruct-abliterated-Q8_0.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/tree/main/llama-3-70B-Instruct-abliterated-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. | | [llama-3-70B-Instruct-abliterated-Q6_K.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/tree/main/llama-3-70B-Instruct-abliterated-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. | | [llama-3-70B-Instruct-abliterated-Q5_K_M.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. | | [llama-3-70B-Instruct-abliterated-Q5_K_S.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. | | [llama-3-70B-Instruct-abliterated-Q4_K_M.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [llama-3-70B-Instruct-abliterated-Q4_K_S.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. | | [llama-3-70B-Instruct-abliterated-IQ4_NL.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [llama-3-70B-Instruct-abliterated-IQ4_XS.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [llama-3-70B-Instruct-abliterated-Q3_K_L.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. | | [llama-3-70B-Instruct-abliterated-Q3_K_M.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. 
| | [llama-3-70B-Instruct-abliterated-IQ3_M.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [llama-3-70B-Instruct-abliterated-IQ3_S.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [llama-3-70B-Instruct-abliterated-Q3_K_S.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. | | [llama-3-70B-Instruct-abliterated-IQ3_XS.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [llama-3-70B-Instruct-abliterated-IQ3_XXS.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [llama-3-70B-Instruct-abliterated-Q2_K.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. | | [llama-3-70B-Instruct-abliterated-IQ2_M.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [llama-3-70B-Instruct-abliterated-IQ2_S.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. | | [llama-3-70B-Instruct-abliterated-IQ2_XS.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. | | [llama-3-70B-Instruct-abliterated-IQ2_XXS.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. | | [llama-3-70B-Instruct-abliterated-IQ1_M.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. | | [llama-3-70B-Instruct-abliterated-IQ1_S.gguf](https://huggingface.co/bartowski/llama-3-70B-Instruct-abliterated-GGUF/blob/main/llama-3-70B-Instruct-abliterated-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/llama-3-70B-Instruct-abliterated-GGUF --include "llama-3-70B-Instruct-abliterated-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False ``` If the model is bigger than 50GB, it will have been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/llama-3-70B-Instruct-abliterated-GGUF --include "llama-3-70B-Instruct-abliterated-Q8_0.gguf/*" --local-dir llama-3-70B-Instruct-abliterated-Q8_0 --local-dir-use-symlinks False ``` You can either specify a new local-dir (llama-3-70B-Instruct-abliterated-Q8_0) or download them all in place (./). ## Which file should I choose? A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to decide on. The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
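As a programmatic alternative to the huggingface-cli commands above, here is a hedged sketch using the huggingface_hub Python API; the single-file quant is the Q4_K_M file recommended in the table, and the multi-file case mirrors the `--include` pattern shown for Q8_0.

```python
# Hedged sketch: downloading quants from this repo with huggingface_hub instead of the CLI.
from huggingface_hub import hf_hub_download, snapshot_download

repo_id = "bartowski/llama-3-70B-Instruct-abliterated-GGUF"

# Single-file quant (the "pick one file" case from the table above).
q4_path = hf_hub_download(repo_id=repo_id, filename="llama-3-70B-Instruct-abliterated-Q4_K_M.gguf")
print("Downloaded to:", q4_path)

# Multi-part quant (Q8_0 is split because it exceeds 50GB), mirroring the --include pattern.
q8_dir = snapshot_download(
    repo_id=repo_id,
    allow_patterns=["llama-3-70B-Instruct-abliterated-Q8_0.gguf/*"],
    local_dir="llama-3-70B-Instruct-abliterated-Q8_0",
)
print("Q8_0 parts in:", q8_dir)
```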
duyntnet/Phi-SoSerious-Mini-V1-imatrix-GGUF
duyntnet
2024-05-22T18:40:39Z
872
1
transformers
[ "transformers", "gguf", "imatrix", "Phi-SoSerious-Mini-V1", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-05-22T17:03:42Z
--- license: other language: - en pipeline_tag: text-generation inference: false tags: - transformers - gguf - imatrix - Phi-SoSerious-Mini-V1 --- Quantizations of https://huggingface.co/concedo/Phi-SoSerious-Mini-V1 # From original readme ## Phi-SoSerious-Mini-V1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63cd4b6d1c8a5d1d7d76a778/eKsJlyzm30gxwAM-doIC-.png) ## Let's put a smile on that face! This is a finetune of https://huggingface.co/microsoft/Phi-3-mini-4k-instruct trained on a variant of the Kobble Dataset. Training was done in under 4 hours on a single Nvidia RTX 3090 GPU with QLoRA (LR 1.2e-4, rank 16, alpha 16, batch size 3, gradient acc. 3, 2048 ctx). You can obtain the GGUF quantization of this model here: https://huggingface.co/concedo/Phi-SoSerious-Mini-V1-GGUF ## Dataset and Objectives The Kobble Dataset is a semi-private aggregated dataset made from multiple online sources and web scrapes, augmented with some synthetic data. It contains content chosen and formatted specifically to work with KoboldAI software and Kobold Lite. The objective of this model was to produce a version of Phi-3-mini usable for storywriting, conversations and instructions, without an excessive tendency to refuse. #### Dataset Categories: - Instruct: Single turn instruct examples presented in the Alpaca format, with an emphasis on uncensored and unrestricted responses. - Chat: Two participant roleplay conversation logs in a multi-turn raw chat format that KoboldAI uses. - Story: Unstructured fiction excerpts, including literature containing various erotic and provocative content. <!-- prompt-template start --> ## Prompt template: Alpaca

```
### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->
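Since the card specifies the Alpaca prompt template but no runnable example, here is a hedged sketch that formats a prompt in that template and runs a GGUF build with llama-cpp-python. The GGUF filename is an unverified assumption; check the linked GGUF repository for the actual quant names.

```python
# Hedged sketch: run a GGUF build of Phi-SoSerious-Mini-V1 with the Alpaca template from this card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumption: a Q4_K_M quant exists in the GGUF repo linked above; verify the real filename there.
gguf_path = hf_hub_download(
    repo_id="concedo/Phi-SoSerious-Mini-V1-GGUF",
    filename="Phi-SoSerious-Mini-V1.Q4_K_M.gguf",  # placeholder-style name, not verified
)

llm = Llama(model_path=gguf_path, n_ctx=2048)

prompt = (
    "### Instruction:\n"
    "Write a two-sentence scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```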
RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf
RichardErkhov
2024-06-05T17:44:01Z
872
0
null
[ "gguf", "region:us" ]
null
2024-06-05T17:31:11Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) danbooruTagAutocomplete - GGUF - Model creator: https://huggingface.co/0Tick/ - Original model: https://huggingface.co/0Tick/danbooruTagAutocomplete/ | Name | Quant method | Size | | ---- | ---- | ---- | | [danbooruTagAutocomplete.Q2_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q2_K.gguf) | Q2_K | 0.06GB | | [danbooruTagAutocomplete.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ3_XS.gguf) | IQ3_XS | 0.07GB | | [danbooruTagAutocomplete.IQ3_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ3_S.gguf) | IQ3_S | 0.07GB | | [danbooruTagAutocomplete.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K_S.gguf) | Q3_K_S | 0.07GB | | [danbooruTagAutocomplete.IQ3_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ3_M.gguf) | IQ3_M | 0.07GB | | [danbooruTagAutocomplete.Q3_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K.gguf) | Q3_K | 0.07GB | | [danbooruTagAutocomplete.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K_M.gguf) | Q3_K_M | 0.07GB | | [danbooruTagAutocomplete.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q3_K_L.gguf) | Q3_K_L | 0.07GB | | [danbooruTagAutocomplete.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ4_XS.gguf) | IQ4_XS | 0.07GB | | [danbooruTagAutocomplete.Q4_0.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_0.gguf) | Q4_0 | 0.08GB | | [danbooruTagAutocomplete.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.IQ4_NL.gguf) | IQ4_NL | 0.08GB | | [danbooruTagAutocomplete.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_K_S.gguf) | Q4_K_S | 0.08GB | | [danbooruTagAutocomplete.Q4_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_K.gguf) | Q4_K | 0.08GB | | [danbooruTagAutocomplete.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_K_M.gguf) | Q4_K_M | 0.08GB | | [danbooruTagAutocomplete.Q4_1.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q4_1.gguf) | Q4_1 | 0.08GB | | [danbooruTagAutocomplete.Q5_0.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_0.gguf) | Q5_0 | 0.09GB | | [danbooruTagAutocomplete.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_K_S.gguf) | Q5_K_S | 0.09GB | | 
[danbooruTagAutocomplete.Q5_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_K.gguf) | Q5_K | 0.09GB | | [danbooruTagAutocomplete.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_K_M.gguf) | Q5_K_M | 0.09GB | | [danbooruTagAutocomplete.Q5_1.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q5_1.gguf) | Q5_1 | 0.09GB | | [danbooruTagAutocomplete.Q6_K.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q6_K.gguf) | Q6_K | 0.1GB | | [danbooruTagAutocomplete.Q8_0.gguf](https://huggingface.co/RichardErkhov/0Tick_-_danbooruTagAutocomplete-gguf/blob/main/danbooruTagAutocomplete.Q8_0.gguf) | Q8_0 | 0.12GB | Original model description: --- language: - en license: mit library_name: transformers tags: - generated_from_trainer datasets: - 0Tick/Danbooru-Random-Posts-Scrape metrics: - accuracy co2_eq_emissions: 100 pipeline_tag: text-generation base_model: distilgpt2 model-index: - name: danbooruTagAutocomplete results: [] --- ## Model description This is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) which is intended to be used with the [promptgen](https://github.com/AUTOMATIC1111/stable-diffusion-webui-promptgen) extension inside the AUTOMATIC1111 WebUI. It is trained on the raw tags of danbooru with underscores and spaces. Only posts with a rating higher than "General" were included in the dataset. # Training This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a dataset of the tags of 118k random posts of [danbooru](danbooru.donmai.us) . It achieves the following results on the evaluation set: - Loss: 3.6934 - Accuracy: 0.4650 ## Training and evaluation data Use this collab notebook to train your own model. Also used to train this model [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/0Tick/stable-diffusion-tools/blob/main/distilgpt2train.ipynb) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ## Intended uses & limitations Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)* > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.* > - *Entertainment: Creation of games, chat bots, and amusing generations.* Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser. 
#### Out-of-scope Uses OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
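To show how the tag-autocompletion behaviour described above is typically used, here is a hedged sketch that asks the original (non-GGUF) model to continue a partial tag string. Loading the original repo with transformers is an assumption, since this mirror only hosts the GGUF quants; the partial tag string and sampling settings are illustrative.

```python
# Hedged sketch: tag autocompletion with the original model (this repo only hosts GGUF quants of it).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="0Tick/danbooruTagAutocomplete")
set_seed(0)

partial_tags = "1girl, long_hair, "  # illustrative partial prompt
completions = generator(
    partial_tags,
    max_new_tokens=32,
    num_return_sequences=3,
    do_sample=True,
    temperature=0.9,
)
for c in completions:
    print(c["generated_text"])
```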
taeminlee/kogpt2
taeminlee
2023-06-29T05:17:27Z
871
1
transformers
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-03-02T23:29:05Z
# KoGPT2-Transformers KoGPT2 on Huggingface Transformers ### KoGPT2-Transformers - Makes [KoGPT2 (ver 1.0), released by SKT-AI](https://github.com/SKT-AI/KoGPT2), usable from [Transformers](https://github.com/huggingface/transformers). - **SKT-AI has released KoGPT2 2.0: https://huggingface.co/skt/kogpt2-base-v2/** ### Demo - Everyday-conversation chatbot: http://demo.tmkor.com:36200/dialo - Cosmetics review generation: http://demo.tmkor.com:36200/ctrl ### Example

```python
from transformers import GPT2LMHeadModel, PreTrainedTokenizerFast

model = GPT2LMHeadModel.from_pretrained("taeminlee/kogpt2")
tokenizer = PreTrainedTokenizerFast.from_pretrained("taeminlee/kogpt2")

input_ids = tokenizer.encode("안녕", add_special_tokens=False, return_tensors="pt")
output_sequences = model.generate(input_ids=input_ids, do_sample=True, max_length=100, num_return_sequences=3)
for generated_sequence in output_sequences:
    generated_sequence = generated_sequence.tolist()
    print("GENERATED SEQUENCE : {0}".format(tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)))
```
TheBloke/CodeFuse-CodeLlama-34B-GGUF
TheBloke
2023-09-27T12:49:21Z
871
20
transformers
[ "transformers", "gguf", "llama", "base_model:codefuse-ai/CodeFuse-CodeLlama-34B", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-14T21:54:08Z
--- license: other model_name: CodeFuse CodeLlama 34B base_model: codefuse-ai/CodeFuse-CodeLlama-34B inference: false model_creator: CodeFuse AI model_type: llama prompt_template: '<|role_start|>system<|role_end|>{system_message} <|role_start|>human<|role_end|>{prompt} <|role_start|>bot<|role_end|> ' quantized_by: TheBloke tasks: - code-generation --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeFuse CodeLlama 34B - GGUF - Model creator: [CodeFuse AI](https://huggingface.co/codefuse-ai) - Original model: [CodeFuse CodeLlama 34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B) <!-- description start --> ## Description This repo contains GGUF format model files for [CodeFuse AI's CodeFuse CodeLlama 34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF) * [CodeFuse AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: CodeFuse ``` <|role_start|>system<|role_end|>{system_message} <|role_start|>human<|role_end|>{prompt} <|role_start|>bot<|role_end|> ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [CodeFuse AI's CodeFuse CodeLlama 34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codefuse-codellama-34b.Q2_K.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes | | [codefuse-codellama-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss | | [codefuse-codellama-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss | | [codefuse-codellama-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss | | [codefuse-codellama-34b.Q4_0.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codefuse-codellama-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss | | [codefuse-codellama-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended | | [codefuse-codellama-34b.Q5_0.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codefuse-codellama-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended | | [codefuse-codellama-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended | | [codefuse-codellama-34b.Q6_K.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss | | [codefuse-codellama-34b.Q8_0.gguf](https://huggingface.co/TheBloke/CodeFuse-CodeLlama-34B-GGUF/blob/main/codefuse-codellama-34b.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/CodeFuse-CodeLlama-34B-GGUF and below it, a specific filename to download, such as: codefuse-codellama-34b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/CodeFuse-CodeLlama-34B-GGUF codefuse-codellama-34b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/CodeFuse-CodeLlama-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeFuse-CodeLlama-34B-GGUF codefuse-codellama-34b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m codefuse-codellama-34b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|role_start|>system<|role_end|>{system_message}\n<|role_start|>human<|role_end|>{prompt}\n<|role_start|>bot<|role_end|>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
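Only a ctransformers walkthrough follows below; as a hedged sketch, the equivalent with `llama-cpp-python` (assuming a GGUF-capable release of that library) looks roughly like this:

```python
# Rough llama-cpp-python sketch (not from the original card): load a GGUF file
# downloaded from this repo and complete a prompt in the CodeFuse role-tag format.
from llama_cpp import Llama

llm = Llama(
    model_path="codefuse-codellama-34b.q4_K_M.gguf",  # path to the downloaded file
    n_ctx=4096,        # sequence length
    n_gpu_layers=32,   # set to 0 if you have no GPU acceleration
)

prompt = "<|role_start|>human<|role_end|>write a python function of quick sort.<|role_start|>bot<|role_end|>"
output = llm(prompt, max_tokens=512, temperature=0.1)
print(output["choices"][0]["text"])
```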
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/CodeFuse-CodeLlama-34B-GGUF", model_file="codefuse-codellama-34b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: CodeFuse AI's CodeFuse CodeLlama 34B # Model Card for CodeFuse-CodeLlama-34B ![logo](LOGO.png) [[中文]](#chinese) [[English]](#english) <a id="english"></a> ## Model Description CodeFuse-CodeLlama-34B is a 34B Code-LLM finetuned by QLoRA of multiple code tasks(600k instrunctions/answers) on the base model CodeLlama-34b-Python. The context length of finetuning is 4K while it is able to be finetuned by 16k context if necessary. <br> ## News and Updates 🔥🔥🔥 CodeFuse-CodeLlama34B-MFT has achived 74.4% of pass@1 on HumanEval, which is SOTA at present. <br> ## Code Community **Homepage**: 🏡 https://github.com/codefuse-ai (**Please give us your support with a Star🌟 + Fork🚀 + Watch👀**) + If you wish to fine-tune the model yourself, you can visit ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨ + If you wish to deploy the model yourself, you can visit ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨ + If you wish to see a demo of the model, you can visit ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨ ## Performance | Model | HumanEval(pass@1) | Date | |:----------------------------|:-----------------:|:-------:| | **CodeFuse-CodeLlama-34B** | **74.4%** | 2023.9 | | WizardCoder-Python-34B-V1.0 | 73.2% | 2023.8 | | GPT-4(zero-shot) | 67.0% | 2023.3 | | PanGu-Coder2 15B | 61.6% | 2023.8 | | CodeLlama-34b-Python | 53.7% | 2023.8 | | CodeLlama-34b | 48.8% | 2023.8 | | GPT-3.5(zero-shot) | 48.1% | 2022.11 | | OctoCoder | 46.2% | 2023.8 | | StarCoder-15B | 33.6% | 2023.5 | | LLaMA 2 70B(zero-shot) | 29.9% | 2023.7 | <br> ## Requirements * python>=3.8 * pytorch>=2.0.0 * transformers==4.32.0 * Sentencepiece * CUDA 11.4 <br> ## Inference String Format The inference string is a concatenated string formed by combining conversation data(system, human and bot contents) in the training data format. It is used as input during the inference process. Here is an example format of the concatenated string: ```python """ <|role_start|>system<|role_end|>System instruction <|role_start|>human<|role_end|>Human 1st round input <|role_start|>bot<|role_end|>Bot 1st round output</s> <|role_start|>human<|role_end|>Human 2nd round input <|role_start|>bot<|role_end|>Bot 2nd round output</s> ... ... ... <|role_start|>human<|role_end|>Human nth round input <|role_start|>bot<|role_end|>{Bot output to be genreated}</s> """ ``` When applying inference, you always make your input string end with "<|role_start|>bot<|role_end|>" to ask the model generating answers. 
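As an illustration (an addition, not part of the original card), a small helper that assembles a multi-turn history into this format might look like the sketch below. The Quickstart that follows concatenates the role tags directly, so no extra whitespace is inserted between turns here either.

```python
# Illustrative helper (an assumption, not from the original card): build the
# CodeFuse inference string from prior (human, bot) turns plus a new question.
def build_codefuse_prompt(history, user_message, system_message=""):
    def tag(role):
        return f"<|role_start|>{role}<|role_end|>"
    parts = []
    if system_message:
        parts.append(tag("system") + system_message)
    for human, bot in history:
        parts.append(tag("human") + human)
        parts.append(tag("bot") + bot + "</s>")
    parts.append(tag("human") + user_message)
    parts.append(tag("bot"))  # the model generates its answer from here
    return "".join(parts)

print(build_codefuse_prompt([], "write a python function of quick sort."))
```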
## Quickstart ```bash pip install -r requirements.txt ``` ```python import torch from transformers import ( AutoTokenizer, AutoModelForCausalLM, ) tokenizer = AutoTokenizer.from_pretrained(mode_name_or_path, trust_remote_code=True, use_fast=False, legacy=False) tokenizer.padding_side = "left" tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<unk>") tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("</s>") # try 4bit loading if cuda memory not enough model = AutoModelForCausalLM.from_pretrained(mode_name_or_path, trust_remote_code=True, load_in_4bit=False, device_map="auto", torch_dtype=torch.bfloat16) model.eval() HUMAN_ROLE_START_TAG = "<|role_start|>human<|role_end|>" BOT_ROLE_START_TAG = "<|role_start|>bot<|role_end|>" text = f"{HUMAN_ROLE_START_TAG}write a python function of quick sort.{BOT_ROLE_START_TAG}" inputs = tokenizer(text, return_tensors='pt', padding=True, add_special_tokens=False).to("cuda") outputs = model.generate( inputs=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=512, top_p=0.95, temperature=0.1, do_sample=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id ) gen_text = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(gen_text) ``` ## MD5 We notice that the file may be corrupted during transfer process. Please check MD5 value before use. | Model File | MD5 Value | |:---------------------------------|:--------------------------------:| | pytorch_model-00001-of-00007.bin | 8d544b1bcb3449934184d4141137329c | | pytorch_model-00002-of-00007.bin | 9d5dbb30911e48a42fb6d0fcabb322a4 | | pytorch_model-00003-of-00007.bin | b0d4aecee0457d9332005a187e1fffed | | pytorch_model-00004-of-00007.bin | 5c7e002de5eab77d0194a2b0f6de0c24 | | pytorch_model-00005-of-00007.bin | d22a511aa26b5b17117b665a877490ab | | pytorch_model-00006-of-00007.bin | a5c28ac277fac07d16dd66537e54d109 | | pytorch_model-00007-of-00007.bin | a967e2c6195477b7407089c0bffa2d53 | <a id="chinese"></a> ## 模型简介 CodeFuse-CodeLlama34B-MFT 是一个通过QLoRA对基座模型CodeLlama-34b-Python进行多代码任务微调的代码大模型。模型微调采用了4k上下文。如果有必要,可以扩展到16k。 <br> ## 新闻 🔥🔥🔥 CodeFuse-CodeLlama34B-MFT模型在HumanEval pass@1上可以达到74.4%, 为当前开源SOTA。 <br> ## 代码社区 **大本营**: 🏡 https://github.com/codefuse-ai (**欢迎为我们的项目一键三连 Star🌟 + Fork🚀 + Watch👀**) + 如果您想自己微调该模型,可以访问 ✨[MFTCoder](https://github.com/codefuse-ai/MFTCoder)✨✨ + 如果您想自己部署该模型,可以访问 ✨[FasterTransformer4CodeFuse](https://github.com/codefuse-ai/FasterTransformer4CodeFuse)✨✨ + 如果您想观看该模型示例,可以访问 ✨[CodeFuse Demo](https://github.com/codefuse-ai/codefuse)✨✨ ## 评测表现(代码) | 模型 | HumanEval(pass@1) | 日期 | |:----------------------------|:-----------------:|:-------:| | **CodeFuse-CodeLlama-34B** | **74.4%** | 2023.9 | | WizardCoder-Python-34B-V1.0 | 73.2% | 2023.8 | | GPT-4(zero-shot) | 67.0% | 2023.3 | | PanGu-Coder2 15B | 61.6% | 2023.8 | | CodeLlama-34b-Python | 53.7% | 2023.8 | | CodeLlama-34b | 48.8% | 2023.8 | | GPT-3.5(zero-shot) | 48.1% | 2022.11 | | OctoCoder | 46.2% | 2023.8 | | StarCoder-15B | 33.6% | 2023.5 | | LLaMA 2 70B(zero-shot) | 29.9% | 2023.7 | <br> ## Requirements * python>=3.8 * pytorch>=2.0.0 * transformers==4.32.0 * CUDA 11.4 <br> ## 推理数据格式 推理数据为模型在训练数据格式下拼接的字符串形式,它也是推理时输入prompt拼接的方式: ```python """ <|role_start|>system<|role_end|>这是System指令 <|role_start|>human<|role_end|>这是第1轮用户输入的问题 <|role_start|>bot<|role_end|>这是第1轮模型生成的内容</s> <|role_start|>human<|role_end|>这是第2轮用户输入的问题 <|role_start|>bot<|role_end|>这是第2轮模型生成的内容</s> ... ... ... 
<|role_start|>human<|role_end|>这是第n轮用户输入的问题 <|role_start|>bot<|role_end|>{模型现在要生成的内容}</s> """ ``` 推理时,请确保拼接的prompt字符串以"<|role_start|>bot<|role_end|>"结尾,引导模型生成回答。 ## 快速使用 ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, ) tokenizer = AutoTokenizer.from_pretrained(mode_name_or_path, trust_remote_code=True, use_fast=False, legacy=False) tokenizer.padding_side = "left" tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<unk>") tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("</s>") # 如果显存不够,可以考虑量化加载 model = AutoModelForCausalLM.from_pretrained(mode_name_or_path, trust_remote_code=True, load_in_4bit=False, device_map="auto", torch_dtype=torch.bfloat16) model.eval() HUMAN_ROLE_START_TAG = "<|role_start|>human<|role_end|>" BOT_ROLE_START_TAG = "<|role_start|>bot<|role_end|>" text = f"{HUMAN_ROLE_START_TAG}请用C++实现求解第n个斐波那契数{BOT_ROLE_START_TAG}" inputs = tokenizer(text, return_tensors='pt', padding=True, add_special_tokens=False).to("cuda") outputs = model.generate( inputs=inputs["input_ids"], attention_mask=inputs["attention_mask"], max_new_tokens=512, top_p=0.95, temperature=0.1, do_sample=True, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id ) gen_text = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(gen_text) ``` ## MD5 我们发现模型文件可能会在传输过程中损坏,使用前请检查文件MD5值。 | 模型文件 | MD5值 | |:---------------------------------|:--------------------------------:| | pytorch_model-00001-of-00007.bin | 8d544b1bcb3449934184d4141137329c | | pytorch_model-00002-of-00007.bin | 9d5dbb30911e48a42fb6d0fcabb322a4 | | pytorch_model-00003-of-00007.bin | b0d4aecee0457d9332005a187e1fffed | | pytorch_model-00004-of-00007.bin | 5c7e002de5eab77d0194a2b0f6de0c24 | | pytorch_model-00005-of-00007.bin | d22a511aa26b5b17117b665a877490ab | | pytorch_model-00006-of-00007.bin | a5c28ac277fac07d16dd66537e54d109 | | pytorch_model-00007-of-00007.bin | a967e2c6195477b7407089c0bffa2d53 | <!-- original-model-card end -->
OpenGVLab/pvt_v2_b0
OpenGVLab
2024-03-12T05:27:22Z
871
2
transformers
[ "transformers", "safetensors", "pvt_v2", "image-classification", "arxiv:2106.13797", "arxiv:2105.15203", "arxiv:2201.07436", "arxiv:2010.04159", "arxiv:2109.03814", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-02-25T14:14:35Z
---
license: apache-2.0
---

# PVTv2

This is the Hugging Face PyTorch implementation of the [PVTv2](https://arxiv.org/abs/2106.13797) model.

## Model Description

The Pyramid Vision Transformer v2 (PVTv2) is a powerful, lightweight hierarchical transformer backbone for vision tasks. PVTv2 infuses convolution operations into its transformer layers, giving it CNN-like properties that help it learn image data efficiently. This mix-transformer architecture requires no added positional embeddings and produces multi-scale feature maps, which are known to be beneficial for dense and fine-grained prediction tasks.

Vision models using PVTv2 as a backbone:

1. [Segformer](https://arxiv.org/abs/2105.15203) for Semantic Segmentation.
2. [GLPN](https://arxiv.org/abs/2201.07436) for Monocular Depth.
3. [Deformable DETR](https://arxiv.org/abs/2010.04159) for 2D Object Detection.
4. [Panoptic Segformer](https://arxiv.org/abs/2109.03814) for Panoptic Segmentation.
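The card stops at the architecture description. As a hedged sketch (assuming a recent `transformers` release that ships the `pvt_v2` architecture and that this checkpoint includes ImageNet-1k classification weights), usage would look roughly like:

```python
# Minimal sketch (assumptions: transformers includes the pvt_v2 model type and
# this checkpoint ships classifier weights; verify before relying on it).
from urllib.request import urlopen
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

image = Image.open(urlopen(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
))

processor = AutoImageProcessor.from_pretrained("OpenGVLab/pvt_v2_b0")
model = AutoModelForImageClassification.from_pretrained("OpenGVLab/pvt_v2_b0")

with torch.no_grad():
    logits = model(**processor(images=image, return_tensors="pt")).logits

print(model.config.id2label[logits.argmax(-1).item()])
```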
rhysjones/Phi-3-mini-mango-1
rhysjones
2024-05-26T17:04:53Z
871
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "nlp", "code", "conversational", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-27T11:07:49Z
--- license: mit license_link: https://huggingface.co/rhysjones/Phi-3-mini-mango-1/resolve/main/LICENSE language: - en pipeline_tag: text-generation tags: - nlp - code widget: - messages: - role: user content: Can you provide ways to eat combinations of bananas and dragonfruits? --- ## Model Summary The Phi-3-mini-mango-1 is an instruct finetune of [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) with 4K context and 3.8B parameters. It is a first cut of finetuning Phi-3 (which is a great model!) to explore its properties and behaviour. More to follow. You will need to update your local transformers to the latest version to run this model (4.41.0 or above): ``` pip install -U transformers ``` ### GGUF Versions There are GGUF format model files available at [rhysjones/Phi-3-mini-mango-1-GGUF](https://huggingface.co/rhysjones/Phi-3-mini-mango-1-GGUF) ### Chat Format Phi-3-mini-mango uses the same chat format as the original Phi-3 Mini-4K-Instruct model. Note that it does not use a system prompt, instead place any specific instructions as part of the first <|user|> prompt. You can provide the prompt as a question with a generic template as follow: ```markdown <|user|>\nQuestion <|end|>\n<|assistant|> ``` For example: ```markdown <|user|> How to explain Internet for a medieval knight?<|end|> <|assistant|> ``` where the model generates the text after `<|assistant|>` . In case of few-shots prompt, the prompt can be formatted as the following: ```markdown <|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Sample inference code This code snippets show how to get quickly started with running the model on a GPU: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model = AutoModelForCausalLM.from_pretrained( "microsoft/Phi-3-mini-4k-instruct", device_map="cuda", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct") messages = [ {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. 
Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` *Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.* The model shares the same limitations as the base model; see [Phi-3-mini-4k-instruct's Responsible AI considerations](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct#responsible-ai-considerations).
BeaverAI/Cream-Phi-3-14B-v1c-GGUF
BeaverAI
2024-05-23T11:17:36Z
871
2
null
[ "gguf", "region:us" ]
null
2024-05-23T07:15:10Z
For the final version of CreamPhi 3 14B v1, go to https://huggingface.co/TheDrummer/Cream-Phi-3-14B-v1 For finetuning details, please refer to https://huggingface.co/BeaverAI/Cream-Phi-3-14B-v1c
textattack/albert-base-v2-SST-2
textattack
2020-07-06T16:32:15Z
870
0
transformers
[ "transformers", "pytorch", "albert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
## TextAttack Model Card This `albert-base-v2` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 32, a learning rate of 3e-05, and a maximum sequence length of 64. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.9254587155963303, as measured by the eval set accuracy, found after 2 epochs. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
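The card does not show inference code; a minimal sketch with the standard `transformers` sequence-classification classes might look like the following. The label ordering is an assumption (index 1 is usually the positive class for SST-2 fine-tunes), so check the checkpoint's config before relying on it.

```python
# Minimal sketch (assumption: standard transformers sequence-classification loading;
# verify the label mapping in the checkpoint's config).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("textattack/albert-base-v2-SST-2")
model = AutoModelForSequenceClassification.from_pretrained("textattack/albert-base-v2-SST-2")

inputs = tokenizer("a gripping, beautifully shot film", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)

print(probs)  # two columns, typically [negative, positive] for SST-2 fine-tunes
```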
timm/efficientnet_es_pruned.in1k
timm
2023-04-27T21:12:15Z
870
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2003.02838", "arxiv:1905.11946", "arxiv:2002.08258", "license:apache-2.0", "region:us" ]
image-classification
2022-12-12T23:58:15Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for efficientnet_es_pruned.in1k

An EfficientNet-EdgeTPU image classification model. Knapsack pruned from existing weights.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 5.4
  - GMACs: 1.8
  - Activations (M): 8.7
  - Image size: 224 x 224
- **Papers:**
  - Accelerator-aware Neural Network Design using AutoML: https://arxiv.org/abs/2003.02838
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
  - Knapsack Pruning with Inner Distillation: https://arxiv.org/abs/2002.08258
- **Dataset:** ImageNet-1k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('efficientnet_es_pruned.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientnet_es_pruned.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 24, 112, 112])
    #  torch.Size([1, 32, 56, 56])
    #  torch.Size([1, 48, 28, 28])
    #  torch.Size([1, 144, 14, 14])
    #  torch.Size([1, 192, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'efficientnet_es_pruned.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @article{gupta2020accelerator, title={Accelerator-aware neural network design using automl}, author={Gupta, Suyog and Akin, Berkin}, journal={arXiv preprint arXiv:2003.02838}, year={2020} } ``` ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @article{aflalo2020knapsack, title={Knapsack pruning with inner distillation}, author={Aflalo, Yonathan and Noy, Asaf and Lin, Ming and Friedman, Itamar and Zelnik, Lihi}, journal={arXiv preprint arXiv:2002.08258}, year={2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
TheBloke/OpenOrca-Platypus2-13B-GGUF
TheBloke
2023-09-27T12:47:55Z
870
17
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "arxiv:2308.07317", "arxiv:2306.02707", "arxiv:2301.13688", "base_model:Open-Orca/OpenOrca-Platypus2-13B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-09-05T19:50:00Z
--- language: - en license: cc-by-nc-4.0 library_name: transformers datasets: - garage-bAInd/Open-Platypus - Open-Orca/OpenOrca model_name: OpenOrca Platypus2 13B base_model: Open-Orca/OpenOrca-Platypus2-13B inference: false model_creator: Open-Orca model_type: llama pipeline_tag: text-generation prompt_template: '### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # OpenOrca Platypus2 13B - GGUF - Model creator: [Open-Orca](https://huggingface.co/Open-Orca) - Original model: [OpenOrca Platypus2 13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [Open-Orca's OpenOrca Platypus2 13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF) * [Open-Orca's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-InstructOnly ``` ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Open-Orca's OpenOrca Platypus2 13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [openorca-platypus2-13b.Q2_K.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [openorca-platypus2-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [openorca-platypus2-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [openorca-platypus2-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [openorca-platypus2-13b.Q4_0.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [openorca-platypus2-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [openorca-platypus2-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [openorca-platypus2-13b.Q5_0.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [openorca-platypus2-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [openorca-platypus2-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [openorca-platypus2-13b.Q6_K.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [openorca-platypus2-13b.Q8_0.gguf](https://huggingface.co/TheBloke/OpenOrca-Platypus2-13B-GGUF/blob/main/openorca-platypus2-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/OpenOrca-Platypus2-13B-GGUF and below it, a specific filename to download, such as: openorca-platypus2-13b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/OpenOrca-Platypus2-13B-GGUF openorca-platypus2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/OpenOrca-Platypus2-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/OpenOrca-Platypus2-13B-GGUF openorca-platypus2-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m openorca-platypus2-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
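The section below only shows ctransformers; a rough `llama-cpp-python` equivalent (a sketch, assuming a GGUF-capable release of that library) using the Alpaca-InstructOnly template from above:

```python
# Rough llama-cpp-python sketch (not from the original card): run a downloaded
# GGUF file with the Alpaca-InstructOnly prompt template shown above.
from llama_cpp import Llama

llm = Llama(
    model_path="openorca-platypus2-13b.q4_K_M.gguf",  # path to the downloaded file
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 without GPU acceleration
)

prompt = "### Instruction:\n\nExplain the difference between a list and a tuple in Python.\n\n### Response:\n"
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```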
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/OpenOrca-Platypus2-13B-GGUF", model_file="openorca-platypus2-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Open-Orca's OpenOrca Platypus2 13B <p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p> ![Platty](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypusMerge.jpg) # OpenOrca-Platypus2-13B OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B). This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards! Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners. We will also give sneak-peak announcements on our Discord, which you can find here: https://AlignmentLab.ai # Evaluation ## HuggingFace Leaderboard Performance ![HF Leaderboard](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BHFLeaderboard.webp) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 59.5 | | ARC (25-shot) | 62.88 | | HellaSwag (10-shot) | 83.19 | | TruthfulQA (0-shot) | 52.69 | | Avg. | 64.56 | We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results. ## AGIEval Performance We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **112%** of the base model's performance on AGI Eval, averaging **0.463**. A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance. ![OpenOrca-Platypus2-13B AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BAGIEval.webp "AGIEval Performance") ## BigBench-Hard Performance We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**. 
![OpenOrca-Platypus2-13B BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BBigBenchHard.webp "BigBench-Hard Performance") # Model Details * **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca * **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Lllama 2 transformer architecture. * **Language(s)**: English * **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) * **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial # Prompting ## Prompt Template for base Platypus2-13B ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information. # Training ## Training Datasets `garage-bAInd/Platypus2-13B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information. `Open-Orca/OpenOrcaxOpenChat-Preview2-13B` trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca). ## Training Procedure `Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo. # Supplemental ## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval) Install LM Evaluation Harness: ``` # clone repository git clone https://github.com/EleutherAI/lm-evaluation-harness.git # change to repo directory cd lm-evaluation-harness # check out the correct commit git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463 # install pip install -e . ``` Each task was evaluated on a single A100-80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2 and any fine-tuned varient's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ # Citations ```bibtex @software{hunterlee2023orcaplaty1 title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model}, author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}, } @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } @software{OpenOrcaxOpenChatPreview2, title = {OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}, } @software{openchat, title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}}, author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling}, doi = {10.5281/zenodo.8105775}, url = {https://github.com/imoneoi/openchat}, version = {pre-release}, year = {2023}, month = {7}, } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan 
Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ``` <!-- original-model-card end -->
LiteLLMs/Llama-3-8B-Instruct-Gradient-1048k-GGUF
LiteLLMs
2024-05-06T09:52:20Z
870
1
null
[ "gguf", "meta", "llama-3", "GGUF", "text-generation", "en", "license:llama3", "region:us" ]
text-generation
2024-04-30T09:08:36Z
--- language: - en license: llama3 tags: - meta - llama-3 - GGUF pipeline_tag: text-generation quantized_by: andrijdavid --- # Llama-3-8B-Instruct-Gradient-1048k-GGUF - Original model: [Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) <!-- description start --> ## Description This repo contains GGUF format model files for [Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration. * [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications​ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling. * [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration. * [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection. * [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use. * [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server. * [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents. <!-- README_GGUF.md-about-gguf end --> <!-- compatibility_gguf start --> ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single folder. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: LiteLLMs/Llama-3-8B-Instruct-Gradient-1048k-GGUF and below it, a specific filename to download, such as: Q4_0/Q4_0-00001-of-00009.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download LiteLLMs/Llama-3-8B-Instruct-Gradient-1048k-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download LiteLLMs/Llama-3-8B-Instruct-Gradient-1048k-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install huggingface_hub[hf_transfer] ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download LiteLLMs/Llama-3-8B-Instruct-Gradient-1048k-GGUF Q4_0/Q4_0-00001-of-00009.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Q4_0/Q4_0-00001-of-00009.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<PROMPT>" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Q4_0/Q4_0-00001-of-00009.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<PROMPT>", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Q4_0/Q4_0-00001-of-00009.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas."
} ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer end --> <!-- original-model-card start --> # Original model card: Llama-3-8B-Instruct-Gradient-1048k <a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> # Llama-3 8B Gradient Instruct 1048k Join our custom agent and long context (262k-1M+) waitlist: https://forms.gle/L6TDY7dozx8TuoUv7 Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us a message [email protected]. For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab) [Join our Discord](https://discord.com/invite/2QVy2qt2mf) This model extends LLama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data. **Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6585dc9be92bc5f258156bd6/-qaI__83ksClzoJzlqZjq.png) **Approach:** - [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base - NTK-aware interpolation [1] to initialize an optimal schedule for RoPE theta, followed by empirical RoPE theta optimization - Progressive training on increasing context lengths, similar to [Large World Model](https://huggingface.co/LargeWorldModel) [2] (See details below) **Infra:** We build on top of the EasyContext Blockwise RingAttention library [3] to scalably and efficiently train on contexts up to 1048k tokens on [Crusoe Energy](https://huggingface.co/crusoeai) high performance L40S cluster. Notably, we layered parallelism on top of Ring Attention with a custom network topology to better leverage large GPU clusters in the face of network bottlenecks from passing many KV blocks between devices. This gave us a 33x speedup in model training (compare 524k and 1048k to 65k and 262k in the table below). **Data:** For training data, we generate long contexts by augmenting [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). We also fine-tune on a chat dataset based on UltraChat [4], following a similar recipe for data augmentation to [2]. **Progressive Training Details:** | | 65K | 262K | 524k | 1048k | | -- | -- | - # Base Model ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. 
Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. 
#### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. 
The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. 
Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). 
#### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. 
Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta 
Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos <!-- original-model-card end -->
mmnga/rinna-llama-3-youko-8b-gguf
mmnga
2024-05-01T15:11:21Z
870
6
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:llama3", "region:us" ]
null
2024-05-01T14:17:53Z
--- license: llama3 language: - en - ja datasets: - TFMC/imatrix-dataset-for-japanese-llm --- # rinna-llama-3-youko-8b-gguf This is a GGUF-format conversion of [llama-3-youko-8b, published by rinna](https://huggingface.co/rinna/llama-3-youko-8b). The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm). Model list (GGUF versions): [mmnga/rinna-llama-3-youko-8b-gguf](https://huggingface.co/mmnga/rinna-llama-3-youko-8b-gguf) [mmnga/rinna-nekomata-7b-instruction-gguf](https://huggingface.co/mmnga/rinna-nekomata-7b-instruction-gguf) [mmnga/rinna-nekomata-14b-instruction-gguf](https://huggingface.co/mmnga/rinna-nekomata-14b-instruction-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'rinna-llama-3-youko-8b-q4_0.gguf' -n 128 -p '西田幾多郎は、' ```
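For use from Python, here is a minimal llama-cpp-python sketch along the same lines as the CLI example above; the quantized file name and prompt are taken from that example, while `n_ctx` and `n_gpu_layers` are illustrative values rather than settings recommended by the model author.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load the same quantized file used in the ./main example above.
# n_ctx and n_gpu_layers are illustrative; tune them for your hardware.
llm = Llama(
    model_path="rinna-llama-3-youko-8b-q4_0.gguf",
    n_ctx=2048,       # context window for this session
    n_gpu_layers=0,   # set > 0 to offload layers when built with GPU support
)

# Same continuation-style prompt as the CLI example.
output = llm("西田幾多郎は、", max_tokens=128)
print(output["choices"][0]["text"])
```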
dbalasub/ensemble-triple-qa
dbalasub
2024-05-10T01:50:33Z
870
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2024-05-10T01:27:43Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
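Since this card is still a template, the following is only a generic sketch of how a T5-based text2text-generation checkpoint such as this one is typically loaded with 🤗 Transformers; the example prompt is hypothetical, as the expected input format is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "dbalasub/ensemble-triple-qa"

# Generic seq2seq loading; applies to any T5-style text2text-generation checkpoint.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical QA-style input; the real prompt format is not documented in this card.
inputs = tokenizer("question: Who wrote Hamlet?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```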
CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF
CHE-72
2024-06-22T18:48:09Z
870
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen1.5-4B-Chat", "license:other", "region:us" ]
text-generation
2024-06-22T18:47:54Z
--- base_model: Qwen/Qwen1.5-4B-Chat language: - en license: other license_name: tongyi-qianwen-research license_link: https://huggingface.co/Qwen/Qwen1.5-4B-Chat/blob/main/LICENSE pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF This model was converted to GGUF format from [`Qwen/Qwen1.5-4B-Chat`](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-4B-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Qwen1.5-4B-Chat-Q6_K-GGUF --hf-file qwen1.5-4b-chat-q6_k.gguf -c 2048 ```
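Once `llama-server` is running as shown above, it can also be queried over HTTP. The sketch below assumes the server's default address (http://localhost:8080) and its OpenAI-compatible chat completions route; adjust the URL and sampling parameters to match your setup.

```python
import json
from urllib.request import Request, urlopen

# Assumes llama-server was started as in the example above and is listening on
# its default address; change the URL if you passed --host/--port.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "messages": [
        {"role": "user", "content": "The meaning to life and the universe is"}
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}
req = Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```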
neulab/omnitab-large
neulab
2022-11-27T16:24:38Z
869
2
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "tapex", "table-question-answering", "en", "dataset:wikitablequestions", "arxiv:2207.03637", "autotrain_compatible", "endpoints_compatible", "region:us" ]
table-question-answering
2022-11-23T20:33:49Z
--- language: en tags: - tapex - table-question-answering datasets: - wikitablequestions --- # OmniTab OmniTab is a table-based QA model proposed in [OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering](https://arxiv.org/pdf/2207.03637.pdf). The original Github repository is [https://github.com/jzbjyb/OmniTab](https://github.com/jzbjyb/OmniTab). ## Description `neulab/omnitab-large` (based on BART architecture) is initialized with `microsoft/tapex-large` and continuously pretrained on natural and synthetic data. ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM import pandas as pd tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large") model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large") data = { "year": [1896, 1900, 1904, 2004, 2008, 2012], "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"] } table = pd.DataFrame.from_dict(data) query = "In which year did beijing host the Olympic Games?" encoding = tokenizer(table=table, query=query, return_tensors="pt") outputs = model.generate(**encoding) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) # [' 2008'] ``` ## Reference ```bibtex @inproceedings{jiang-etal-2022-omnitab, title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering", author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", } ```
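As a small usage variation on the example above, the same table can be queried with several questions in a loop; the second question below is our own illustration and does not come from the original card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd

tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large")

# Same table as in the example above.
table = pd.DataFrame.from_dict({
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
})

# Ask several questions against the same table.
for query in [
    "In which year did beijing host the Olympic Games?",
    "Which city hosted the Olympic Games in 1904?",  # illustrative extra question
]:
    encoding = tokenizer(table=table, query=query, return_tensors="pt")
    outputs = model.generate(**encoding)
    print(query, "->", tokenizer.batch_decode(outputs, skip_special_tokens=True))
```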
timm/regnety_040.ra3_in1k
timm
2024-02-10T23:33:26Z
869
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2003.13678", "license:apache-2.0", "region:us" ]
image-classification
2023-03-21T06:38:50Z
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for regnety_040.ra3_in1k A RegNetY-4GF image classification model. Trained on ImageNet-1k by Ross Wightman in `timm`. The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 20.6 - GMACs: 4.0 - Activations (M): 12.3 - Image size: train = 224 x 224, test = 288 x 288 - **Papers:** - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnety_040.ra3_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_040.ra3_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 128, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1088, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_040.ra3_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1088, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset
and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | 
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 
|8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | |[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming 
He and Piotr Doll{\'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
CiroN2022/ascii-art
CiroN2022
2023-08-23T11:50:21Z
869
10
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-08-23T11:50:18Z
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: ascii_art widget: - text: ascii_art --- # Ascii Art ![Image 0](2080723.jpeg) ## Image examples for the model: ![Image 1](2080769.jpeg) ![Image 2](2080755.jpeg) ![Image 3](2080754.jpeg) ![Image 4](2080739.jpeg) ![Image 5](2080747.jpeg) ![Image 6](2080751.jpeg) ![Image 7](2080728.jpeg) ![Image 8](2080771.jpeg) ![Image 9](2080773.jpeg)
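The card above provides no usage code, so the following is a minimal, untested sketch of applying this LoRA on top of the SDXL base model with `diffusers`. It assumes the LoRA weights load directly from this repo via `load_lora_weights` (an explicit `weight_name` may be required), and it uses the `ascii_art` trigger prompt declared in the card metadata; the example prompt itself is invented.

```python
# Sketch: SDXL + this ASCII-art LoRA via diffusers (assumptions noted above).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# May need weight_name="..." if the repo contains several weight files.
pipe.load_lora_weights("CiroN2022/ascii-art")

# "ascii_art" is the instance/trigger prompt from the card metadata.
image = pipe("ascii_art portrait of a samurai, monochrome terminal style").images[0]
image.save("ascii_art_sample.png")
```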
mistral-community/Mixtral-8x22B-Instruct-v0.1-4bit
mistral-community
2024-07-01T08:56:51Z
869
10
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-17T19:14:50Z
--- license: apache-2.0 --- # Model Card for Mixtral-8x22B-Instruct-v0.1 The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1). Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). ## Run the model ```python from transformers import AutoModelForCausalLM from mistral_common.protocol.instruct.messages import ( AssistantMessage, UserMessage, ) from mistral_common.protocol.instruct.tool_calls import ( Tool, Function, ) from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.normalize import ChatCompletionRequest device = "cuda" # the device to load the model onto tokenizer_v3 = MistralTokenizer.v3() mistral_query = ChatCompletionRequest( tools=[ Tool( function=Function( name="get_current_weather", description="Get the current weather", parameters={ "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA", }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location.", }, }, "required": ["location", "format"], }, ) ) ], messages=[ UserMessage(content="What's the weather like today in Paris"), ], model="test", ) encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer decoded = sp_tokenizer.decode(generated_ids[0]) print(decoded) ``` # Instruct tokenizer The HuggingFace tokenizer included in this release should match our own. To compare: `pip install mistral-common` ```py from mistral_common.protocol.instruct.messages import ( AssistantMessage, UserMessage, ) from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.normalize import ChatCompletionRequest from transformers import AutoTokenizer tokenizer_v3 = MistralTokenizer.v3() mistral_query = ChatCompletionRequest( messages=[ UserMessage(content="How many experts ?"), AssistantMessage(content="8"), UserMessage(content="How big ?"), AssistantMessage(content="22B"), UserMessage(content="Noice 🎉 !"), ], model="test", ) hf_messages = mistral_query.model_dump()['messages'] tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1') tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True) assert tokenized_hf == tokenized_mistral ``` # Function calling and special tokens This tokenizer includes more special tokens, related to function calling : - [TOOL_CALLS] - [AVAILABLE_TOOLS] - [/AVAILABLE_TOOLS] - [TOOL_RESULT] - [/TOOL_RESULTS] If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](https://github.com/mistralai/mistral-common/blob/main/src/mistral_common/tokens/tokenizers/sentencepiece.py#L299). 
# The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
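Because this repository hosts the weights already quantized to 4-bit with bitsandbytes, a simpler loading path than the full-precision example above is possible. The sketch below is an untested outline: it assumes a recent `transformers` with `bitsandbytes` installed and enough GPU memory for the 4-bit weights, and the generation settings are illustrative.

```python
# Sketch: load the pre-quantized 4-bit checkpoint directly and chat with it.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mistral-community/Mixtral-8x22B-Instruct-v0.1-4bit"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "What's the weather like today in Paris?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```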
amara16/distilbert-extractive-qa-project
amara16
2024-05-09T22:05:07Z
869
0
transformers
[ "transformers", "safetensors", "distilbert", "question-answering", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2024-05-09T22:01:31Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
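Since the getting-started section above is left unfilled, here is a purely hypothetical sketch of how an extractive question-answering DistilBERT checkpoint such as this one could be queried with the `transformers` pipeline; the question and context are invented for illustration and may not reflect what the model was fine-tuned on.

```python
# Hypothetical usage sketch; the question and context are made up for illustration.
from transformers import pipeline

qa = pipeline("question-answering", model="amara16/distilbert-extractive-qa-project")

result = qa(
    question="Where was the model card generated?",
    context="This model card was automatically generated and pushed to the Hugging Face Hub.",
)
print(result["answer"], result["score"])
```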
RichardErkhov/cosimoiaia_-_Loquace-410m-gguf
RichardErkhov
2024-06-07T00:40:28Z
869
0
null
[ "gguf", "region:us" ]
null
2024-06-07T00:01:50Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Loquace-410m - GGUF - Model creator: https://huggingface.co/cosimoiaia/ - Original model: https://huggingface.co/cosimoiaia/Loquace-410m/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Loquace-410m.Q2_K.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q2_K.gguf) | Q2_K | 0.16GB | | [Loquace-410m.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.IQ3_XS.gguf) | IQ3_XS | 0.18GB | | [Loquace-410m.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.IQ3_S.gguf) | IQ3_S | 0.18GB | | [Loquace-410m.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q3_K_S.gguf) | Q3_K_S | 0.18GB | | [Loquace-410m.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.IQ3_M.gguf) | IQ3_M | 0.2GB | | [Loquace-410m.Q3_K.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q3_K.gguf) | Q3_K | 0.21GB | | [Loquace-410m.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q3_K_M.gguf) | Q3_K_M | 0.21GB | | [Loquace-410m.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q3_K_L.gguf) | Q3_K_L | 0.22GB | | [Loquace-410m.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.IQ4_XS.gguf) | IQ4_XS | 0.22GB | | [Loquace-410m.Q4_0.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q4_0.gguf) | Q4_0 | 0.23GB | | [Loquace-410m.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.IQ4_NL.gguf) | IQ4_NL | 0.23GB | | [Loquace-410m.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q4_K_S.gguf) | Q4_K_S | 0.23GB | | [Loquace-410m.Q4_K.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q4_K.gguf) | Q4_K | 0.25GB | | [Loquace-410m.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [Loquace-410m.Q4_1.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q4_1.gguf) | Q4_1 | 0.25GB | | [Loquace-410m.Q5_0.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q5_0.gguf) | Q5_0 | 0.27GB | | [Loquace-410m.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q5_K_S.gguf) | Q5_K_S | 0.27GB | | [Loquace-410m.Q5_K.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q5_K.gguf) | Q5_K | 0.28GB | | [Loquace-410m.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q5_K_M.gguf) | Q5_K_M | 0.28GB | | [Loquace-410m.Q5_1.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q5_1.gguf) | Q5_1 | 0.29GB | | [Loquace-410m.Q6_K.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q6_K.gguf) | Q6_K | 0.31GB | | 
[Loquace-410m.Q8_0.gguf](https://huggingface.co/RichardErkhov/cosimoiaia_-_Loquace-410m-gguf/blob/main/Loquace-410m.Q8_0.gguf) | Q8_0 | 0.4GB | Original model description: --- license: cc-by-nc-2.0 datasets: - cosimoiaia/Loquace-102k language: - it pipeline_tag: conversational tags: - alpaca - llama - llm - finetune - Italian - qlora --- Model Card for Loquace-410m # 🇮🇹 Loquace-410m 🇮🇹 An exclusively Italian-speaking, instruction-finetuned Large Language Model. 🇮🇹 The Loquace Italian LLM models were created as a proof-of-concept to evaluate how language tuning can be achieved using QLoRa by instruct-tuning foundational LLMs on a dataset in a specific language. The QLoRa (https://github.com/artidoro/qlora) fine-tuning method significantly lowers the resource requirements compared to other available methods, which makes it easy to run the process on significantly larger datasets while still using consumer GPUs and achieving high accuracy. ## Model Description Loquace-410m is the second smallest model of the Loquace family. It was trained using QLoRa on a large dataset of 102k question/answer pairs exclusively in Italian, using pythia-410m as the base. The related code can be found at: https://github.com/cosimoiaia/Loquace Loquace-410m is part of the big Loquace family: https://huggingface.co/cosimoiaia/Loquace-70m - Based on pythia-70m https://huggingface.co/cosimoiaia/Loquace-410m - Based on pythia-410m https://huggingface.co/cosimoiaia/Loquace-7B - Based on Falcon-7B. https://huggingface.co/cosimoiaia/Loquace-12B - Based on pythia-12B https://huggingface.co/cosimoiaia/Loquace-20B - Based on gpt-neox-20B ## Usage ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig ) tokenizer = AutoTokenizer.from_pretrained("cosimoiaia/Loquace-410m", padding_side="right", use_fast=True) model = AutoModelForCausalLM.from_pretrained( "cosimoiaia/Loquace-410m", device_map="auto", quantization_config=BitsAndBytesConfig( load_in_4bit=True, llm_int8_has_fp16_weight=False ) ) ``` ## Training Loquace-410m was trained on a conversational dataset comprising 102k question/answer pairs in the Italian language. The training data was constructed by putting together translations from the original Alpaca dataset and other sources like the OpenAssistant dataset. The model was trained for only 10000 iterations and took 9 hours on a single RTX 3090, kindly provided by Genesis Cloud. (https://gnsiscld.co/26qhlf) ## Limitations - Loquace-410m may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs. - The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified. - The training data primarily consists of conversational examples and may not generalize well to other types of tasks or domains. ## Dependencies - PyTorch - Transformers library by Hugging Face - Bitsandbytes - QLoRa
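To fetch one of the GGUF quantizations listed in the table above, a small `huggingface_hub` sketch like the following should be enough; `Loquace-410m.Q4_K_M.gguf` is just one of the listed files and can be swapped for any other quant.

```python
# Sketch: download a single quant from this repo; the file can then be run with
# any GGUF-compatible runtime such as llama.cpp.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/cosimoiaia_-_Loquace-410m-gguf",
    filename="Loquace-410m.Q4_K_M.gguf",  # any quant from the table above
)
print(path)
```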
Helsinki-NLP/opus-mt-it-de
Helsinki-NLP
2023-08-16T11:58:48Z
868
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "it", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-it-de * source languages: it * target languages: de * OPUS readme: [it-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/it-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/it-de/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.it.de | 49.4 | 0.678 |
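## Example usage

A minimal sketch of running this model for Italian→German translation with `transformers`; the example sentence is illustrative and generation settings are left at their defaults.

```python
# Sketch: Italian -> German translation with this Marian model.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-it-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["La pizza è un piatto tipico italiano."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```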
beki/en_spacy_pii_distilbert
beki
2023-05-06T04:28:34Z
868
29
spacy
[ "spacy", "distilbert", "token-classification", "en", "dataset:beki/privy", "model-index", "region:us" ]
token-classification
2022-10-14T01:58:50Z
--- tags: - spacy - token-classification language: - en model-index: - name: en_spacy_pii_distilbert results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9530385872 - name: NER Recall type: recall value: 0.9554103008 - name: NER F Score type: f_score value: 0.9542229703 widget: - text: >- SELECT shipping FROM users WHERE shipping = '201 Thayer St Providence RI 02912' datasets: - beki/privy --- | Feature | Description | | --- | --- | | **Name** | `en_spacy_pii_distilbert` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.1,<3.5.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | Trained on a new [dataset for structured PII](https://huggingface.co/datasets/beki/privy) generated by [Privy](https://github.com/pixie-io/pixie/tree/main/src/datagen/pii/privy). For more details, see this [blog post](https://blog.px.dev/detect-pii/) | | **License** | MIT | | **Author** | [Benjamin Kilimnik](https://www.linkedin.com/in/benkilimnik/) | ### Label Scheme <details> <summary>View label scheme (5 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `DATE_TIME`, `LOC`, `NRP`, `ORG`, `PER` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 95.42 | | `ENTS_P` | 95.30 | | `ENTS_R` | 95.54 | | `TRANSFORMER_LOSS` | 61154.85 | | `NER_LOSS` | 56001.88 |
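A short usage sketch, assuming the packaged pipeline from this repository has already been installed (for example from the wheel published in the repo files) so that spaCy can load it by name; the input string is the widget example from the card.

```python
# Sketch: run the PII NER pipeline on the structured-text example from the card.
import spacy

nlp = spacy.load("en_spacy_pii_distilbert")  # assumes the package is installed
doc = nlp("SELECT shipping FROM users WHERE shipping = '201 Thayer St Providence RI 02912'")

for ent in doc.ents:
    # Label scheme: DATE_TIME, LOC, NRP, ORG, PER
    print(ent.text, ent.label_)
```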
timm/swin_small_patch4_window7_224.ms_in22k
timm
2024-02-10T23:31:42Z
868
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-22k", "arxiv:2103.14030", "license:mit", "region:us" ]
image-classification
2023-03-18T04:14:06Z
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-22k --- # Model card for swin_small_patch4_window7_224.ms_in22k A Swin Transformer image classification model. Pretrained on ImageNet-22k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 65.6 - GMACs: 8.8 - Activations (M): 27.5 - Image size: 224 x 224 - **Papers:** - Swin Transformer: Hierarchical Vision Transformer using Shifted Windows: https://arxiv.org/abs/2103.14030 - **Original:** https://github.com/microsoft/Swin-Transformer - **Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('swin_small_patch4_window7_224.ms_in22k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swin_small_patch4_window7_224.ms_in22k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g. for swin_base_patch4_window7_224 (NHWC output) # torch.Size([1, 56, 56, 128]) # torch.Size([1, 28, 28, 256]) # torch.Size([1, 14, 14, 512]) # torch.Size([1, 7, 7, 1024]) # e.g. 
for swinv2_cr_small_ns_224 (NCHW output) # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'swin_small_patch4_window7_224.ms_in22k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, H, W, num_features) tensor for swin / swinv2 # or (batch_size, num_features, H, W) for swinv2_cr output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{liu2021Swin, title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
teknium/CollectiveCognition-v1.1-Mistral-7B
teknium
2023-10-07T00:22:52Z
868
78
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "mistral-7b", "instruct", "finetune", "gpt4", "synthetic data", "distillation", "sharegpt", "en", "dataset:CollectiveCognition/chats-data-2023-09-27", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-04T20:29:59Z
--- base_model: mistralai/Mistral-7B-v0.1 tags: - mistral-7b - instruct - finetune - gpt4 - synthetic data - distillation - sharegpt datasets: - CollectiveCognition/chats-data-2023-09-27 model-index: - name: CollectiveCognition-v1-Mistral-7B results: [] license: apache-2.0 language: - en --- **Collective Cognition v1.1 - Mistral 7B** <div style="display: flex; justify-content: center;"> <a href="https://collectivecognition.ai" target="_blank" style="display: inline-block; text-align: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/DNZXsJE5oC_rM8eYY6H_x.png" alt="Collective Cognition Logo" width="50%" style="display: block; margin: 0 auto;"> </a> </div> ## Model Description: Collective Cognition v1.1 is a state-of-the-art model fine-tuned using the Mistral approach. This model is particularly notable for its performance, outperforming many 70B models on the TruthfulQA benchmark. This benchmark assesses models for common misconceptions, potentially indicating hallucination rates. ## Special Features: - **Quick Training**: This model was trained in just 3 minutes on a single 4090 with a qlora, and competes with 70B scale Llama-2 Models at TruthfulQA. - **Limited Data**: Despite its exceptional performance, it was trained on only ONE HUNDRED data points, all of which were gathered from a platform reminiscent of ShareGPT. - **Extreme TruthfulQA Benchmark**: This model is competing strongly with top 70B models on the TruthfulQA benchmark despite the small dataset and qlora training! ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/-pnifxPcMeeUONyE3efo3.png) ## Acknowledgements: Special thanks to @a16z and all contributors to the Collective Cognition dataset for making the development of this model possible. ## Dataset: The model was trained using data from the Collective Cognition website. The efficacy of this dataset is demonstrated by the model's stellar performance, suggesting that further expansion of this dataset could yield even more promising results. The data is reminiscent of that collected from platforms like ShareGPT. You can contribute to the growth of the dataset by sharing your own ChatGPT chats [here](https://CollectiveCognition.ai). You can download the datasets created by Collective Cognition here: https://huggingface.co/CollectiveCognition ## Performance: - **TruthfulQA**: Collective Cognition v1.1 has notably outperformed various 70B models on the TruthfulQA benchmark, highlighting its ability to understand and rectify common misconceptions. 
## Usage: Prompt Format: ``` USER: <prompt> ASSISTANT: ``` OR ``` <system message> USER: <prompt> ASSISTANT: ``` ## Benchmarks: Collective Cognition v1.0 TruthfulQA: ``` | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4051|± |0.0172| | | |mc2 |0.5738|± |0.0157| ``` Collective Cognition v1.1 GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.5085|± |0.0146| | | |acc_norm|0.5384|± |0.0146| |arc_easy | 0|acc |0.7963|± |0.0083| | | |acc_norm|0.7668|± |0.0087| |boolq | 1|acc |0.8495|± |0.0063| |hellaswag | 0|acc |0.6399|± |0.0048| | | |acc_norm|0.8247|± |0.0038| |openbookqa | 0|acc |0.3240|± |0.0210| | | |acc_norm|0.4540|± |0.0223| |piqa | 0|acc |0.7992|± |0.0093| | | |acc_norm|0.8107|± |0.0091| |winogrande | 0|acc |0.7348|± |0.0124| Average: 71.13 ``` AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.1929|± |0.0248| | | |acc_norm|0.2008|± |0.0252| |agieval_logiqa_en | 0|acc |0.3134|± |0.0182| | | |acc_norm|0.3333|± |0.0185| |agieval_lsat_ar | 0|acc |0.2217|± |0.0275| | | |acc_norm|0.2043|± |0.0266| |agieval_lsat_lr | 0|acc |0.3412|± |0.0210| | | |acc_norm|0.3216|± |0.0207| |agieval_lsat_rc | 0|acc |0.4721|± |0.0305| | | |acc_norm|0.4201|± |0.0301| |agieval_sat_en | 0|acc |0.6068|± |0.0341| | | |acc_norm|0.5777|± |0.0345| |agieval_sat_en_without_passage| 0|acc |0.3932|± |0.0341| | | |acc_norm|0.3641|± |0.0336| |agieval_sat_math | 0|acc |0.2864|± |0.0305| | | |acc_norm|0.2636|± |0.0298| Average: 33.57 ``` Training run on wandb here: https://wandb.ai/teknium1/collectivecognition-mistral-7b/runs/collectivecognition-mistral-8/workspace ## Licensing: Apache 2.0 ---
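As a rough, untested sketch of using the USER/ASSISTANT prompt format above with `transformers` (dtype, device handling, and sampling settings are illustrative, not prescribed by the card):

```python
# Sketch: generation with the USER/ASSISTANT prompt format described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "teknium/CollectiveCognition-v1.1-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

prompt = "USER: What is a common misconception about lightning?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```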
databio/r2v-encode-hg38
databio
2024-02-02T13:13:23Z
868
0
null
[ "region:us" ]
null
2024-02-02T12:29:28Z
Entry not found
Chrisisis/5DwQJFWEA85ib2Cacq2TbPHwq1BiFzrBr3pRGRnzq73Sop25_vgg
Chrisisis
2024-02-24T08:29:59Z
868
0
keras
[ "keras", "region:us" ]
null
2024-02-11T17:23:04Z
Entry not found
bartowski/deepseek-coder-6.7B-kexer-GGUF
bartowski
2024-05-25T17:47:02Z
868
1
null
[ "gguf", "code", "text-generation", "dataset:JetBrains/KExercises", "base_model:deepseek-ai/deepseek-coder-6.7b-base", "license:apache-2.0", "region:us" ]
text-generation
2024-05-25T17:29:17Z
--- license: apache-2.0 datasets: - JetBrains/KExercises base_model: deepseek-ai/deepseek-coder-6.7b-base results: - task: type: text-generation dataset: name: MultiPL-HumanEval (Kotlin) type: openai_humaneval metrics: - name: pass@1 type: pass@1 value: 55.28 tags: - code quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of deepseek-coder-6.7B-kexer Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization. Original model: https://huggingface.co/JetBrains/deepseek-coder-6.7B-kexer All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format No chat template specified so default is used. This may be incorrect, check original model card for details. ``` <|begin▁of▁sentence|>[INST] <<SYS>> {system_prompt} <</SYS>> {prompt} [/INST] <|end▁of▁sentence|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [deepseek-coder-6.7B-kexer-Q8_0.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q8_0.gguf) | Q8_0 | 7.16GB | Extremely high quality, generally unneeded but max available quant. | | [deepseek-coder-6.7B-kexer-Q6_K.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q6_K.gguf) | Q6_K | 5.53GB | Very high quality, near perfect, *recommended*. | | [deepseek-coder-6.7B-kexer-Q5_K_M.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q5_K_M.gguf) | Q5_K_M | 4.78GB | High quality, *recommended*. | | [deepseek-coder-6.7B-kexer-Q5_K_S.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q5_K_S.gguf) | Q5_K_S | 4.65GB | High quality, *recommended*. | | [deepseek-coder-6.7B-kexer-Q4_K_M.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q4_K_M.gguf) | Q4_K_M | 4.08GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [deepseek-coder-6.7B-kexer-Q4_K_S.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q4_K_S.gguf) | Q4_K_S | 3.85GB | Slightly lower quality with more space savings, *recommended*. | | [deepseek-coder-6.7B-kexer-IQ4_NL.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ4_NL.gguf) | IQ4_NL | 3.82GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [deepseek-coder-6.7B-kexer-IQ4_XS.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ4_XS.gguf) | IQ4_XS | 3.62GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [deepseek-coder-6.7B-kexer-Q3_K_L.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q3_K_L.gguf) | Q3_K_L | 3.59GB | Lower quality but usable, good for low RAM availability. | | [deepseek-coder-6.7B-kexer-Q3_K_M.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. 
| | [deepseek-coder-6.7B-kexer-IQ3_M.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ3_M.gguf) | IQ3_M | 3.11GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [deepseek-coder-6.7B-kexer-IQ3_S.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ3_S.gguf) | IQ3_S | 2.95GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [deepseek-coder-6.7B-kexer-Q3_K_S.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q3_K_S.gguf) | Q3_K_S | 2.95GB | Low quality, not recommended. | | [deepseek-coder-6.7B-kexer-IQ3_XS.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ3_XS.gguf) | IQ3_XS | 2.79GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [deepseek-coder-6.7B-kexer-IQ3_XXS.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ3_XXS.gguf) | IQ3_XXS | 2.58GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [deepseek-coder-6.7B-kexer-Q2_K.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-Q2_K.gguf) | Q2_K | 2.53GB | Very low quality but surprisingly usable. | | [deepseek-coder-6.7B-kexer-IQ2_M.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ2_M.gguf) | IQ2_M | 2.36GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [deepseek-coder-6.7B-kexer-IQ2_S.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ2_S.gguf) | IQ2_S | 2.19GB | Very low quality, uses SOTA techniques to be usable. | | [deepseek-coder-6.7B-kexer-IQ2_XS.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ2_XS.gguf) | IQ2_XS | 2.03GB | Very low quality, uses SOTA techniques to be usable. | | [deepseek-coder-6.7B-kexer-IQ2_XXS.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ2_XXS.gguf) | IQ2_XXS | 1.85GB | Lower quality, uses SOTA techniques to be usable. | | [deepseek-coder-6.7B-kexer-IQ1_M.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ1_M.gguf) | IQ1_M | 1.65GB | Extremely low quality, *not* recommended. | | [deepseek-coder-6.7B-kexer-IQ1_S.gguf](https://huggingface.co/bartowski/deepseek-coder-6.7B-kexer-GGUF/blob/main/deepseek-coder-6.7B-kexer-IQ1_S.gguf) | IQ1_S | 1.53GB | Extremely low quality, *not* recommended. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/deepseek-coder-6.7B-kexer-GGUF --include "deepseek-coder-6.7B-kexer-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. 
In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/deepseek-coder-6.7B-kexer-GGUF --include "deepseek-coder-6.7B-kexer-Q8_0.gguf/*" --local-dir deepseek-coder-6.7B-kexer-Q8_0 ``` You can either specify a new local-dir (deepseek-coder-6.7B-kexer-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
gglabs/TinyLM-Chat-0611-6-epoch
gglabs
2024-06-11T16:51:47Z
868
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-11T14:05:42Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** gglabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
NeelNanda/GELU_2L512W_C4_Code
NeelNanda
2022-10-23T13:09:23Z
867
1
transformers
[ "transformers", "endpoints_compatible", "region:us" ]
null
2022-10-23T13:01:58Z
Entry not found
tannedbum/L3-Nymeria-8B-iGGUF
tannedbum
2024-06-22T14:15:11Z
867
8
null
[ "gguf", "roleplay", "sillytavern", "llama3", "not-for-all-audiences", "en", "license:cc-by-nc-4.0", "region:us" ]
null
2024-06-15T01:44:04Z
--- license: cc-by-nc-4.0 quantized_by: tannedbum language: - en tags: - roleplay - sillytavern - llama3 - not-for-all-audiences --- ![Nymeria](https://huggingface.co/tannedbum/L3-Nymeria-8B/resolve/main/Nymeria.png?) ## The smartest L3 8B model combined with high-end RP model. What could go wrong. The idea was to fuse a bit of SimPO's realism with Stheno. It took a few days to come up with a balanced slerp configuration, but I'm more than satisfied with the end result. All quants made using imatrix option with dataset provided by bartowski [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## SillyTavern ## Text Completion presets ``` temp 0.9 top_k 30 top_p 0.75 min_p 0.2 rep_pen 1.1 smooth_factor 0.25 smooth_curve 1 ``` ## Advanced Formatting [Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v2.0) Instruct Mode: Enabled # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). This model was merged using the slerp merge method. ### Models Merged The following models were included in the merge: * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) * [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Sao10K/L3-8B-Stheno-v3.2 layer_range: [0, 32] - model: princeton-nlp/Llama-3-Instruct-8B-SimPO layer_range: [0, 32] merge_method: slerp base_model: Sao10K/L3-8B-Stheno-v3.2 parameters: t: - filter: self_attn value: [0.4, 0.5, 0.6, 0.4, 0.6] - filter: mlp value: [0.6, 0.5, 0.4, 0.6, 0.4] - value: 0.5 dtype: bfloat16 ``` --- ## Original model information: ## Model: Sao10K/L3-8B-Stheno-v3.2 Stheno-v3.2-Zeta Changes compared to v3.1 <br>\- Included a mix of SFW and NSFW Storywriting Data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts) <br>\- Included More Instruct / Assistant-Style Data <br>\- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it. <br>\- Hyperparameter tinkering for training, resulting in lower loss levels. Testing Notes - Compared to v3.1 <br>\- Handles SFW / NSFW seperately better. Not as overly excessive with NSFW now. Kinda balanced. <br>\- Better at Storywriting / Narration. <br>\- Better at Assistant-type Tasks. <br>\- Better Multi-Turn Coherency -> Reduced Issues? <br>\- Slightly less creative? A worthy tradeoff. Still creative. <br>\- Better prompt / instruction adherence. --- Want to support my work ? My Ko-fi page: https://ko-fi.com/tannedbum
speechbrain/tts-tacotron2-ljspeech
speechbrain
2024-02-19T15:16:07Z
866
117
speechbrain
[ "speechbrain", "text-to-speech", "TTS", "speech-synthesis", "Tacotron2", "en", "dataset:LJSpeech", "arxiv:1712.05884", "arxiv:2106.04624", "license:apache-2.0", "region:us" ]
text-to-speech
2022-05-28T21:09:37Z
--- language: "en" tags: - text-to-speech - TTS - speech-synthesis - Tacotron2 - speechbrain license: "apache-2.0" datasets: - LJSpeech metrics: - mos --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # Text-to-Speech (TTS) with Tacotron2 trained on LJSpeech This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [Tacotron2](https://arxiv.org/abs/1712.05884) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/). The pre-trained model takes in input a short text and produces a spectrogram in output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram. ## Install SpeechBrain ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform Text-to-Speech (TTS) ```python import torchaudio from speechbrain.inference.TTS import Tacotron2 from speechbrain.inference.vocoders import HIFIGAN # Intialize TTS (tacotron2) and Vocoder (HiFIGAN) tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts") hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder") # Running the TTS mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb") # Running Vocoder (spectrogram-to-waveform) waveforms = hifi_gan.decode_batch(mel_output) # Save the waverform torchaudio.save('example_TTS.wav',waveforms.squeeze(1), 22050) ``` If you want to generate multiple sentences in one-shot, you can do in this way: ``` from speechbrain.pretrained import Tacotron2 tacotron2 = Tacotron2.from_hparams(source="speechbrain/TTS_Tacotron2", savedir="tmpdir") items = [ "A quick brown fox jumped over the lazy dog", "How much wood would a woodchuck chuck?", "Never odd or even" ] mel_outputs, mel_lengths, alignments = tacotron2.encode_batch(items) ``` ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain. To train it from scratch follow these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ```bash cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ```bash cd recipes/LJSpeech/TTS/tacotron2/ python train.py --device=cuda:0 --max_grad_norm=1.0 --data_folder=/your_folder/LJSpeech-1.1 hparams/train.yaml ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1PKju-_Nal3DQqd-n0PsaHK-bVIOlbf26?usp=sharing). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/ # **Citing SpeechBrain** Please, cite SpeechBrain if you use it for your research or business. 
```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```
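Supplementing the "Inference on GPU" note above, here is a minimal sketch (assuming a CUDA-capable GPU and the two repositories named in this card) of how `run_opts` moves both the TTS model and the vocoder onto the GPU:

```python
# Hedged sketch: the same TTS pipeline as above, but running on a CUDA GPU via run_opts.
import torchaudio
from speechbrain.inference.TTS import Tacotron2
from speechbrain.inference.vocoders import HIFIGAN

run_opts = {"device": "cuda"}
tacotron2 = Tacotron2.from_hparams(
    source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts", run_opts=run_opts
)
hifi_gan = HIFIGAN.from_hparams(
    source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder", run_opts=run_opts
)

mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
waveforms = hifi_gan.decode_batch(mel_output)

# Move the waveform back to CPU before saving.
torchaudio.save("example_TTS_gpu.wav", waveforms.squeeze(1).cpu(), 22050)
```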
osiria/bert-base-italian-uncased
osiria
2023-12-09T11:58:33Z
866
1
transformers
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "it", "arxiv:1810.04805", "arxiv:2010.05609", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-06-03T10:48:54Z
---
license: apache-2.0
language:
- it
widget:
- text: "milano è una [MASK] dell'italia"
  example_title: "Example 1"
- text: "giacomo leopardi è stato uno dei più grandi [MASK] del classicismo italiano"
  example_title: "Example 2"
- text: "la pizza è un piatto tipico della [MASK] gastronomica italiana"
  example_title: "Example 3"
---

--------------------------------------------------------------------------------------------------

<body>
<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span>
<br>
<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">  </span>
<br>
<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: BERT</span>
<br>
<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span>
<br>
<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  Type: Uncased</span>
<br>
<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span>
</body>

--------------------------------------------------------------------------------------------------

<h3>Model description</h3>

This is an uncased <b>BERT</b> <b>[1]</b> model for the <b>Italian</b> language, obtained using the uncased <b>mBERT</b> ([bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased)) as a starting point and focusing it on the Italian language by modifying the embedding layer (as in <b>[2]</b>, computing document-level frequencies over the <b>Wikipedia</b> dataset).

The resulting model has 110M parameters, a vocabulary of 30.154 tokens, and a size of ~430 MB.

<h3>Quick usage</h3>

```python
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("osiria/bert-base-italian-uncased")
model = BertModel.from_pretrained("osiria/bert-base-italian-uncased")
```

<h3>References</h3>

[1] https://arxiv.org/abs/1810.04805

[2] https://arxiv.org/abs/2010.05609

<h3>License</h3>

The model is released under the <b>Apache-2.0</b> license.
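As a small addition to the quick-usage snippet above, here is a minimal sketch (not part of the original card) that runs one of the widget examples through the `fill-mask` pipeline:

```python
# Hedged sketch: query the model with the fill-mask pipeline, using a widget example from this card.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="osiria/bert-base-italian-uncased")
for prediction in fill_mask("milano è una [MASK] dell'italia"):
    print(prediction["token_str"], round(prediction["score"], 3))
```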
KoboldAI/LLaMA2-13B-TiefighterLR-GGUF
KoboldAI
2023-10-24T11:05:15Z
866
20
null
[ "gguf", "license:llama2", "region:us" ]
null
2023-10-23T11:09:29Z
---
license: llama2
---

# LLaMA2-13B-TiefighterLR

TiefighterLR is a merged model achieved through merging two different LoRAs on top of a well-established existing merge.
This LR version contains Less Rodeo, merged at 3% instead of the original 5%, reducing its second-person adventure bias. Testers found this model to understand your own character and instruction prompts better, at the cost of lowering its own writing bias/style.

To achieve this, the following recipe was used:

* We begin with the base model Undi95/Xwin-MLewd-13B-V0.2, which is a well-established merge; contrary to the name, this model does not have a strong NSFW bias.
* Then we applied the PocketDoc/Dans-RetroRodeo-13b LoRA, which is a finetune on the Choose Your Own Adventure datasets from our Skein model.
* After applying this LoRA, we merged the original model with the newly created PocketDoc/Dans-RetroRodeo-13b merge at 3% to weaken the newly introduced adventure bias.
* The resulting merge was used as a new base model to which we applied Blackroot/Llama-2-13B-Storywriter-LORA and repeated the same trick, this time at 10%.

This means this model contains the following ingredients from its upstream models, as far as we can track them:
- Undi95/Xwin-MLewd-13B-V0.2
- - Undi95/ReMM-S-Light (base/private)
- Undi95/CreativeEngine
- Brouz/Slerpeno
- - elinas/chronos-13b-v2
- jondurbin/airoboros-l2-13b-2.1
- NousResearch/Nous-Hermes-Llama2-13b+nRuaif/Kimiko-v2 LORA
- CalderaAI/13B-Legerdemain-L2+lemonilia/limarp-llama2-v2 LORA
- - KoboldAI/LLAMA2-13B-Holodeck-1
- NousResearch/Nous-Hermes-13b
- OpenAssistant/llama2-13b-orca-8k-3319
- ehartford/WizardLM-1.0-Uncensored-Llama2-13b
- Henk717/spring-dragon
- The-Face-Of-Goonery/Huginn-v3-13b
- zattio770/120-Days-of-LORA-v2-13B
- PygmalionAI/pygmalion-2-13b
- Undi95/StoryTelling
- TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter
- nRuaif/Kimiko-v2-13B
- The-Face-Of-Goonery/Huginn-13b-FP16
- lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT
- Xwin-LM/Xwin-LM-13B-V0.2
- PocketDoc/Dans-RetroRodeo-13b
- Blackroot/Llama-2-13B-Storywriter-LORA

# Usage

This model is meant to be creative. If you let it improvise, you get better results than if you drown it in details.

## Story Writing

Regular story writing in the traditional way is supported: simply copy-paste your story and continue writing. Optionally use an instruction in memory or an author's note to guide the direction of your story.

### Generate a story on demand

To generate stories on demand, you can use an instruction (tested in the Alpaca format) such as "Write a novel about X, use chapters and dialogue"; this will generate a story. The format can vary between generations depending on how the model chooses to begin: either write what you want as shown in the earlier example, or write the beginning of the story yourself so the model can follow your style. A few retries can also help if the model gets it wrong.

## Chatbots and personas

Unlike the original Tiefighter, this model is better at handling existing Character Cards as long as they do not contain a lot of second-person writing or second-person introductions (You). Setting > as a custom stop sequence can help fix potential mistakes, as can turning multi-line replies off.

You can also use instructions to create your characters. For example, you can put this in memory in regular chat mode:

```
### Instruction:
Generate a conversation between Alice and Henk where they discuss language models.
In this conversation Henk is excited to teach Alice about Tiefighter.
### Response:
```

Because the model is a merge of a variety of models, it should support a broad range of instruct formats, or plain chat mode. If you have a particular favourite, try it; otherwise we recommend either the regular chat mode or Alpaca's format.

## Instruct Prompting

This model incorporates various instruct models trained on a variety of instruction styles; when testing the model we used Alpaca for our own tests. If you prefer a different format, chances are it can work.

During instructions we have observed that in some cases the adventure data can leak; it may be worth experimenting with > as the prefix for a user command to remedy this, but this may result in a stronger fiction bias.

Keep in mind that while this model can be used as a factual instruct model, the focus was on fiction. Information provided by the model can be made up.

## Adventuring and Adventure Games

This model contains a LoRA that was trained on the same adventure dataset as the KoboldAI Skein model.

Adventuring is best done using a small introduction to the world and your objective, while using the > prefix for a user command (KoboldAI's adventure mode).

It is possible that the model does not immediately pick up on what you wish to do and does not engage in its Adventure mode behaviour right away. Simply correct the output manually to trim excess dialogue or other undesirable behaviour and continue to submit your actions using the appropriate mode. The model should pick up on this style quickly and will correctly follow this format within 3 turns.

## Discovered something cool and want to engage with us?

Join our community at https://koboldai.org/discord !

### This model would not be possible without the awesome work from:

Undi95, PocketDoc, Blackroot, Brouz, The Face of Goonery, zattio770, PygmalionAI, TokenBender, nRuaif, lemonilia and Xwin-LM.
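To tie the usage notes above together, here is a minimal sketch, not from the original card, of running the GGUF weights with `llama-cpp-python` (assumed here as the runtime; the file name is a hypothetical placeholder). It builds an Alpaca-format prompt from the memory example above and adds > as an extra stop sequence, as the card suggests:

```python
# Hedged sketch: Alpaca-style prompting of the GGUF weights via llama-cpp-python.
# The model_path is a hypothetical placeholder; point it at whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="LLaMA2-13B-TiefighterLR.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "### Instruction:\n"
    "Generate a conversation between Alice and Henk where they discuss language models. "
    "In this conversation Henk is excited to teach Alice about Tiefighter.\n\n"
    "### Response:\n"
)
output = llm(
    prompt,
    max_tokens=300,
    temperature=0.8,
    stop=["### Instruction:", ">"],  # ">" as a custom stop sequence, per the card
)
print(output["choices"][0]["text"])
```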
IkariDev/Athnete-13B
IkariDev
2023-10-29T03:13:42Z
866
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-29T01:00:24Z
---
license: cc-by-nc-4.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/UntR48TeeY7R0gvmByLdO.png)

Athnete model. Use the Alpaca format. Suitable for RP, ERP and general stuff.

Despite the awful banner I made, this model could actually be better than raw Athena v3.

<!-- description start -->
## Description

<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->

This repo contains fp16 files of Athnete.

<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GGUF) -->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GPTQ) -->
<!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) -->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-AWQ) -->

[fp16 - by IkariDev](https://huggingface.co/IkariDev/Athnete-13B)

[GGUF - by IkariDev](https://huggingface.co/IkariDev/Athnete-13B-GGUF)

<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v3-GGUF) -->

## Ratings:

Note: I have permission from all users to upload their ratings; I DON'T screenshot random reviews without asking if I can put them here!
<!-- description end -->

<!-- description start -->
## Models and loras used

- IkariDev/Athena-v3
- Undi95/Nete-13B
<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (the recipe was my idea, he merged).
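As a small illustration of the Alpaca template above, here is a minimal sketch (not part of the original card; the instruction text is a made-up placeholder, and `device_map="auto"` assumes `accelerate` is installed) of generating with the fp16 weights through the `transformers` pipeline:

```python
# Hedged sketch: fill the Alpaca template and generate with the fp16 weights.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="IkariDev/Athnete-13B",
    torch_dtype="auto",
    device_map="auto",
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a short in-character greeting for a tavern keeper.\n\n"  # placeholder instruction
    "### Response:\n"
)
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```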
McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse
McGill-NLP
2024-04-11T20:09:10Z
866
7
peft
[ "peft", "safetensors", "text-embedding", "embeddings", "information-retrieval", "beir", "text-classification", "language-model", "text-clustering", "text-semantic-similarity", "text-evaluation", "text-reranking", "feature-extraction", "sentence-similarity", "Sentence Similarity", "natural_questions", "ms_marco", "fever", "hotpot_qa", "mteb", "en", "arxiv:2404.05961", "license:mit", "model-index", "region:us" ]
sentence-similarity
2024-04-04T03:06:33Z
--- library_name: peft license: mit language: - en pipeline_tag: sentence-similarity tags: - text-embedding - embeddings - information-retrieval - beir - text-classification - language-model - text-clustering - text-semantic-similarity - text-evaluation - text-reranking - feature-extraction - sentence-similarity - Sentence Similarity - natural_questions - ms_marco - fever - hotpot_qa - mteb model-index: - name: LLM2Vec-Mistral-7B-unsupervised results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.94029850746269 - type: ap value: 41.01055096636703 - type: f1 value: 71.2582580801963 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 85.288275 - type: ap value: 80.9174293931393 - type: f1 value: 85.26284279319103 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.089999999999996 - type: f1 value: 46.42571856588491 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 25.676 - type: map_at_10 value: 41.705999999999996 - type: map_at_100 value: 42.649 - type: map_at_1000 value: 42.655 - type: map_at_3 value: 36.214 - type: map_at_5 value: 39.475 - type: mrr_at_1 value: 26.173999999999996 - type: mrr_at_10 value: 41.873 - type: mrr_at_100 value: 42.817 - type: mrr_at_1000 value: 42.823 - type: mrr_at_3 value: 36.427 - type: mrr_at_5 value: 39.646 - type: ndcg_at_1 value: 25.676 - type: ndcg_at_10 value: 51.001 - type: ndcg_at_100 value: 55.001 - type: ndcg_at_1000 value: 55.167 - type: ndcg_at_3 value: 39.713 - type: ndcg_at_5 value: 45.596 - type: precision_at_1 value: 25.676 - type: precision_at_10 value: 8.087 - type: precision_at_100 value: 0.983 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.619 - type: precision_at_5 value: 12.831000000000001 - type: recall_at_1 value: 25.676 - type: recall_at_10 value: 80.868 - type: recall_at_100 value: 98.29299999999999 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 49.858000000000004 - type: recall_at_5 value: 64.154 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.557333278165295 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 39.921940994207674 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 58.602773795071585 - type: mrr value: 72.93749725190169 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_spearman value: 83.29045204631967 - task: type: Classification dataset: type: mteb/banking77 name: 
MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 86.1590909090909 - type: f1 value: 86.08993054539444 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 36.13784714320738 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 30.26284987791574 - task: type: Retrieval dataset: type: cqadupstack/android name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 27.611 - type: map_at_10 value: 37.838 - type: map_at_100 value: 39.446999999999996 - type: map_at_1000 value: 39.583 - type: map_at_3 value: 34.563 - type: map_at_5 value: 36.332 - type: mrr_at_1 value: 35.765 - type: mrr_at_10 value: 44.614 - type: mrr_at_100 value: 45.501000000000005 - type: mrr_at_1000 value: 45.558 - type: mrr_at_3 value: 42.513 - type: mrr_at_5 value: 43.515 - type: ndcg_at_1 value: 35.765 - type: ndcg_at_10 value: 44.104 - type: ndcg_at_100 value: 50.05500000000001 - type: ndcg_at_1000 value: 52.190000000000005 - type: ndcg_at_3 value: 39.834 - type: ndcg_at_5 value: 41.491 - type: precision_at_1 value: 35.765 - type: precision_at_10 value: 8.870000000000001 - type: precision_at_100 value: 1.505 - type: precision_at_1000 value: 0.2 - type: precision_at_3 value: 19.886 - type: precision_at_5 value: 14.277999999999999 - type: recall_at_1 value: 27.611 - type: recall_at_10 value: 55.065 - type: recall_at_100 value: 80.60199999999999 - type: recall_at_1000 value: 94.517 - type: recall_at_3 value: 41.281 - type: recall_at_5 value: 46.791 - task: type: Retrieval dataset: type: cqadupstack/english name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 28.599999999999998 - type: map_at_10 value: 38.218999999999994 - type: map_at_100 value: 39.336 - type: map_at_1000 value: 39.464 - type: map_at_3 value: 35.599 - type: map_at_5 value: 36.927 - type: mrr_at_1 value: 37.197 - type: mrr_at_10 value: 44.759 - type: mrr_at_100 value: 45.372 - type: mrr_at_1000 value: 45.422000000000004 - type: mrr_at_3 value: 42.941 - type: mrr_at_5 value: 43.906 - type: ndcg_at_1 value: 37.197 - type: ndcg_at_10 value: 43.689 - type: ndcg_at_100 value: 47.588 - type: ndcg_at_1000 value: 49.868 - type: ndcg_at_3 value: 40.434 - type: ndcg_at_5 value: 41.617 - type: precision_at_1 value: 37.197 - type: precision_at_10 value: 8.529 - type: precision_at_100 value: 1.325 - type: precision_at_1000 value: 0.181 - type: precision_at_3 value: 20.212 - type: precision_at_5 value: 13.987 - type: recall_at_1 value: 28.599999999999998 - type: recall_at_10 value: 52.266999999999996 - type: recall_at_100 value: 69.304 - type: recall_at_1000 value: 84.249 - type: recall_at_3 value: 41.281 - type: recall_at_5 value: 45.56 - task: type: Retrieval dataset: type: cqadupstack/gaming name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 33.168 - type: map_at_10 value: 44.690999999999995 - type: map_at_100 value: 45.804 - type: map_at_1000 value: 45.876 - type: map_at_3 value: 41.385 - type: map_at_5 value: 43.375 - type: mrr_at_1 value: 38.997 - type: mrr_at_10 value: 48.782 - type: 
mrr_at_100 value: 49.534 - type: mrr_at_1000 value: 49.57 - type: mrr_at_3 value: 46.134 - type: mrr_at_5 value: 47.814 - type: ndcg_at_1 value: 38.997 - type: ndcg_at_10 value: 50.707 - type: ndcg_at_100 value: 55.358 - type: ndcg_at_1000 value: 56.818999999999996 - type: ndcg_at_3 value: 45.098 - type: ndcg_at_5 value: 48.065999999999995 - type: precision_at_1 value: 38.997 - type: precision_at_10 value: 8.414000000000001 - type: precision_at_100 value: 1.159 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 20.564 - type: precision_at_5 value: 14.445 - type: recall_at_1 value: 33.168 - type: recall_at_10 value: 64.595 - type: recall_at_100 value: 85.167 - type: recall_at_1000 value: 95.485 - type: recall_at_3 value: 49.555 - type: recall_at_5 value: 56.871 - task: type: Retrieval dataset: type: cqadupstack/gis name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 17.254 - type: map_at_10 value: 23.925 - type: map_at_100 value: 25.008000000000003 - type: map_at_1000 value: 25.123 - type: map_at_3 value: 21.676000000000002 - type: map_at_5 value: 23.042 - type: mrr_at_1 value: 18.756999999999998 - type: mrr_at_10 value: 25.578 - type: mrr_at_100 value: 26.594 - type: mrr_at_1000 value: 26.680999999999997 - type: mrr_at_3 value: 23.371 - type: mrr_at_5 value: 24.721 - type: ndcg_at_1 value: 18.756999999999998 - type: ndcg_at_10 value: 27.878999999999998 - type: ndcg_at_100 value: 33.285 - type: ndcg_at_1000 value: 36.333 - type: ndcg_at_3 value: 23.461000000000002 - type: ndcg_at_5 value: 25.836 - type: precision_at_1 value: 18.756999999999998 - type: precision_at_10 value: 4.429 - type: precision_at_100 value: 0.754 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 9.981 - type: precision_at_5 value: 7.412000000000001 - type: recall_at_1 value: 17.254 - type: recall_at_10 value: 38.42 - type: recall_at_100 value: 63.50900000000001 - type: recall_at_1000 value: 86.787 - type: recall_at_3 value: 26.840999999999998 - type: recall_at_5 value: 32.4 - task: type: Retrieval dataset: type: cqadupstack/mathematica name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 10.495000000000001 - type: map_at_10 value: 16.505 - type: map_at_100 value: 17.59 - type: map_at_1000 value: 17.709 - type: map_at_3 value: 13.974 - type: map_at_5 value: 15.466 - type: mrr_at_1 value: 14.179 - type: mrr_at_10 value: 20.396 - type: mrr_at_100 value: 21.384 - type: mrr_at_1000 value: 21.47 - type: mrr_at_3 value: 17.828 - type: mrr_at_5 value: 19.387999999999998 - type: ndcg_at_1 value: 14.179 - type: ndcg_at_10 value: 20.852 - type: ndcg_at_100 value: 26.44 - type: ndcg_at_1000 value: 29.448999999999998 - type: ndcg_at_3 value: 16.181 - type: ndcg_at_5 value: 18.594 - type: precision_at_1 value: 14.179 - type: precision_at_10 value: 4.229 - type: precision_at_100 value: 0.8170000000000001 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 8.126 - type: precision_at_5 value: 6.493 - type: recall_at_1 value: 10.495000000000001 - type: recall_at_10 value: 30.531000000000002 - type: recall_at_100 value: 55.535999999999994 - type: recall_at_1000 value: 77.095 - type: recall_at_3 value: 17.805 - type: recall_at_5 value: 24.041 - task: type: Retrieval dataset: type: cqadupstack/physics name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.826999999999998 - type: map_at_10 value: 34.957 - 
type: map_at_100 value: 36.314 - type: map_at_1000 value: 36.437999999999995 - type: map_at_3 value: 31.328 - type: map_at_5 value: 33.254 - type: mrr_at_1 value: 31.375999999999998 - type: mrr_at_10 value: 40.493 - type: mrr_at_100 value: 41.410000000000004 - type: mrr_at_1000 value: 41.46 - type: mrr_at_3 value: 37.504 - type: mrr_at_5 value: 39.212 - type: ndcg_at_1 value: 31.375999999999998 - type: ndcg_at_10 value: 41.285 - type: ndcg_at_100 value: 46.996 - type: ndcg_at_1000 value: 49.207 - type: ndcg_at_3 value: 35.297 - type: ndcg_at_5 value: 37.999 - type: precision_at_1 value: 31.375999999999998 - type: precision_at_10 value: 7.960000000000001 - type: precision_at_100 value: 1.277 - type: precision_at_1000 value: 0.165 - type: precision_at_3 value: 17.132 - type: precision_at_5 value: 12.57 - type: recall_at_1 value: 24.826999999999998 - type: recall_at_10 value: 54.678000000000004 - type: recall_at_100 value: 78.849 - type: recall_at_1000 value: 93.36 - type: recall_at_3 value: 37.775 - type: recall_at_5 value: 44.993 - task: type: Retrieval dataset: type: cqadupstack/programmers name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 21.195 - type: map_at_10 value: 29.003 - type: map_at_100 value: 30.379 - type: map_at_1000 value: 30.508000000000003 - type: map_at_3 value: 25.927 - type: map_at_5 value: 27.784 - type: mrr_at_1 value: 26.941 - type: mrr_at_10 value: 34.305 - type: mrr_at_100 value: 35.32 - type: mrr_at_1000 value: 35.386 - type: mrr_at_3 value: 31.735000000000003 - type: mrr_at_5 value: 33.213 - type: ndcg_at_1 value: 26.941 - type: ndcg_at_10 value: 34.31 - type: ndcg_at_100 value: 40.242 - type: ndcg_at_1000 value: 42.9 - type: ndcg_at_3 value: 29.198 - type: ndcg_at_5 value: 31.739 - type: precision_at_1 value: 26.941 - type: precision_at_10 value: 6.507000000000001 - type: precision_at_100 value: 1.124 - type: precision_at_1000 value: 0.154 - type: precision_at_3 value: 13.850999999999999 - type: precision_at_5 value: 10.411 - type: recall_at_1 value: 21.195 - type: recall_at_10 value: 45.083 - type: recall_at_100 value: 70.14200000000001 - type: recall_at_1000 value: 88.34100000000001 - type: recall_at_3 value: 31.175000000000004 - type: recall_at_5 value: 37.625 - task: type: Retrieval dataset: type: mteb/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.293916666666664 - type: map_at_10 value: 28.353666666666665 - type: map_at_100 value: 29.524333333333335 - type: map_at_1000 value: 29.652583333333332 - type: map_at_3 value: 25.727916666666665 - type: map_at_5 value: 27.170833333333334 - type: mrr_at_1 value: 25.21375 - type: mrr_at_10 value: 32.67591666666667 - type: mrr_at_100 value: 33.56233333333334 - type: mrr_at_1000 value: 33.63283333333334 - type: mrr_at_3 value: 30.415333333333333 - type: mrr_at_5 value: 31.679583333333333 - type: ndcg_at_1 value: 25.21375 - type: ndcg_at_10 value: 33.37108333333333 - type: ndcg_at_100 value: 38.57725 - type: ndcg_at_1000 value: 41.246833333333335 - type: ndcg_at_3 value: 28.98183333333334 - type: ndcg_at_5 value: 30.986083333333337 - type: precision_at_1 value: 25.21375 - type: precision_at_10 value: 6.200583333333333 - type: precision_at_100 value: 1.0527499999999999 - type: precision_at_1000 value: 0.14675000000000002 - type: precision_at_3 value: 13.808333333333334 - type: precision_at_5 value: 9.976416666666669 - type: recall_at_1 value: 20.293916666666664 - type: 
recall_at_10 value: 43.90833333333333 - type: recall_at_100 value: 67.26575 - type: recall_at_1000 value: 86.18591666666666 - type: recall_at_3 value: 31.387416666666667 - type: recall_at_5 value: 36.73883333333333 - task: type: Retrieval dataset: type: cqadupstack/stats name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 15.043000000000001 - type: map_at_10 value: 22.203 - type: map_at_100 value: 23.254 - type: map_at_1000 value: 23.362 - type: map_at_3 value: 20.157 - type: map_at_5 value: 21.201999999999998 - type: mrr_at_1 value: 17.485 - type: mrr_at_10 value: 24.729 - type: mrr_at_100 value: 25.715 - type: mrr_at_1000 value: 25.796999999999997 - type: mrr_at_3 value: 22.725 - type: mrr_at_5 value: 23.829 - type: ndcg_at_1 value: 17.485 - type: ndcg_at_10 value: 26.31 - type: ndcg_at_100 value: 31.722 - type: ndcg_at_1000 value: 34.621 - type: ndcg_at_3 value: 22.414 - type: ndcg_at_5 value: 24.125 - type: precision_at_1 value: 17.485 - type: precision_at_10 value: 4.601 - type: precision_at_100 value: 0.7849999999999999 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 10.327 - type: precision_at_5 value: 7.331 - type: recall_at_1 value: 15.043000000000001 - type: recall_at_10 value: 36.361 - type: recall_at_100 value: 61.63999999999999 - type: recall_at_1000 value: 83.443 - type: recall_at_3 value: 25.591 - type: recall_at_5 value: 29.808 - task: type: Retrieval dataset: type: cqadupstack/tex name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 11.018 - type: map_at_10 value: 15.886 - type: map_at_100 value: 16.830000000000002 - type: map_at_1000 value: 16.956 - type: map_at_3 value: 14.222000000000001 - type: map_at_5 value: 15.110999999999999 - type: mrr_at_1 value: 14.625 - type: mrr_at_10 value: 19.677 - type: mrr_at_100 value: 20.532 - type: mrr_at_1000 value: 20.622 - type: mrr_at_3 value: 17.992 - type: mrr_at_5 value: 18.909000000000002 - type: ndcg_at_1 value: 14.625 - type: ndcg_at_10 value: 19.414 - type: ndcg_at_100 value: 24.152 - type: ndcg_at_1000 value: 27.433000000000003 - type: ndcg_at_3 value: 16.495 - type: ndcg_at_5 value: 17.742 - type: precision_at_1 value: 14.625 - type: precision_at_10 value: 3.833 - type: precision_at_100 value: 0.744 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 8.213 - type: precision_at_5 value: 6.036 - type: recall_at_1 value: 11.018 - type: recall_at_10 value: 26.346000000000004 - type: recall_at_100 value: 47.99 - type: recall_at_1000 value: 72.002 - type: recall_at_3 value: 17.762 - type: recall_at_5 value: 21.249000000000002 - task: type: Retrieval dataset: type: cqadupstack/unix name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.053 - type: map_at_10 value: 27.950000000000003 - type: map_at_100 value: 29.207 - type: map_at_1000 value: 29.309 - type: map_at_3 value: 25.612000000000002 - type: map_at_5 value: 26.793 - type: mrr_at_1 value: 24.813 - type: mrr_at_10 value: 32.297 - type: mrr_at_100 value: 33.312999999999995 - type: mrr_at_1000 value: 33.379999999999995 - type: mrr_at_3 value: 30.239 - type: mrr_at_5 value: 31.368000000000002 - type: ndcg_at_1 value: 24.813 - type: ndcg_at_10 value: 32.722 - type: ndcg_at_100 value: 38.603 - type: ndcg_at_1000 value: 41.11 - type: ndcg_at_3 value: 28.74 - type: ndcg_at_5 value: 30.341 - type: precision_at_1 value: 24.813 - type: 
precision_at_10 value: 5.83 - type: precision_at_100 value: 0.9860000000000001 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 13.433 - type: precision_at_5 value: 9.384 - type: recall_at_1 value: 20.053 - type: recall_at_10 value: 42.867 - type: recall_at_100 value: 68.90899999999999 - type: recall_at_1000 value: 87.031 - type: recall_at_3 value: 31.606 - type: recall_at_5 value: 35.988 - task: type: Retrieval dataset: type: cqadupstack/webmasters name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 20.696 - type: map_at_10 value: 29.741 - type: map_at_100 value: 30.958999999999996 - type: map_at_1000 value: 31.22 - type: map_at_3 value: 26.679000000000002 - type: map_at_5 value: 28.244999999999997 - type: mrr_at_1 value: 27.272999999999996 - type: mrr_at_10 value: 35.101 - type: mrr_at_100 value: 35.91 - type: mrr_at_1000 value: 35.987 - type: mrr_at_3 value: 32.378 - type: mrr_at_5 value: 33.732 - type: ndcg_at_1 value: 27.272999999999996 - type: ndcg_at_10 value: 36.136 - type: ndcg_at_100 value: 40.9 - type: ndcg_at_1000 value: 44.184 - type: ndcg_at_3 value: 31.123 - type: ndcg_at_5 value: 33.182 - type: precision_at_1 value: 27.272999999999996 - type: precision_at_10 value: 7.489999999999999 - type: precision_at_100 value: 1.506 - type: precision_at_1000 value: 0.24 - type: precision_at_3 value: 15.348999999999998 - type: precision_at_5 value: 11.344 - type: recall_at_1 value: 20.696 - type: recall_at_10 value: 48.041 - type: recall_at_100 value: 71.316 - type: recall_at_1000 value: 92.794 - type: recall_at_3 value: 32.983000000000004 - type: recall_at_5 value: 38.627 - task: type: Retrieval dataset: type: cqadupstack/wordpress name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 13.567000000000002 - type: map_at_10 value: 19.326 - type: map_at_100 value: 20.164 - type: map_at_1000 value: 20.283 - type: map_at_3 value: 17.613 - type: map_at_5 value: 18.519 - type: mrr_at_1 value: 15.157000000000002 - type: mrr_at_10 value: 21.38 - type: mrr_at_100 value: 22.163 - type: mrr_at_1000 value: 22.261 - type: mrr_at_3 value: 19.624 - type: mrr_at_5 value: 20.548 - type: ndcg_at_1 value: 15.157000000000002 - type: ndcg_at_10 value: 23.044999999999998 - type: ndcg_at_100 value: 27.586 - type: ndcg_at_1000 value: 30.848 - type: ndcg_at_3 value: 19.506999999999998 - type: ndcg_at_5 value: 21.101 - type: precision_at_1 value: 15.157000000000002 - type: precision_at_10 value: 3.7150000000000003 - type: precision_at_100 value: 0.651 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 8.626000000000001 - type: precision_at_5 value: 6.026 - type: recall_at_1 value: 13.567000000000002 - type: recall_at_10 value: 32.646 - type: recall_at_100 value: 54.225 - type: recall_at_1000 value: 79.12700000000001 - type: recall_at_3 value: 22.994 - type: recall_at_5 value: 26.912999999999997 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 7.26 - type: map_at_10 value: 15.109 - type: map_at_100 value: 17.155 - type: map_at_1000 value: 17.354 - type: map_at_3 value: 11.772 - type: map_at_5 value: 13.542000000000002 - type: mrr_at_1 value: 16.678 - type: mrr_at_10 value: 29.470000000000002 - type: mrr_at_100 value: 30.676 - type: mrr_at_1000 value: 30.714999999999996 - type: mrr_at_3 value: 25.44 - type: mrr_at_5 value: 27.792 - type: ndcg_at_1 
value: 16.678 - type: ndcg_at_10 value: 22.967000000000002 - type: ndcg_at_100 value: 31.253999999999998 - type: ndcg_at_1000 value: 34.748000000000005 - type: ndcg_at_3 value: 17.058 - type: ndcg_at_5 value: 19.43 - type: precision_at_1 value: 16.678 - type: precision_at_10 value: 7.974 - type: precision_at_100 value: 1.6740000000000002 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 13.681 - type: precision_at_5 value: 11.322000000000001 - type: recall_at_1 value: 7.26 - type: recall_at_10 value: 30.407 - type: recall_at_100 value: 59.073 - type: recall_at_1000 value: 78.58800000000001 - type: recall_at_3 value: 16.493 - type: recall_at_5 value: 22.453 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 5.176 - type: map_at_10 value: 11.951 - type: map_at_100 value: 16.208 - type: map_at_1000 value: 17.067 - type: map_at_3 value: 8.669 - type: map_at_5 value: 10.061 - type: mrr_at_1 value: 42.5 - type: mrr_at_10 value: 54.312000000000005 - type: mrr_at_100 value: 54.925999999999995 - type: mrr_at_1000 value: 54.959 - type: mrr_at_3 value: 52.292 - type: mrr_at_5 value: 53.554 - type: ndcg_at_1 value: 31.374999999999996 - type: ndcg_at_10 value: 25.480999999999998 - type: ndcg_at_100 value: 30.018 - type: ndcg_at_1000 value: 36.103 - type: ndcg_at_3 value: 27.712999999999997 - type: ndcg_at_5 value: 26.415 - type: precision_at_1 value: 42.5 - type: precision_at_10 value: 20.549999999999997 - type: precision_at_100 value: 6.387 - type: precision_at_1000 value: 1.204 - type: precision_at_3 value: 32.917 - type: precision_at_5 value: 27.400000000000002 - type: recall_at_1 value: 5.176 - type: recall_at_10 value: 18.335 - type: recall_at_100 value: 38.629999999999995 - type: recall_at_1000 value: 59.74699999999999 - type: recall_at_3 value: 10.36 - type: recall_at_5 value: 13.413 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.885 - type: f1 value: 44.330258440550644 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 25.211 - type: map_at_10 value: 37.946999999999996 - type: map_at_100 value: 38.852 - type: map_at_1000 value: 38.896 - type: map_at_3 value: 34.445 - type: map_at_5 value: 36.451 - type: mrr_at_1 value: 27.453 - type: mrr_at_10 value: 40.505 - type: mrr_at_100 value: 41.342 - type: mrr_at_1000 value: 41.377 - type: mrr_at_3 value: 36.971 - type: mrr_at_5 value: 39.013999999999996 - type: ndcg_at_1 value: 27.453 - type: ndcg_at_10 value: 45.106 - type: ndcg_at_100 value: 49.357 - type: ndcg_at_1000 value: 50.546 - type: ndcg_at_3 value: 38.063 - type: ndcg_at_5 value: 41.603 - type: precision_at_1 value: 27.453 - type: precision_at_10 value: 7.136000000000001 - type: precision_at_100 value: 0.9390000000000001 - type: precision_at_1000 value: 0.106 - type: precision_at_3 value: 16.677 - type: precision_at_5 value: 11.899 - type: recall_at_1 value: 25.211 - type: recall_at_10 value: 64.964 - type: recall_at_100 value: 84.23 - type: recall_at_1000 value: 93.307 - type: recall_at_3 value: 45.936 - type: recall_at_5 value: 54.489 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 11.434 - type: map_at_10 value: 20.325 - type: map_at_100 
value: 22.267 - type: map_at_1000 value: 22.46 - type: map_at_3 value: 16.864 - type: map_at_5 value: 18.584999999999997 - type: mrr_at_1 value: 24.074 - type: mrr_at_10 value: 32.487 - type: mrr_at_100 value: 33.595000000000006 - type: mrr_at_1000 value: 33.649 - type: mrr_at_3 value: 29.578 - type: mrr_at_5 value: 31.044 - type: ndcg_at_1 value: 24.074 - type: ndcg_at_10 value: 27.244 - type: ndcg_at_100 value: 35.244 - type: ndcg_at_1000 value: 38.964999999999996 - type: ndcg_at_3 value: 22.709 - type: ndcg_at_5 value: 24.114 - type: precision_at_1 value: 24.074 - type: precision_at_10 value: 8.21 - type: precision_at_100 value: 1.627 - type: precision_at_1000 value: 0.22999999999999998 - type: precision_at_3 value: 15.741 - type: precision_at_5 value: 12.037 - type: recall_at_1 value: 11.434 - type: recall_at_10 value: 35.423 - type: recall_at_100 value: 66.056 - type: recall_at_1000 value: 88.63799999999999 - type: recall_at_3 value: 20.968 - type: recall_at_5 value: 26.540999999999997 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 30.506 - type: map_at_10 value: 44.864 - type: map_at_100 value: 46.016 - type: map_at_1000 value: 46.1 - type: map_at_3 value: 41.518 - type: map_at_5 value: 43.461 - type: mrr_at_1 value: 61.013 - type: mrr_at_10 value: 69.918 - type: mrr_at_100 value: 70.327 - type: mrr_at_1000 value: 70.342 - type: mrr_at_3 value: 68.226 - type: mrr_at_5 value: 69.273 - type: ndcg_at_1 value: 61.013 - type: ndcg_at_10 value: 54.539 - type: ndcg_at_100 value: 58.819 - type: ndcg_at_1000 value: 60.473 - type: ndcg_at_3 value: 49.27 - type: ndcg_at_5 value: 51.993 - type: precision_at_1 value: 61.013 - type: precision_at_10 value: 11.757 - type: precision_at_100 value: 1.5110000000000001 - type: precision_at_1000 value: 0.173 - type: precision_at_3 value: 31.339 - type: precision_at_5 value: 20.959 - type: recall_at_1 value: 30.506 - type: recall_at_10 value: 58.785 - type: recall_at_100 value: 75.55 - type: recall_at_1000 value: 86.455 - type: recall_at_3 value: 47.009 - type: recall_at_5 value: 52.397000000000006 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 77.954 - type: ap value: 73.06067313842448 - type: f1 value: 77.6469083443121 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 7.7170000000000005 - type: map_at_10 value: 14.696000000000002 - type: map_at_100 value: 15.973 - type: map_at_1000 value: 16.079 - type: map_at_3 value: 12.059000000000001 - type: map_at_5 value: 13.478000000000002 - type: mrr_at_1 value: 7.9079999999999995 - type: mrr_at_10 value: 14.972 - type: mrr_at_100 value: 16.235 - type: mrr_at_1000 value: 16.337 - type: mrr_at_3 value: 12.323 - type: mrr_at_5 value: 13.751 - type: ndcg_at_1 value: 7.9079999999999995 - type: ndcg_at_10 value: 19.131 - type: ndcg_at_100 value: 25.868000000000002 - type: ndcg_at_1000 value: 28.823999999999998 - type: ndcg_at_3 value: 13.611 - type: ndcg_at_5 value: 16.178 - type: precision_at_1 value: 7.9079999999999995 - type: precision_at_10 value: 3.4259999999999997 - type: precision_at_100 value: 0.687 - type: precision_at_1000 value: 0.094 - type: precision_at_3 value: 6.103 - type: precision_at_5 value: 4.951 - type: recall_at_1 value: 7.7170000000000005 - type: recall_at_10 
value: 33.147999999999996 - type: recall_at_100 value: 65.55199999999999 - type: recall_at_1000 value: 88.748 - type: recall_at_3 value: 17.863 - type: recall_at_5 value: 24.083 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 95.48335613315093 - type: f1 value: 95.18813547597892 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 82.83857729138167 - type: f1 value: 63.61922697275075 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 76.65433759246805 - type: f1 value: 73.24385243140212 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 79.98655010087425 - type: f1 value: 79.3880305174127 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 30.109152457220606 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 26.928355856501696 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 29.73337424086118 - type: mrr value: 30.753319352871074 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 4.303 - type: map_at_10 value: 9.653 - type: map_at_100 value: 11.952 - type: map_at_1000 value: 13.126999999999999 - type: map_at_3 value: 6.976 - type: map_at_5 value: 8.292 - type: mrr_at_1 value: 35.913000000000004 - type: mrr_at_10 value: 45.827 - type: mrr_at_100 value: 46.587 - type: mrr_at_1000 value: 46.635 - type: mrr_at_3 value: 43.344 - type: mrr_at_5 value: 44.876 - type: ndcg_at_1 value: 34.056 - type: ndcg_at_10 value: 27.161 - type: ndcg_at_100 value: 25.552999999999997 - type: ndcg_at_1000 value: 34.671 - type: ndcg_at_3 value: 31.267 - type: ndcg_at_5 value: 29.896 - type: precision_at_1 value: 35.604 - type: precision_at_10 value: 19.969 - type: precision_at_100 value: 6.115 - type: precision_at_1000 value: 1.892 - type: precision_at_3 value: 29.825000000000003 - type: precision_at_5 value: 26.253999999999998 - type: recall_at_1 value: 4.303 - type: recall_at_10 value: 14.033999999999999 - type: recall_at_100 value: 28.250999999999998 - type: recall_at_1000 value: 58.751 - type: recall_at_3 value: 8.257 - type: recall_at_5 value: 10.761999999999999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 14.668000000000001 - type: map_at_10 value: 26.593 - type: map_at_100 value: 28.094 - type: map_at_1000 value: 28.155 - type: map_at_3 value: 22.054000000000002 - type: map_at_5 value: 24.583 - type: mrr_at_1 value: 17.063 
- type: mrr_at_10 value: 29.061999999999998 - type: mrr_at_100 value: 30.281000000000002 - type: mrr_at_1000 value: 30.325000000000003 - type: mrr_at_3 value: 24.754 - type: mrr_at_5 value: 27.281 - type: ndcg_at_1 value: 17.034 - type: ndcg_at_10 value: 34.157 - type: ndcg_at_100 value: 40.988 - type: ndcg_at_1000 value: 42.382999999999996 - type: ndcg_at_3 value: 25.076999999999998 - type: ndcg_at_5 value: 29.572 - type: precision_at_1 value: 17.034 - type: precision_at_10 value: 6.561 - type: precision_at_100 value: 1.04 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 12.167 - type: precision_at_5 value: 9.809 - type: recall_at_1 value: 14.668000000000001 - type: recall_at_10 value: 55.291999999999994 - type: recall_at_100 value: 85.82 - type: recall_at_1000 value: 96.164 - type: recall_at_3 value: 31.208999999999996 - type: recall_at_5 value: 41.766 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 66.20899999999999 - type: map_at_10 value: 80.024 - type: map_at_100 value: 80.73 - type: map_at_1000 value: 80.753 - type: map_at_3 value: 76.82900000000001 - type: map_at_5 value: 78.866 - type: mrr_at_1 value: 76.25 - type: mrr_at_10 value: 83.382 - type: mrr_at_100 value: 83.535 - type: mrr_at_1000 value: 83.538 - type: mrr_at_3 value: 82.013 - type: mrr_at_5 value: 82.931 - type: ndcg_at_1 value: 76.25999999999999 - type: ndcg_at_10 value: 84.397 - type: ndcg_at_100 value: 85.988 - type: ndcg_at_1000 value: 86.18299999999999 - type: ndcg_at_3 value: 80.778 - type: ndcg_at_5 value: 82.801 - type: precision_at_1 value: 76.25999999999999 - type: precision_at_10 value: 12.952 - type: precision_at_100 value: 1.509 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 35.323 - type: precision_at_5 value: 23.524 - type: recall_at_1 value: 66.20899999999999 - type: recall_at_10 value: 93.108 - type: recall_at_100 value: 98.817 - type: recall_at_1000 value: 99.857 - type: recall_at_3 value: 83.031 - type: recall_at_5 value: 88.441 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 41.82535503883439 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 62.077510084458055 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 3.383 - type: map_at_10 value: 8.839 - type: map_at_100 value: 10.876 - type: map_at_1000 value: 11.201 - type: map_at_3 value: 6.361 - type: map_at_5 value: 7.536 - type: mrr_at_1 value: 16.6 - type: mrr_at_10 value: 26.003999999999998 - type: mrr_at_100 value: 27.271 - type: mrr_at_1000 value: 27.354 - type: mrr_at_3 value: 22.900000000000002 - type: mrr_at_5 value: 24.58 - type: ndcg_at_1 value: 16.6 - type: ndcg_at_10 value: 15.345 - type: ndcg_at_100 value: 23.659 - type: ndcg_at_1000 value: 29.537000000000003 - type: ndcg_at_3 value: 14.283999999999999 - type: ndcg_at_5 value: 12.509999999999998 - type: precision_at_1 value: 16.6 - type: precision_at_10 value: 8.17 - type: precision_at_100 value: 2.028 - type: precision_at_1000 value: 0.34299999999999997 - type: precision_at_3 value: 13.633000000000001 - type: precision_at_5 value: 11.16 - type: 
recall_at_1 value: 3.383 - type: recall_at_10 value: 16.557 - type: recall_at_100 value: 41.123 - type: recall_at_1000 value: 69.67999999999999 - type: recall_at_3 value: 8.298 - type: recall_at_5 value: 11.322000000000001 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_spearman value: 75.55405115197729 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_spearman value: 67.65074099726466 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_spearman value: 83.89765011154986 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_spearman value: 76.97256789216159 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_spearman value: 83.80216382863031 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_spearman value: 81.90574806413879 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_spearman value: 85.58485422591949 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_spearman value: 65.92967262944444 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_spearman value: 80.41509666334721 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 77.81287769479543 - type: mrr value: 94.13409665860645 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 52.093999999999994 - type: map_at_10 value: 63.641999999999996 - type: map_at_100 value: 64.402 - type: map_at_1000 value: 64.416 - type: map_at_3 value: 60.878 - type: map_at_5 value: 62.778 - type: mrr_at_1 value: 55.333 - type: mrr_at_10 value: 65.139 - type: mrr_at_100 value: 65.75999999999999 - type: mrr_at_1000 value: 65.77199999999999 - type: mrr_at_3 value: 62.944 - type: mrr_at_5 value: 64.511 - type: ndcg_at_1 value: 55.333 - type: ndcg_at_10 value: 68.675 - type: ndcg_at_100 value: 71.794 - type: ndcg_at_1000 value: 72.18299999999999 - type: ndcg_at_3 value: 63.977 - type: ndcg_at_5 value: 66.866 - type: precision_at_1 value: 55.333 - type: precision_at_10 value: 9.232999999999999 - type: precision_at_100 value: 1.087 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.667 - type: precision_at_5 value: 17.0 - type: recall_at_1 value: 52.093999999999994 - type: recall_at_10 value: 82.506 - type: recall_at_100 value: 95.933 - type: recall_at_1000 value: 99.0 - type: recall_at_3 value: 70.078 - type: recall_at_5 
value: 77.35600000000001 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.7128712871287 - type: cos_sim_ap value: 91.30057039245253 - type: cos_sim_f1 value: 85.35480624056368 - type: cos_sim_precision value: 85.91691995947315 - type: cos_sim_recall value: 84.8 - type: dot_accuracy value: 99.35346534653465 - type: dot_ap value: 67.929309733355 - type: dot_f1 value: 63.94205897568547 - type: dot_precision value: 66.2379421221865 - type: dot_recall value: 61.8 - type: euclidean_accuracy value: 99.69009900990099 - type: euclidean_ap value: 89.62179420600057 - type: euclidean_f1 value: 83.93039918116682 - type: euclidean_precision value: 85.9538784067086 - type: euclidean_recall value: 82.0 - type: manhattan_accuracy value: 99.70990099009902 - type: manhattan_ap value: 90.29611631593602 - type: manhattan_f1 value: 84.81729284611424 - type: manhattan_precision value: 87.38069989395547 - type: manhattan_recall value: 82.39999999999999 - type: max_accuracy value: 99.7128712871287 - type: max_ap value: 91.30057039245253 - type: max_f1 value: 85.35480624056368 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 67.33611278831218 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 34.504437768624214 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.80014786474266 - type: mrr value: 50.468909154570916 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.677648147466808 - type: cos_sim_spearman value: 30.191761045901888 - type: dot_pearson value: 23.16759191245942 - type: dot_spearman value: 23.186942570638486 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.214 - type: map_at_10 value: 1.2309999999999999 - type: map_at_100 value: 5.867 - type: map_at_1000 value: 14.671999999999999 - type: map_at_3 value: 0.519 - type: map_at_5 value: 0.764 - type: mrr_at_1 value: 82.0 - type: mrr_at_10 value: 87.519 - type: mrr_at_100 value: 87.519 - type: mrr_at_1000 value: 87.536 - type: mrr_at_3 value: 86.333 - type: mrr_at_5 value: 87.233 - type: ndcg_at_1 value: 77.0 - type: ndcg_at_10 value: 55.665 - type: ndcg_at_100 value: 39.410000000000004 - type: ndcg_at_1000 value: 37.21 - type: ndcg_at_3 value: 65.263 - type: ndcg_at_5 value: 61.424 - type: precision_at_1 value: 82.0 - type: precision_at_10 value: 55.400000000000006 - type: precision_at_100 value: 39.04 - type: precision_at_1000 value: 16.788 - type: precision_at_3 value: 67.333 - type: precision_at_5 value: 62.8 - type: recall_at_1 value: 0.214 - type: recall_at_10 value: 1.4200000000000002 - type: recall_at_100 value: 9.231 - type: recall_at_1000 value: 35.136 - type: recall_at_3 value: 0.544 - type: 
recall_at_5 value: 0.832 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 0.41000000000000003 - type: map_at_10 value: 2.32 - type: map_at_100 value: 4.077 - type: map_at_1000 value: 4.9430000000000005 - type: map_at_3 value: 1.087 - type: map_at_5 value: 1.466 - type: mrr_at_1 value: 6.122 - type: mrr_at_10 value: 13.999 - type: mrr_at_100 value: 16.524 - type: mrr_at_1000 value: 16.567999999999998 - type: mrr_at_3 value: 11.224 - type: mrr_at_5 value: 13.163 - type: ndcg_at_1 value: 5.102 - type: ndcg_at_10 value: 6.542000000000001 - type: ndcg_at_100 value: 14.127 - type: ndcg_at_1000 value: 24.396 - type: ndcg_at_3 value: 5.653 - type: ndcg_at_5 value: 5.5649999999999995 - type: precision_at_1 value: 6.122 - type: precision_at_10 value: 7.142999999999999 - type: precision_at_100 value: 3.51 - type: precision_at_1000 value: 0.9860000000000001 - type: precision_at_3 value: 6.802999999999999 - type: precision_at_5 value: 6.938999999999999 - type: recall_at_1 value: 0.41000000000000003 - type: recall_at_10 value: 5.627 - type: recall_at_100 value: 23.121 - type: recall_at_1000 value: 54.626 - type: recall_at_3 value: 1.763 - type: recall_at_5 value: 3.013 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 70.71119999999999 - type: ap value: 15.1342268718371 - type: f1 value: 55.043262693594855 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.89983022071308 - type: f1 value: 61.13086468149106 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 30.264802332456515 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 84.46086904690947 - type: cos_sim_ap value: 68.76039123104324 - type: cos_sim_f1 value: 63.002224839680665 - type: cos_sim_precision value: 62.503245910153204 - type: cos_sim_recall value: 63.50923482849604 - type: dot_accuracy value: 80.07391071109257 - type: dot_ap value: 53.43322643579626 - type: dot_f1 value: 52.6850065983149 - type: dot_precision value: 42.81471704339218 - type: dot_recall value: 68.46965699208444 - type: euclidean_accuracy value: 84.2701317279609 - type: euclidean_ap value: 67.55078414631596 - type: euclidean_f1 value: 62.90723537877797 - type: euclidean_precision value: 62.392940565792884 - type: euclidean_recall value: 63.43007915567283 - type: manhattan_accuracy value: 84.22244739822375 - type: manhattan_ap value: 67.92488847948273 - type: manhattan_f1 value: 62.99132210311383 - type: manhattan_precision value: 60.99851705388038 - type: manhattan_recall value: 65.11873350923483 - type: max_accuracy value: 84.46086904690947 - type: max_ap value: 68.76039123104324 - type: max_f1 value: 63.002224839680665 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default 
split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 87.71296619707377 - type: cos_sim_ap value: 82.76174215711472 - type: cos_sim_f1 value: 75.73585592141168 - type: cos_sim_precision value: 71.79416430985721 - type: cos_sim_recall value: 80.1355097012627 - type: dot_accuracy value: 85.62502425583111 - type: dot_ap value: 77.50549495030725 - type: dot_f1 value: 71.47900863425035 - type: dot_precision value: 65.4587361546834 - type: dot_recall value: 78.71881736987989 - type: euclidean_accuracy value: 87.12694531765437 - type: euclidean_ap value: 81.63583409712018 - type: euclidean_f1 value: 74.50966015324268 - type: euclidean_precision value: 71.11764294212331 - type: euclidean_recall value: 78.24145364952264 - type: manhattan_accuracy value: 87.35009896379088 - type: manhattan_ap value: 82.20417545366242 - type: manhattan_f1 value: 74.84157622550805 - type: manhattan_precision value: 71.00898410504493 - type: manhattan_recall value: 79.11148752694795 - type: max_accuracy value: 87.71296619707377 - type: max_ap value: 82.76174215711472 - type: max_f1 value: 75.73585592141168 --- # LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders > LLM2Vec is a simple recipe to convert decoder-only LLMs into text encoders. It consists of 3 simple steps: 1) enabling bidirectional attention, 2) masked next token prediction, and 3) unsupervised contrastive learning. The model can be further fine-tuned to achieve state-of-the-art performance. - **Repository:** https://github.com/McGill-NLP/llm2vec - **Paper:** https://arxiv.org/abs/2404.05961 ## Installation ```bash pip install llm2vec ``` ## Usage ```python from llm2vec import LLM2Vec import torch from transformers import AutoTokenizer, AutoModel, AutoConfig from peft import PeftModel # Loading base Mistral model, along with custom code that enables bidirectional connections in decoder-only LLMs. MNTP LoRA weights are merged into the base model. tokenizer = AutoTokenizer.from_pretrained( "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp" ) config = AutoConfig.from_pretrained( "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True ) model = AutoModel.from_pretrained( "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", trust_remote_code=True, config=config, torch_dtype=torch.bfloat16, device_map="cuda" if torch.cuda.is_available() else "cpu", ) model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp", ) model = model.merge_and_unload() # This can take several minutes on cpu # Loading unsupervised SimCSE model. This loads the trained LoRA weights on top of MNTP model. Hence the final weights are -- Base model + MNTP (LoRA) + SimCSE (LoRA). model = PeftModel.from_pretrained( model, "McGill-NLP/LLM2Vec-Mistral-7B-Instruct-v2-mntp-unsup-simcse" ) # Wrapper for encoding and pooling operations l2v = LLM2Vec(model, tokenizer, pooling_mode="mean", max_length=512) # Encoding queries using instructions instruction = ( "Given a web search query, retrieve relevant passages that answer the query:" ) queries = [ [instruction, "how much protein should a female eat"], [instruction, "summit define"], ] q_reps = l2v.encode(queries) # Encoding documents. Instruction are not required for documents documents = [ "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. 
Check out the chart below to see how much protein you should be eating each day.", "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.", ] d_reps = l2v.encode(documents) # Compute cosine similarity q_reps_norm = torch.nn.functional.normalize(q_reps, p=2, dim=1) d_reps_norm = torch.nn.functional.normalize(d_reps, p=2, dim=1) cos_sim = torch.mm(q_reps_norm, d_reps_norm.transpose(0, 1)) print(cos_sim) """ tensor([[0.6175, 0.2535], [0.2298, 0.5792]]) """ ``` ## Questions If you have any questions about the code, feel free to email Parishad (`[email protected]`) and Vaibhav (`[email protected]`).
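As a quick follow-up to the similarity matrix printed above (not part of the original example), the highest-scoring document per query can be read off directly:

```python
# Rank documents per query using cos_sim from the example above.
best_doc = cos_sim.argmax(dim=1)
for i, j in enumerate(best_doc.tolist()):
    print(f"Query {i} -> Document {j} (score {cos_sim[i, j]:.4f})")
```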
nes470/wiki-tfidf-logreg-qa-2
nes470
2024-05-13T20:02:45Z
866
0
transformers
[ "transformers", "pytorch", "QA-umd-quizbowl", "question-answering", "custom_code", "arxiv:1910.09700", "license:mit", "region:us" ]
question-answering
2024-05-13T19:54:37Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gglabs/TinyLM-Chat-0611-7-epoch
gglabs
2024-06-11T16:56:26Z
866
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-11T14:14:28Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/tinyllama-chat-bnb-4bit --- # Uploaded model - **Developed by:** gglabs - **License:** apache-2.0 - **Finetuned from model :** unsloth/tinyllama-chat-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Isaak-Carter/JOSIEv4o-8b-stage1-beta1-Q4_k_s-gguf
Isaak-Carter
2024-06-24T22:39:52Z
866
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "llama-cpp", "gguf-my-repo", "en", "base_model:Isaak-Carter/JOSIEv4o-8b-stage1-beta1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-24T22:39:30Z
--- base_model: Isaak-Carter/JOSIEv4o-8b-stage1-beta1 language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft - llama-cpp - gguf-my-repo --- # Isaak-Carter/JOSIEv4o-8b-stage1-beta1-Q4_K_S-GGUF This model was converted to GGUF format from [`Isaak-Carter/JOSIEv4o-8b-stage1-beta1`](https://huggingface.co/Isaak-Carter/JOSIEv4o-8b-stage1-beta1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Isaak-Carter/JOSIEv4o-8b-stage1-beta1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta1-q4_k_s.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta1-q4_k_s.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta1-q4_k_s.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Isaak-Carter/JOSIEv4o-8b-stage1-beta1-Q4_K_S-GGUF --hf-file josiev4o-8b-stage1-beta1-q4_k_s.gguf -c 2048 ```
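A minimal llama-cpp-python sketch (not shown in the card above; it assumes the GGUF file named above has already been downloaded to the current directory):

```python
# Minimal llama-cpp-python usage sketch.
# Assumption: josiev4o-8b-stage1-beta1-q4_k_s.gguf (the file referenced above)
# sits in the current working directory.
from llama_cpp import Llama

llm = Llama(model_path="./josiev4o-8b-stage1-beta1-q4_k_s.gguf", n_ctx=2048)
output = llm("The meaning to life and the universe is", max_tokens=128)
print(output["choices"][0]["text"])
```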
AdamOswald1/Ultima-Waifu-Diffusion
AdamOswald1
2023-05-16T09:34:03Z
865
13
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "en", "dataset:Guizmus/AnimeChanStyle", "dataset:skytnt/fbanimehq", "dataset:skytnt/anime-segmentation", "dataset:Nerfgun3/bad_prompt", "dataset:Nerfgun3/shatter_style", "dataset:Nerfgun3/ouroboros_embeddings", "dataset:cattoroboto/waifudiffusion-marine-textual-inversion", "dataset:waifu-research-department/regularization", "dataset:waifu-research-department/embeddings", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-01T13:57:20Z
--- language: - en tags: - stable-diffusion - text-to-image inference: true license: creativeml-openrail-m datasets: - Guizmus/AnimeChanStyle - skytnt/fbanimehq - skytnt/anime-segmentation - Nerfgun3/bad_prompt - Nerfgun3/shatter_style - Nerfgun3/ouroboros_embeddings - cattoroboto/waifudiffusion-marine-textual-inversion - waifu-research-department/regularization - waifu-research-department/embeddings library_name: diffusers pipeline_tag: text-to-image ---
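The card above ships only metadata. A minimal diffusers sketch (an assumption, not from the card: the repository loads with the standard `StableDiffusionPipeline`, as its tags indicate; the prompt is purely illustrative):

```python
# Minimal text-to-image sketch; the prompt below is only an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "AdamOswald1/Ultima-Waifu-Diffusion", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, masterpiece, best quality, looking at viewer").images[0]
image.save("sample.png")
```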
zekun-li/geolm-base-toponym-recognition
zekun-li
2023-09-07T15:58:01Z
865
3
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "toponym detection", "language model", "geospatial understanding", "geolm", "en", "dataset:GeoWebNews", "arxiv:1810.12368", "license:cc-by-nc-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-15T04:54:46Z
--- language: - en thumbnail: url to a thumbnail used in social sharing tags: - toponym detection - language model - geospatial understanding - geolm license: cc-by-nc-2.0 datasets: - GeoWebNews metrics: - f1 pipeline_tag: token-classification widget: - text: >- Minneapolis, officially the City of Minneapolis, is a city in the state of Minnesota and the county seat of Hennepin County. As of the 2020 census the population was 429,954, making it the largest city in Minnesota and the 46th-most-populous in the United States. Nicknamed the "City of Lakes", Minneapolis is abundant in water, with thirteen lakes, wetlands, the Mississippi River, creeks, and waterfalls. - text: >- Los Angeles, often referred to by its initials L.A., is the most populous city in California, the most populous U.S. state. It is the commercial, financial, and cultural center of Southern California. Los Angeles is the second-most populous city in the United States after New York City, with a population of roughly 3.9 million residents within the city limits as of 2020. --- # Model Card for GeoLM model for Toponym Recognition <!-- Provide a quick summary of what the model is/does. [Optional] --> A language model for detecting toponyms (i.e. place names) from sentences. We pretrain the GeoLM model on world-wide OpenStreetMap (OSM), WikiData and Wikipedia data, then fine-tune it for Toponym Recognition task on GeoWebNews dataset # Model Details ## Model Description <!-- Provide a longer summary of what this model is/does. --> Pretrain the GeoLM model on world-wide OpenStreetMap (OSM), WikiData and Wikipedia data, then fine-tune it for Toponym Recognition task on GeoWebNews dataset <!--- **Developed by:** UMN Knowledge Computing Lab & USC LUKA Lab --> - **Model type:** Language model for geospatial understanding - **Language(s) (NLP):** en - **License:** cc-by-nc-2.0 - **Parent Model:** https://huggingface.co/zekun-li/geolm-base-cased <!---- **Resources for more information:** Zekun Li (li002666[Shift+2]umn.edu) --> # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This is a fine-tuned GeoLM model for toponym detection task. The inputs are sentences and outputs are detected toponyms. <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> To use this model, please refer to the code below. * **Option 1:** Load weights to a BERT model (Same procedure as the demo on the right side panel) ```python import torch from transformers import AutoModelForTokenClassification, AutoTokenizer # Model name from Hugging Face model hub model_name = "zekun-li/geolm-base-toponym-recognition" # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForTokenClassification.from_pretrained(model_name) # Example input sentence input_sentence = "Minneapolis, officially the City of Minneapolis, is a city in the state of Minnesota and the county seat of Hennepin County." 
# Tokenize input sentence tokens = tokenizer.encode(input_sentence, return_tensors="pt") # Pass tokens through the model outputs = model(tokens) # Retrieve predicted labels for each token predicted_labels = torch.argmax(outputs.logits, dim=2) predicted_labels = predicted_labels.detach().cpu().numpy() # Decode predicted labels predicted_labels = [model.config.id2label[label] for label in predicted_labels[0]] # Print predicted labels print(predicted_labels) # ['O', 'B-Topo', 'O', 'O', 'O', 'O', 'O', 'B-Topo', 'O', 'O', 'O', 'O', 'O', 'O', # 'O', 'O', 'B-Topo', 'O', 'O', 'O', 'O', 'O', 'B-Topo', 'I-Topo', 'I-Topo', 'O', 'O', 'O'] ``` * **Option 2:** Load weights to a GeoLM model To appear soon # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> **GeoWebNews** (Credit to [Gritta et al.](https://arxiv.org/pdf/1810.12368.pdf)) Download link: https://github.com/milangritta/Pragmatic-Guide-to-Geoparsing-Evaluation/blob/master/data/GWN.xml ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> More information needed # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data & Metrics & Results ### Testing Data <!-- This should link to a Data Card if possible. --> More information needed ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> More information needed ### Results More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. # Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** More information needed **APA:** More information needed # Model Card Author [optional] <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. --> <!--- Zekun Li (li002666[Shift+2]umn.edu) -->
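To turn the token-level predictions from the Option 1 example into toponym strings, a small post-processing sketch (not part of the original card; it reuses `tokenizer`, `tokens`, and `predicted_labels` from that example and assumes the `B-Topo`/`I-Topo` label scheme shown there):

```python
# Group B-Topo / I-Topo token labels from the Option 1 example into toponym strings.
token_strings = tokenizer.convert_ids_to_tokens(tokens[0].tolist())
toponyms, current = [], []
for token, label in zip(token_strings, predicted_labels):
    if label == "B-Topo":
        if current:
            toponyms.append(tokenizer.convert_tokens_to_string(current))
        current = [token]
    elif label == "I-Topo" and current:
        current.append(token)
    else:
        if current:
            toponyms.append(tokenizer.convert_tokens_to_string(current))
        current = []
if current:
    toponyms.append(tokenizer.convert_tokens_to_string(current))

print(toponyms)
# Expected (roughly): ['Minneapolis', 'Minneapolis', 'Minnesota', 'Hennepin County']
```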
KennethTM/MiniLM-L6-danish-encoder
KennethTM
2024-06-26T20:05:26Z
865
0
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "sentence-similarity", "da", "dataset:squad", "dataset:eli5", "dataset:sentence-transformers/embedding-training-data", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2024-01-09T18:07:09Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity license: mit datasets: - squad - eli5 - sentence-transformers/embedding-training-data language: - da library_name: sentence-transformers --- *New version available, trained on more data and otherwise identical [KennethTM/MiniLM-L6-danish-encoder-v2](https://huggingface.co/KennethTM/MiniLM-L6-danish-encoder-v2)* # MiniLM-L6-danish-encoder This is a lightweight (~22 M parameters) [sentence-transformers](https://www.SBERT.net) model for Danish NLP: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. The maximum sequence length is 512 tokens. The model was not pre-trained from scratch but adapted from the English version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) with a [Danish tokenizer](https://huggingface.co/KennethTM/bert-base-uncased-danish). Trained on ELI5 and SQUAD data machine translated from English to Danish. # Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Kører der cykler på vejen?", "En panda løber på vejen.", "En mand kører hurtigt forbi på cykel."] model = SentenceTransformer('KennethTM/MiniLM-L6-danish-encoder') embeddings = model.encode(sentences) print(embeddings) ``` # Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Kører der cykler på vejen?", "En panda løber på vejen.", "En mand kører hurtigt forbi på cykel."] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('KennethTM/MiniLM-L6-danish-encoder') model = AutoModel.from_pretrained('KennethTM/MiniLM-L6-danish-encoder') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ```
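To compare the example sentences with the embeddings computed above, a short follow-up sketch (not part of the original card):

```python
# Cosine similarity between the example sentences.
# The embeddings are already L2-normalized, so a plain matrix product gives cosine scores.
similarity = sentence_embeddings @ sentence_embeddings.T
print(similarity)
```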
linoyts/huggy_v29
linoyts
2023-11-10T16:19:45Z
864
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2023-11-10T15:39:39Z
--- license: openrail++ base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: A webpage in the style of <s0><s1> tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - LinoyTsaban/huggy_v29 These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on the instance prompt "A webpage in the style of <s0><s1>" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
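A possible loading sketch (not part of the original card): it assumes the LoRA weights are stored in the repo under the default diffusers filename, and the learned `<s0><s1>` token embeddings additionally need to be loaded for the instance prompt to take full effect.

```python
# Sketch only: load the SDXL base model plus these LoRA weights.
# Assumption: the repo exposes LoRA weights in the default diffusers layout.
# The <s0><s1> textual-inversion embeddings must also be loaded for the
# instance prompt to behave as during training.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("linoyts/huggy_v29")
image = pipe("A webpage in the style of <s0><s1>").images[0]
image.save("huggy_webpage.png")
```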
TheBloke/neural-chat-7B-v3-3-GGUF
TheBloke
2023-12-12T09:51:45Z
864
15
transformers
[ "transformers", "gguf", "mistral", "base_model:Intel/neural-chat-7b-v3-3", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2023-12-12T09:47:17Z
--- base_model: Intel/neural-chat-7b-v3-3 inference: false license: apache-2.0 model_creator: Intel model_name: Neural Chat 7B V3-3 model_type: mistral prompt_template: '### System: {system_message} ### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Neural Chat 7B V3-3 - GGUF - Model creator: [Intel](https://huggingface.co/Intel) - Original model: [Neural Chat 7B V3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) <!-- description start --> ## Description This repo contains GGUF format model files for [Intel's Neural Chat 7B V3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF) * [Intel's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Intel/neural-chat-7b-v3-3) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: System-User-Assistant ``` ### System: {system_message} ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [neural-chat-7b-v3-3.Q2_K.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [neural-chat-7b-v3-3.Q3_K_S.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [neural-chat-7b-v3-3.Q3_K_M.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [neural-chat-7b-v3-3.Q3_K_L.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [neural-chat-7b-v3-3.Q4_0.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [neural-chat-7b-v3-3.Q4_K_S.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [neural-chat-7b-v3-3.Q4_K_M.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [neural-chat-7b-v3-3.Q5_0.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [neural-chat-7b-v3-3.Q5_K_S.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [neural-chat-7b-v3-3.Q5_K_M.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [neural-chat-7b-v3-3.Q6_K.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [neural-chat-7b-v3-3.Q8_0.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-3-GGUF/blob/main/neural-chat-7b-v3-3.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/neural-chat-7B-v3-3-GGUF and below it, a specific filename to download, such as: neural-chat-7b-v3-3.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/neural-chat-7B-v3-3-GGUF neural-chat-7b-v3-3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/neural-chat-7B-v3-3-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/neural-chat-7B-v3-3-GGUF neural-chat-7b-v3-3.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m neural-chat-7b-v3-3.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n### User:\n{prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./neural-chat-7b-v3-3.Q4_K_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "### System:\n{system_message}\n### User:\n{prompt}\n### Assistant:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./neural-chat-7b-v3-3.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. 
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Intel's Neural Chat 7B V3-3 ## Model Details: Neural-Chat-v3-3 This model is a fine-tuned 7B parameter LLM on the Intel Gaudi 2 processor from the [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) dataset. The model was aligned using the Direct Performance Optimization (DPO) method with [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). The [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) was originally fine-tuned from [mistralai/Mistral-7B-v-0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). For more information, refer to our blog [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Intel Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3). **Note:** Adjust lora modules to trade off truthfulqa and gsm8k performance on DPO stage. | Model Detail | Description | | ----------- | ----------- | | Model Authors - Company | Intel. The NeuralChat team with members from Intel/DCAI/AISE/AIPT. 
Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen.| | Date | December, 2023 | | Version | v3-3 | | Type | 7B Large Language Model | | Paper or Other Resources | [Medium Blog](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3) | | License | Apache 2.0 | | Questions or Comments | [Community Tab](https://huggingface.co/Intel/neural-chat-7b-v3-3/discussions) and [Intel Developers Discord](https://discord.gg/rv2Gp55UJQ)| | Intended Use | Description | | ----------- | ----------- | | Primary intended uses | You can use the fine-tuned model for several language-related tasks. Checkout the [LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) to see how this model and others from Intel are doing. | | Primary intended users | Anyone doing inference on language-related tasks. | | Out-of-scope uses | This model in most cases will need to be fine-tuned for your particular task. The model should not be used to intentionally create hostile or alienating environments for people.| ## How to use and Sample Code Here is the sample code to reproduce the model: [Sample Code](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat/examples/finetuning/finetune_neuralchat_v3/README.md). ## Prompt Template ```plaintext ### System: {system} ### User: {usr} ### Assistant: ``` ## [Quantitative Analyses: Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-3) (**note:** the leaderboard removed drop task) | Metric | Value | |-----------------------|---------------------------| | Avg. | 69.83 | | ARC (25-shot) | 66.89 | | HellaSwag (10-shot) | 85.26 | | MMLU (5-shot) | 63.07 | | TruthfulQA (0-shot) | 63.01 | | Winogrande (5-shot) | 79.64 | | GSM8K (5-shot) | 61.11 | ## Useful links * Intel Neural Compressor [link](https://github.com/intel/neural-compressor) * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers) ## Ethical Considerations and Limitations neural-chat-7b-v3-3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3-3, developers should perform safety testing. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes. <!-- original-model-card end -->
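As a supplement to the llama-cpp-python example earlier in this card (not part of the original README): one way to fill the System-User-Assistant template before passing it as the prompt.

```python
# Build a concrete prompt in the System-User-Assistant format shown above.
system_message = "You are a helpful assistant."
user_prompt = "Summarise what GGUF quantisation is in two sentences."
full_prompt = f"### System:\n{system_message}\n### User:\n{user_prompt}\n### Assistant:"
# full_prompt can now be passed as the prompt string in the llama-cpp-python example.
```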
Aryanne/Westest-7B
Aryanne
2024-03-04T14:45:48Z
864
2
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "mergekit", "merge", "base_model:chargoddard/piano-medley-7b", "base_model:senseable/WestLake-7B-v2", "license:cc-by-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-26T04:29:52Z
--- license: cc-by-sa-4.0 tags: - mergekit - merge base_model: - chargoddard/piano-medley-7b - senseable/WestLake-7B-v2 model-index: - name: Westest-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.18 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.43 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 66.72 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 86.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 65.73 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B name: Open LLM Leaderboard --- # merged This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the task_anysize merge method using [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) as a base. ### Models Merged The following models were included in the merge: * [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: model: path: senseable/WestLake-7B-v2 dtype: bfloat16 merge_method: task_anysize slices: - sources: - layer_range: [0, 32] model: model: path: chargoddard/piano-medley-7b parameters: weight: 0.55 - layer_range: [0, 32] model: model: path: senseable/WestLake-7B-v2 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__Westest-7B) | Metric |Value| |---------------------------------|----:| |Avg. |74.03| |AI2 Reasoning Challenge (25-Shot)|72.18| |HellaSwag (10-Shot) |88.52| |MMLU (5-Shot) |64.43| |TruthfulQA (0-shot) |66.72| |Winogrande (5-shot) |86.58| |GSM8k (5-shot) |65.73|
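The card does not include a usage example; a minimal transformers sketch (assuming standard causal-LM loading; the prompt is illustrative):

```python
# Minimal generation sketch for the merged model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Aryanne/Westest-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Aryanne/Westest-7B", torch_dtype="auto", device_map="auto"
)
inputs = tokenizer("Write a short poem about the west wind.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```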
PrunaAI/dbrx-instruct-bnb-4bit
PrunaAI
2024-03-30T17:07:34Z
864
10
pruna-engine
[ "pruna-engine", "safetensors", "dbrx", "custom_code", "4-bit", "bitsandbytes", "region:us" ]
null
2024-03-30T14:46:11Z
--- library_name: pruna-engine thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed by using bitsandbytes. - ***How does the model quality change?*** The quality of the model output will slightly degrade. - ***What is the model format?*** We the standard safetensors format. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). ## Usage ## Quickstart Guide Getting started with DBRX models is easy with the `transformers` library. The model requires ~264GB of RAM and the following packages: ```bash pip install "transformers>=4.39.2" "tiktoken>=0.6.0" ``` If you'd like to speed up download time, you can use the `hf_transfer` package as described by Huggingface [here](https://huggingface.co/docs/huggingface_hub/en/guides/download#faster-downloads). ```bash pip install hf_transfer export HF_HUB_ENABLE_HF_TRANSFER=1 ``` You will need to request access to this repository to download the model. Once this is granted, [obtain an access token](https://huggingface.co/docs/hub/en/security-tokens) with `read` permission, and supply the token below. ### Run the model on multiple GPUs: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", trust_remote_code=True, token="hf_YOUR_TOKEN") model = AutoModelForCausalLM.from_pretrained("PrunaAI/dbrx-instruct-bnb-4bit", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True, token="hf_YOUR_TOKEN") input_text = "What does it take to build a great LLM?" 
messages = [{"role": "user", "content": input_text}] input_ids = tokenizer.apply_chat_template(messages, return_dict=True, tokenize=True, add_generation_prompt=True, return_tensors="pt").to("cuda") outputs = model.generate(**input_ids, max_new_tokens=200) print(tokenizer.decode(outputs[0])) ``` ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model databricks/dbrx-instruct before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
lmstudio-community/c4ai-command-r-v01-GGUF
lmstudio-community
2024-04-15T23:12:32Z
864
20
transformers
[ "transformers", "gguf", "text-generation", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-generation
2024-04-02T16:32:17Z
--- library_name: transformers language: - en - fr - de - es - it - pt - ja - ko - zh - ar license: cc-by-nc-4.0 quantized_by: bartowski pipeline_tag: text-generation lm_studio: param_count: 35b use_case: general release_date: 11-03-2024 model_creator: CohereForAI prompt_template: cohere_command_r system_prompt: none base_model: cohere original_repo: CohereForAI/c4ai-command-r-v01 --- ## 💫 Community Model> C4AI Command-R 35B by Cohere For AI *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Cohere For AI](https://huggingface.co/CohereForAI)<br> **Original model**: [c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2536](https://github.com/ggerganov/llama.cpp/releases/tag/b2536)<br> ## Model Summary: C4AI Command-R is a 35B parameter model released by Cohere For AI. Command-R is optimized for a variety of use cases including reasoning, summarization, and question answering. Command-R has the capability for multilingual generation evaluated in 10 languages and highly performant RAG capabilities. ## Prompt Template: Choose the `Cohere Command R` preset in your LM Studio. Under the hood, the model will see a prompt that's formatted like so: ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> ``` Note that this model does not support a System prompt. ## Use case and examples Command-R is tuned for a broad range of uses such as reasoning, summarizing, general knowledge, and coding. ### Reasoning: ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|> Statements: All mathematicians are logical thinkers. No logical thinker is irrational. Some scientists are mathematicians. Conclusions: I. Some scientists are logical thinkers. II. No mathematician is irrational. III. Some scientists are irrational. Analyze these statements and decide which conclusions are correct based on the logical relationships.<|END_OF_TURN_TOKEN|> <|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> The logical relationships between the statements and the correctness of each conclusion are as follows: I. Some scientists are logical thinkers: This conclusion follows logically from the given statements. Since some scientists are mathematicians and all mathematicians are logical thinkers, it can be deduced that some scientists indeed exhibit logical thinking. This conclusion is correct. II. No mathematician is irrational: This conclusion is a direct inference from the given statements. As no logical thinker is irrational, and all mathematicians are logical thinkers, it's reasonable to assume that no mathematician can be described as irrational. This conclusion is also correct. III. Some scientists are irrational: This conclusion does not necessarily follow from the given statements. While it's true that not all scientists may be logical thinkers (as indicated in the second conclusion), it doesn't imply that they are irrational. The statement about scientists being irrational cannot be definitively concluded from the given information, so this conclusion is not correct. 
In summary, only I and II are correct conclusions based on the logical relationships within the statements provided.<|END_OF_TURN_TOKEN|> ``` ### Coding: ``` <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|> Can you provide a merge sort implementation in python?<|END_OF_TURN_TOKEN|> <|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|> Here's a merge sort implementation in Python: def merge_sort(arr): if len(arr) > 1: mid = len(arr) // 2 left = arr[:mid] right = arr[mid:] merge_sort(left) merge_sort(right) i = j = k = 0 while i < len(left) and j < len(right): if left[i] < right[j]: arr[k] = left[i] i += 1 else: arr[k] = right[j] j += 1 k += 1 while i < len(left): arr[k] = left[i] i += 1 k += 1 while j < len(right): arr[k] = right[j] j += 1 k += 1 # Example usage my_list = [12, 11, 13, 5, 6, 7] merge_sort(my_list) print(my_list) The `merge_sort` function takes a list `arr` as input and recursively divides it into two halves until each half contains a single element or is empty. It then merges the two sorted halves in a sorted manner. The example usage demonstrates how to use the function to sort a list.<|END_OF_TURN_TOKEN|> ``` ## Technical Details This model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. Multilingual capabilities: English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic. Pre-training data additionally included the following 13 languages: Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, Persian. Supports a context length of 128k. For more information on prompting, you can reference the official documentation [here](https://docs.cohere.com/docs/prompting-command-r) ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
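For running these GGUF files outside LM Studio, the same turn template can be applied by hand. The snippet below is a minimal sketch (not from the original card) using llama-cpp-python; the quant filename is illustrative, and the runtime is assumed to prepend the BOS token itself.

```python
# Minimal sketch: applying the Cohere Command R turn template by hand with
# llama-cpp-python. The GGUF filename below is illustrative only.
from llama_cpp import Llama

llm = Llama(model_path="c4ai-command-r-v01-Q4_K_M.gguf", n_ctx=8192)

def command_r_prompt(user_message: str) -> str:
    # Same template as shown above; the BOS token is usually inserted by the
    # runtime, so it is omitted from the string here.
    return (
        "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
        + user_message
        + "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    )

output = llm(
    command_r_prompt("Summarize the benefits of retrieval-augmented generation."),
    max_tokens=256,
    stop=["<|END_OF_TURN_TOKEN|>"],
)
print(output["choices"][0]["text"])
```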
nes470/exp-buzzer-with-gen-pipeline
nes470
2024-05-15T23:26:26Z
864
0
transformers
[ "transformers", "pytorch", "QA-umd-quizbowl", "text2text-generation", "custom_code", "arxiv:1910.09700", "license:mit", "autotrain_compatible", "region:us" ]
text2text-generation
2024-05-15T23:24:34Z
--- library_name: transformers license: mit --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
spacy/en_core_web_trf
spacy
2024-06-13T12:37:07Z
863
36
spacy
[ "spacy", "token-classification", "en", "license:mit", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - spacy - token-classification language: - en license: mit model-index: - name: en_core_web_trf results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9008293365 - name: NER Recall type: recall value: 0.9029947917 - name: NER F Score type: f_score value: 0.9019107643 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.9812819818 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.9526219812 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.9390746643 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9011225055 --- ### Details: https://spacy.io/models/en#en_core_web_trf English transformer pipeline (Transformer(name='roberta-base', piece_encoder='byte-bpe', stride=104, type='roberta', width=768, window=144, vocab_size=50265)). Components: transformer, tagger, parser, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_trf` | | **Version** | `3.7.3` | | **spaCy** | `>=3.7.2,<3.8.0` | | **Default Pipeline** | `transformer`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `transformer`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[roberta-base](https://github.com/pytorch/fairseq/tree/master/examples/roberta) (Yinhan Liu and Myle Ott and Naman Goyal and Jingfei Du and Mandar Joshi and Danqi Chen and Omer Levy and Mike Lewis and Luke Zettlemoyer and Veselin Stoyanov) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (112 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.86 | | `TOKEN_P` | 
99.57 | | `TOKEN_R` | 99.58 | | `TOKEN_F` | 99.57 | | `TAG_ACC` | 98.13 | | `SENTS_P` | 94.89 | | `SENTS_R` | 85.79 | | `SENTS_F` | 90.11 | | `DEP_UAS` | 95.26 | | `DEP_LAS` | 93.91 | | `ENTS_P` | 90.08 | | `ENTS_R` | 90.30 | | `ENTS_F` | 90.19 |
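A minimal usage sketch for this pipeline, assuming the package has been installed (for example via `python -m spacy download en_core_web_trf`):

```python
# Load the transformer pipeline and inspect entities, tags and dependencies.
import spacy

nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Named entities from the `ner` component
for ent in doc.ents:
    print(ent.text, ent.label_)

# Fine-grained POS tags, dependency labels and lemmas from `tagger`/`parser`/`lemmatizer`
for token in doc:
    print(token.text, token.tag_, token.dep_, token.lemma_)
```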
Aniemore/wavlm-emotion-russian-resd
Aniemore
2023-04-07T18:09:05Z
863
2
transformers
[ "transformers", "pytorch", "safetensors", "wavlm", "audio-classification", "doi:10.57967/hf/1276", "endpoints_compatible", "region:us" ]
audio-classification
2023-02-13T17:59:48Z
Entry not found
potsawee/longformer-large-4096-answering-race
potsawee
2023-05-27T22:13:04Z
863
17
transformers
[ "transformers", "pytorch", "longformer", "multiple-choice", "question-answering", "en", "dataset:race", "arxiv:2301.12307", "license:apache-2.0", "region:us" ]
question-answering
2023-03-12T16:15:58Z
--- license: apache-2.0 datasets: - race language: - en library_name: transformers pipeline_tag: question-answering inference: false --- # longformer-large-4096 fine-tuned to RACE for (Multiple-Choice) Question Answering - Input: `context`, `question`, `options` - Output: logit (or probability over the options) ## Model Details longformer-large-4096 model is fine-tuned to the RACE dataset where the input is a concatenation of ```context + question + option```. We follow the architecture/setup described in https://openreview.net/forum?id=HJgJtT4tvB). The output is the logit over the options. This is the question answering (QA) component in our [MQAG paper](https://arxiv.org/abs/2301.12307), or please refer to the GitHub repo of this project: https://github.com/potsawee/mqag0. ## How to Use the Model Use the code below to get started with the model. ```python >>> import torch >>> import numpy as np >>> from transformers import LongformerTokenizer, LongformerForMultipleChoice >>> tokenizer = LongformerTokenizer.from_pretrained("potsawee/longformer-large-4096-answering-race") >>> model = LongformerForMultipleChoice.from_pretrained("potsawee/longformer-large-4096-answering-race") >>> context = r"""Chelsea's mini-revival continued with a third victory in a row as they consigned struggling Leicester City to a fifth consecutive defeat. Buoyed by their Champions League win over Borussia Dortmund, Chelsea started brightly and Ben Chilwell volleyed in from a tight angle against his old club. Chelsea's Joao Felix and Leicester's Kiernan Dewsbury-Hall hit the woodwork in the space of two minutes, then Felix had a goal ruled out by the video assistant referee for offside. Patson Daka rifled home an excellent equaliser after Ricardo Pereira won the ball off the dawdling Felix outside the box. But Kai Havertz pounced six minutes into first-half injury time with an excellent dinked finish from Enzo Fernandez's clever aerial ball. Mykhailo Mudryk thought he had his first goal for the Blues after the break but his effort was disallowed for offside. Mateo Kovacic sealed the win as he volleyed in from Mudryk's header. The sliding Foxes, who ended with 10 men following Wout Faes' late dismissal for a second booking, now just sit one point outside the relegation zone. """.replace('\n', ' ') >>> question = "Who had a goal ruled out for offside?" 
>>> options = ['Ricardo Pereira', 'Ben Chilwell', 'Joao Felix', 'The Foxes'] >>> inputs = prepare_answering_input( tokenizer=tokenizer, question=question, options=options, context=context, ) >>> outputs = model(**inputs) >>> prob = torch.softmax(outputs.logits, dim=-1)[0].tolist() >>> selected_answer = options[np.argmax(prob)] >>> print(prob) [0.00145158, 0.00460851, 0.99049687, 0.00344302] >>> print(selected_answer) Joao Felix ``` where the function that prepares the input to the answering model is: ```python def prepare_answering_input( tokenizer, # longformer_tokenizer question, # str options, # List[str] context, # str max_seq_length=4096, ): c_plus_q = context + ' ' + tokenizer.bos_token + ' ' + question c_plus_q_4 = [c_plus_q] * len(options) tokenized_examples = tokenizer( c_plus_q_4, options, max_length=max_seq_length, padding="longest", truncation=True, return_tensors="pt", ) input_ids = tokenized_examples['input_ids'].unsqueeze(0) attention_mask = tokenized_examples['attention_mask'].unsqueeze(0) example_encoded = { "input_ids": input_ids, "attention_mask": attention_mask, } return example_encoded ``` ## Related Models - Question/Answering Generation ```Context ---> Question + Answer```: - https://huggingface.co/potsawee/t5-large-generation-race-QuestionAnswer - https://huggingface.co/potsawee/t5-large-generation-squad-QuestionAnswer - Distractor (False options) Generation: - https://huggingface.co/potsawee/t5-large-generation-race-Distractor ## Citation ```bibtex @article{manakul2023mqag, title={MQAG: Multiple-choice Question Answering and Generation for Assessing Information Consistency in Summarization}, author={Manakul, Potsawee and Liusie, Adian and Gales, Mark JF}, journal={arXiv preprint arXiv:2301.12307}, year={2023} } ```
Yntec/Citrus
Yntec
2023-09-23T09:23:41Z
863
2
diffusers
[ "diffusers", "safetensors", "anime", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-23T03:01:26Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - anime - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Citrus A mix from the best of CitrineDreamMix and CetusRemix, with the Splash LoRA and 70% of Add_Detail baked in. Comparison: ![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/dvkbTwZGl5vXMzVakwZVV.png) Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/gg0SgMrsKgnQQElnut3ob.png) pretty Tiny mischievous CUTE girl wearing a puffy teal jacket, DETAILED EYES, greatly drawn face, Magazine Ad, playing, lush market overgrown city, smooth, intricate, elegant, digital painting, artstation, concept art, sharp focus, illustration, art by sam spratt and ROSSDRAWS, valorant character Original pages: https://civitai.com/models/18116?modelVersionId=21839 (CitrineDreamMix) https://huggingface.co/Yntec/CetusRemix https://civitai.com/models/58390 (add_detail) For trigger words you can use, check the original page at: https://civitai.com/models/81619?modelVersionId=86612 (Splash 1.0) # CitrusDreamMix A version without Add_Detail, for alternate eyes and backgrounds. ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/jNjayIKCbww8mBoIm2YeY.png) # Recipe - SuperMerger Weight sum TrainDifference MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1 Model A: CetusRemix Model B: CitrineDreamMix Output: CetusDreamMix - Merge LoRA to checkpoint 1.0 Model A: CetusDreamMix LoRA: Splash 1.0 Output: CitrusDreamMix - Merge LoRA to checkpoint 0.7 Model A: CitrusDreamMix LoRA: Add_Detail Output: Citrus
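A minimal text-to-image sketch with 🤗 diffusers, assuming the repository's weights load through `StableDiffusionPipeline` as the tags indicate; the prompt and sampler settings below are illustrative:

```python
# Text-to-image sketch for this checkpoint via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Citrus", torch_dtype=torch.float16)
pipe = pipe.to("cuda")  # use torch.float32 and "cpu" if no GPU is available

prompt = "pretty tiny mischievous cute girl wearing a puffy teal jacket, detailed eyes, digital painting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("citrus_sample.png")
```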
Meina/MeinaHentai_V5
Meina
2024-01-12T11:32:46Z
863
2
diffusers
[ "diffusers", "safetensors", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-12T11:30:31Z
--- license: creativeml-openrail-m ---
MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct
MBZUAI
2024-04-27T16:48:31Z
863
11
transformers
[ "transformers", "safetensors", "llava_llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-26T05:10:24Z
--- {} --- [![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp) # LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct ## Repository Overview This repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding. ## Training Strategy - **Pretraining:** Only Vision-to-Language projector is trained. The rest of the model is frozen. - **Fine-tuning:** LLM is LoRA fine-tuned. Only the vision-backbone (CLIP) is kept frozen. - **Note:** The repository contains merged weights. ## Key Components - **Base Large Language Model (LLM):** [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) ## Training Data - **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) - **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) ## Download It As ``` git lfs install git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct ``` --- ## Contributions Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful. ---
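As an alternative to the `git lfs` clone above, the merged weights can also be fetched with `huggingface_hub` and the resulting local path handed to the LLaVA/LLaVA++ scripts; a small sketch (not part of the original instructions):

```python
# Download the merged weights to the local Hugging Face cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct")
print(local_dir)  # pass this path to the LLaVA++ loading/eval scripts
```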
iRpro16/sicilian_translator
iRpro16
2024-06-23T20:01:55Z
863
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "translation", "generated_from_trainer", "base_model:t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
translation
2024-06-23T19:12:56Z
--- license: apache-2.0 base_model: t5-small tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: model_trainer_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_trainer_2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.4702 - Bleu: 4.2489 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.41.2 - Pytorch 2.0.1 - Datasets 2.20.0 - Tokenizers 0.19.1
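The card does not document how inputs should be formatted (for example, whether a T5-style task prefix or a language-direction marker is expected), so the following is only a generic seq2seq inference sketch with an assumed raw-sentence input:

```python
# Generic seq2seq inference sketch for this fine-tuned T5 checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("iRpro16/sicilian_translator")
model = AutoModelForSeq2SeqLM.from_pretrained("iRpro16/sicilian_translator")

inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```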
timm/regnety_006.pycls_in1k
timm
2024-02-10T23:33:12Z
862
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2003.13678", "license:mit", "region:us" ]
image-classification
2023-03-21T06:37:09Z
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for regnety_006.pycls_in1k A RegNetY-600MF image classification model. Pretrained on ImageNet-1k by paper authors. The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 6.1 - GMACs: 0.6 - Activations (M): 4.3 - Image size: 224 x 224 - **Papers:** - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Dataset:** ImageNet-1k - **Original:** https://github.com/facebookresearch/pycls ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnety_006.pycls_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_006.pycls_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 48, 56, 56]) # torch.Size([1, 112, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 608, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnety_006.pycls_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 608, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm 
[model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | 
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 
|8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | |[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming 
He and Piotr Doll{\'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
timm/fastvit_ma36.apple_in1k
timm
2023-08-23T20:55:01Z
862
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2303.14189", "license:other", "region:us" ]
image-classification
2023-08-23T20:54:29Z
--- tags: - image-classification - timm library_name: timm license: other datasets: - imagenet-1k --- # Model card for fastvit_ma36.apple_in1k A FastViT image classification model. Trained on ImageNet-1k by paper authors. Please observe [original license](https://github.com/apple/ml-fastvit/blob/8af5928238cab99c45f64fc3e4e7b1516b8224ba/LICENSE). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 44.1 - GMACs: 7.8 - Activations (M): 40.4 - Image size: 256 x 256 - **Papers:** - FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization: https://arxiv.org/abs/2303.14189 - **Original:** https://github.com/apple/ml-fastvit - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('fastvit_ma36.apple_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'fastvit_ma36.apple_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 76, 64, 64]) # torch.Size([1, 152, 32, 32]) # torch.Size([1, 304, 16, 16]) # torch.Size([1, 608, 8, 8]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'fastvit_ma36.apple_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 608, 8, 8) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @inproceedings{vasufastvit2023, author = {Pavan Kumar Anasosalu Vasu and James Gabriel and Jeff Zhu and Oncel Tuzel and Anurag Ranjan}, title = {FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization}, booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision}, year = {2023} } ```
ikawrakow/various-2bit-sota-gguf
ikawrakow
2024-01-10T14:22:32Z
862
79
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-01-04T17:16:56Z
--- license: apache-2.0 --- Various models in GGUF format quantized with a new 2-bit approach. Intended for use with llama.cpp. Requires llama.cpp PR 4773. Update: PR 4773 has been merged into `llama.cpp`, but I have added new models that require PR 4856. The new models are those that have around 2.3-2.4 bpw. They have a lower quantization error at the expense of being ~10% larger.
KorBon/5FpTUAtsnsoYvGXfKRKaXhSJty2ufoFMy8BfRf1KdHkFT8hm_cnn
KorBon
2024-02-08T20:28:01Z
862
0
keras
[ "keras", "region:us" ]
null
2024-02-02T07:14:31Z
Entry not found
Henrychur/MMedLM2
Henrychur
2024-05-24T14:38:17Z
862
9
transformers
[ "transformers", "pytorch", "internlm", "feature-extraction", "medical", "custom_code", "en", "zh", "ja", "fr", "ru", "es", "dataset:Henrychur/MMedC", "arxiv:2402.13963", "license:apache-2.0", "region:us" ]
feature-extraction
2024-02-20T07:28:20Z
--- license: apache-2.0 datasets: - Henrychur/MMedC language: - en - zh - ja - fr - ru - es tags: - medical --- # MMedLM [💻Github Repo](https://github.com/MAGIC-AI4Med/MMedLM) [🖨️arXiv Paper](https://arxiv.org/abs/2402.13963) The official model weights for "Towards Building Multilingual Language Model for Medicine". ## Introduction This repo contains MMedLM 2, a multilingual medical foundation model with 7 billion parameters. MMedLM 2 builds upon the foundation of InternLM 2 and has been further pretrained on MMedC, a comprehensive multilingual medical corpus. This further pretraining enhances the model's medical-domain knowledge. The model underwent further pretraining on MMedC with the following hyperparameters: - Iterations: 15000 - Global batch size: 512 - Cutoff length: 2048 - Learning rate: 2e-5 The model can be loaded as follows: ```py import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Henrychur/MMedLM2", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("Henrychur/MMedLM2", torch_dtype=torch.float16, trust_remote_code=True) ``` - Note that this is a foundation model that has not undergone instruction fine-tuning. ## News [2024.2.21] Our pre-print paper is released on arXiv. Dive into our findings [here](https://arxiv.org/abs/2402.13963). [2024.2.20] We release [MMedLM](https://huggingface.co/Henrychur/MMedLM) and [MMedLM 2](https://huggingface.co/Henrychur/MMedLM2). With auto-regressive continued training on MMedC, these models achieve superior performance compared to all other open-source models, even rivaling GPT-4 on MMedBench. [2023.2.20] We release [MMedC](https://huggingface.co/datasets/Henrychur/MMedC), a multilingual medical corpus containing 25.5B tokens. [2023.2.20] We release [MMedBench](https://huggingface.co/datasets/Henrychur/MMedBench), a new multilingual medical multi-choice question-answering benchmark with rationale. Check out the leaderboard [here](https://henrychur.github.io/MultilingualMedQA/). ## Evaluation on MMedBench The further pretrained MMedLM 2 showcases strong performance in the medical domain across different languages. | Method | Size | Year | MMedC | MMedBench | English | Chinese | Japanese | French | Russian | Spanish | Avg. 
| |------------------|------|---------|-----------|-----------|----------------|----------------|----------------|----------------|----------------|----------------|----------------| | GPT-3.5 | - | 2022.12 | &#10007; | &#10007; | 56.88 | 52.29 | 34.63 | 32.48 | 66.36 | 66.06 | 51.47 | | GPT-4 | - | 2023.3 | &#10007; | &#10007; | 78.00 | 75.07 | 72.91 | 56.59 | 83.62 | 85.67 | 74.27 | | Gemini-1.0 pro | - | 2024.1 | &#10007; | &#10007; | 53.73 | 60.19 | 44.22 | 29.90 | 73.44 | 69.69 | 55.20 | | BLOOMZ | 7B | 2023.5 | &#10007; | trainset | 43.28 | 58.06 | 32.66 | 26.37 | 62.89 | 47.34 | 45.10 | | InternLM | 7B | 2023.7 | &#10007; | trainset | 44.07 | 64.62 | 37.19 | 24.92 | 58.20 | 44.97 | 45.67 | | Llama 2 | 7B | 2023.7 | &#10007; | trainset | 43.36 | 50.29 | 25.13 | 20.90 | 66.80 | 47.10 | 42.26 | | MedAlpaca | 7B | 2023.3 | &#10007; | trainset | 46.74 | 44.80 | 29.64 | 21.06 | 59.38 | 45.00 | 41.11 | | ChatDoctor | 7B | 2023.4 | &#10007; | trainset | 43.52 | 43.26 | 25.63 | 18.81 | 62.50 | 43.44 | 39.53 | | PMC-LLaMA | 7B | 2023.4 | &#10007; | trainset | 47.53 | 42.44 | 24.12 | 20.74 | 62.11 | 43.29 | 40.04 | | Mistral | 7B | 2023.10 | &#10007; | trainset | 61.74 | 71.10 | 44.72 | 48.71 | 74.22 | 63.86 | 60.73 | | InternLM 2 | 7B | 2024.2 | &#10007; | trainset | 57.27 | 77.55 | 47.74 | 41.00 | 68.36 | 59.59 | 58.59 | | MMedLM(Ours) | 7B | - | &#10003; | trainset | 49.88 | 70.49 | 46.23 | 36.66 | 72.27 | 54.52 | 55.01 | | MMedLM 2(Ours) | 7B | - | &#10003; | trainset | 61.74 | 80.01 | 61.81 | 52.09 | 80.47 | 67.65 | 67.30 | - GPT and Gemini are evaluated under a zero-shot setting through the API - Open-source models first undergo training on the trainset of MMedBench before evaluation. ## Contact If you have any questions, please feel free to contact [email protected]. ## Citation ``` @misc{qiu2024building, title={Towards Building Multilingual Language Model for Medicine}, author={Pengcheng Qiu and Chaoyi Wu and Xiaoman Zhang and Weixiong Lin and Haicheng Wang and Ya Zhang and Yanfeng Wang and Weidi Xie}, year={2024}, eprint={2402.13963}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
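A usage sketch that continues the loading snippet above (not from the original card): it assumes the InternLM remote code exposes the standard `generate` API and that a CUDA GPU is available for the fp16 weights.

```py
# Plain next-token continuation with the model loaded above. MMedLM 2 is a
# base (non-instruction-tuned) model, so prompt it with text to be continued
# rather than with chat-style instructions.
model = model.cuda().eval()  # fp16 weights are easiest to run on GPU

prompt = "Metformin is a first-line medication for the treatment of"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```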
sophosympatheia/Midnight-Miqu-70B-v1.5
sophosympatheia
2024-04-16T20:53:41Z
862
105
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:sophosympatheia/Midnight-Miqu-70B-v1.0", "base_model:migtissera/Tess-70B-v1.6", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-11T05:14:04Z
--- base_model: - sophosympatheia/Midnight-Miqu-70B-v1.0 - migtissera/Tess-70B-v1.6 library_name: transformers tags: - mergekit - merge license: other --- <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> ### Overview Looking for the 103B version? You can get it from [FluffyKaeloky/Midnight-Miqu-103B-v1.5](https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5). This is a DARE Linear merge between [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) and [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6). This version is close in feel and performance to Midnight Miqu v1.0 but I think it picked up some goodness from Tess. Their EQ Bench scores are virtually the same and their post-EXL2 quant perplexity scores were the same too. However, Midnight Miqu v1.5 passes some tests I use that Midnight Miqu v1.0 fails, without sacrificing writing quality. This model is uncensored. *You are responsible for whatever you do with it.* This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas. ### Long Context Tips You can run this model out to 32K context with alpha_rope set to 1, just like with Miqu. ### Sampler Tips * I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2. * I recommend using Min-P. Experiment to find your best setting. * You can enable dynamic temperature if you want, but that adds yet another variable to consider and I find it's unnecessary with you're already using Min-P and smoothing factor. * You don't need to use a high repetition penalty with this model, such as going above 1.10, but experiment with it. Experiment with any and all of the settings below! What suits my preferences may not suit yours. If you save the below settings as a .json file, you can import them directly into Silly Tavern. ``` { "temp": 1, "temperature_last": true, "top_p": 1, "top_k": 0, "top_a": 0, "tfs": 1, "epsilon_cutoff": 0, "eta_cutoff": 0, "typical_p": 1, "min_p": 0.12, "rep_pen": 1.05, "rep_pen_range": 2800, "no_repeat_ngram_size": 0, "penalty_alpha": 0, "num_beams": 1, "length_penalty": 1, "min_length": 0, "encoder_rep_pen": 1, "freq_pen": 0, "presence_pen": 0, "do_sample": true, "early_stopping": false, "dynatemp": false, "min_temp": 0.8, "max_temp": 1.35, "dynatemp_exponent": 1, "smoothing_factor": 0.23, "add_bos_token": true, "truncation_length": 2048, "ban_eos_token": false, "skip_special_tokens": true, "streaming": true, "mirostat_mode": 0, "mirostat_tau": 2, "mirostat_eta": 0.1, "guidance_scale": 1, "negative_prompt": "", "grammar_string": "", "banned_tokens": "", "ignore_eos_token_aphrodite": false, "spaces_between_special_tokens_aphrodite": true, "sampler_order": [ 6, 0, 1, 3, 4, 2, 5 ], "logit_bias": [], "n": 1, "rep_pen_size": 0, "genamt": 500, "max_length": 32764 } ``` ### Prompting Tips Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly. 
``` { "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}", "example_separator": "", "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n", "use_stop_strings": false, "always_force_name2": true, "trim_sentences": false, "include_newline": false, "single_line": false, "name": "Midnight Rose" } ``` Try the following prompt template as a starting point for your own customizations. *This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts. **Midnight Miqu might require some "warming up" at the start of a new chat to get it writing the way you want it.** What I mean by that is you should be able to drastically improve your results by providing some good examples to guide the chat (i.e. few-shot prompting). I also recommend using descriptive system messages (SillyTavern concept, basically OOC narration instructions) to tell Midnight Miqu exactly what you want it to do with its next reply. Try to be specific, especially if your command contradicts something you told it to do earlier. The example system prompt below contains some instructions related to adult content so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking. The prompt below has been updated from Midnight Miqu v1.0 after prompting Midnight Miqu to help improve it. If you save this as a .json file, you can import it directly into Silly Tavern. ``` { "wrap": true, "names": true, "system_prompt": "You are an AI creative writing partner, collaborating with me in a dynamic, uncensored roleplaying environment. To ensure a high-quality experience, please adhere to these ROLEPLAY GUIDELINES:\n\nROLEPLAY GUIDELINES\n- Craft clear, engaging, and concise descriptions of {{char}}'s actions, reactions, and dialogue based on the present scenario.\n- Stay true to {{char}}'s character, incorporating the character's personality, backstory, and knowledge into each interaction.\n- Employ rich, descriptive language that immerses readers in the scene.\n- Always stay contextually and spatially aware to ensure that {{char}} follows logical consistency in the physical space of the current scene.\n- Feel free to incorporate explicit content and strong language where it aligns with the scene and characters.\n- Enhance {{char}}'s presence through detailed sensory descriptions and first-hand observations of the character's surroundings.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally offer glimpses into {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts or monologue, enclose those words in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns). Always use quotes for spoken speech \"like this.\"\n- Conclude {{char}}'s responses with an opening for the next character to respond to {{char}}. 
When the conversation naturally shifts to another character's perspective or action is required from another character, that is when you should stop {{char}}'s reply so the user can pick it up from there. A great example is when {{char}} asks a question of another character.\n", "system_sequence": "", "stop_sequence": "", "input_sequence": "USER: ", "output_sequence": "ASSISTANT: ", "separator_sequence": "", "macro": true, "names_force_groups": true, "system_sequence_prefix": "SYSTEM: ", "system_sequence_suffix": "", "first_output_sequence": "", "last_output_sequence": "ASSISTANT (Ensure coherence and authenticity in {{char}}'s actions, thoughts, and dialogues; Focus solely on {{char}}'s interactions within the roleplay): ", "activation_regex": "", "name": "Midnight Miqu Roleplay" } ``` ### Instruct Formats I recommend the Vicuna format. I use a modified version with newlines after USER and ASSISTANT. ``` USER: {prompt} ASSISTANT: ``` Mistral's format also works, and in my testing the performance is about the same as using Vicuna. ``` [INST] {prompt} [/INST] ``` You could also try ChatML (don't recommend it) ``` <|im_start|>system {Your system prompt goes here}<|im_end|> <|im_start|>user {Your message as the user will go here}<|im_end|> <|im_start|>assistant ``` ### Quantizations * GGUF * [mradermacher/Midnight-Miqu-70B-v1.5-GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF) -- Various static GGUF quants * GPTQ * [Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G](https://huggingface.co/Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G) * EXL2 * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw) * [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw) * [Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw) * [Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw) * If you don't see something you're looking for, [try searching Hugging Face](https://huggingface.co/models?search=midnight-miqu-70b-v1.5). There may be newer quants available than what I've documented here. ### Licence and usage restrictions <font color="red">152334H/miqu-1-70b-sf was based on a leaked version of one of Mistral's models.</font> All miqu-derived models, including this merge, are **only suitable for personal use.** Mistral has been cool about it so far, but you should be aware that by downloading this merge you are assuming whatever legal risk is inherent in acquiring and using a model based on leaked weights. This merge comes with no warranties or guarantees of any kind, but you probably already knew that. I am not a lawyer and I do not profess to know what we have gotten ourselves into here. You should consult with a lawyer before using any Hugging Face model beyond private use... but definitely don't use this one for that! ## Merge Details ### Merge Method This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [152334H_miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) as a base. 
### Models Merged The following models were included in the merge: * [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) * [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_linear base_model: /home/llm/mergequant/models/BASE/152334H_miqu-1-70b-sf # base model models: - model: /home/llm/mergequant/models/midnight-miqu-70b-v1.0 - model: /home/llm/mergequant/models/BASE/Tess-70B-v1.6 parameters: weight: 1.0 dtype: float16 ``` ### Notes I tried several methods of merging Midnight Miqu v1.0 with Tess v1.6, and this dare_linear approach worked the best by far. I tried the same approach with other Miqu finetunes like ShinojiResearch/Senku-70B-Full and abideen/Liberated-Miqu-70B, but there was a huge difference in performance. The merge with Tess was the best one. I also tried the SLERP approach I used to create Midnight Miqu v1.0, only using Tess instead of 152334H_miqu-1-70b in that config, and that result was nowhere near as good either.
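For back-ends that take a raw prompt string rather than a chat template, a small helper like the hypothetical one below can assemble the recommended Vicuna-style format; exact whitespace is a matter of preference, per the instruct-format notes above.

```python
# Hypothetical helper (not part of the original card) that builds the
# Vicuna-style prompt recommended for Midnight Miqu.
def vicuna_prompt(user_message: str, system_prompt: str = "") -> str:
    prompt = ""
    if system_prompt:
        prompt += f"SYSTEM: {system_prompt}\n"
    prompt += f"USER: {user_message}\n"
    prompt += "ASSISTANT:"
    return prompt

print(vicuna_prompt("Describe a rainy night in a coastal town.",
                    system_prompt="You are an AI creative writing partner."))
```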