| Column | Type | Range / cardinality |
| ------------- | ------------ | -------------------- |
| modelId | string | 5 to 122 characters |
| author | string | 2 to 42 characters |
| last_modified | unknown | |
| downloads | int64 | 0 to 738M |
| likes | int64 | 0 to 11k |
| library_name | string class | 245 values |
| tags | sequence | 1 to 4.05k tags |
| pipeline_tag | string class | 48 values |
| createdAt | unknown | |
| card | string | 1 to 901k characters |
Sahajtomar/NER_legal_de
Sahajtomar
"2021-05-18T22:27:00Z"
1,708
3
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "NER", "de", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
--- language: de tags: - pytorch - tf - bert - NER datasets: - legal entity recognition --- ### NER model trained on BERT The model used for fine-tuning is GBERT Large by deepset.ai. ## Test Accuracy: 98 \ F1: 84.1 \ Precision: 82.7 \ Recall: 85.5 ## Model inferencing: ```python !pip install -q transformers from transformers import pipeline ner = pipeline( "ner", model="Sahajtomar/NER_legal_de", tokenizer="Sahajtomar/NER_legal_de") ner("Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO hat der \ Antragsteller keine Anhaltspunkte vorgetragen .") ```
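The pipeline above returns one prediction per sub-word token. If you prefer whole entity spans, the `transformers` token-classification pipeline also accepts an `aggregation_strategy` argument (available in reasonably recent `transformers` releases); a minimal sketch, reusing the example sentence above:

```python
from transformers import pipeline

# "simple" merges adjacent sub-word tokens that share the same entity label into one span.
ner = pipeline(
    "ner",
    model="Sahajtomar/NER_legal_de",
    tokenizer="Sahajtomar/NER_legal_de",
    aggregation_strategy="simple",
)

entities = ner(
    "Für eine Zuständigkeit des Verwaltungsgerichts Berlin nach § 52 Nr. 1 bis 4 VwGO "
    "hat der Antragsteller keine Anhaltspunkte vorgetragen."
)
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```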
facebook/s2t-medium-librispeech-asr
facebook
"2023-09-07T15:42:27Z"
1,708
8
transformers
[ "transformers", "pytorch", "tf", "safetensors", "speech_to_text", "automatic-speech-recognition", "audio", "en", "dataset:librispeech_asr", "arxiv:2010.05171", "arxiv:1904.08779", "license:mit", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: en datasets: - librispeech_asr tags: - audio - automatic-speech-recognition pipeline_tag: automatic-speech-recognition widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac license: mit --- # S2T-MEDIUM-LIBRISPEECH-ASR `s2t-medium-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text). ## Model description S2T is an end-to-end sequence-to-sequence transformer model. It is trained with standard autoregressive cross-entropy loss and generates the transcripts autoregressively. ## Intended uses & limitations This model can be used for end-to-end speech recognition (ASR). See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints. ### How to use As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.* You can either install these as extra speech dependencies with `pip install "transformers[speech,sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. ```python import torch from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration from datasets import load_dataset import soundfile as sf model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-librispeech-asr") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-librispeech-asr") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset( "patrickvonplaten/librispeech_asr_dummy", "clean", split="validation" ) ds = ds.map(map_to_array) input_features = processor( ds["speech"][0], sampling_rate=16_000, return_tensors="pt" ).input_features # Batch size 1 generated_ids = model.generate(input_features=input_features) transcription = processor.batch_decode(generated_ids) ``` #### Evaluation on LibriSpeech Test The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test datasets.
```python from datasets import load_dataset from evaluate import load from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset wer = load("wer") model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-medium-librispeech-asr").to("cuda") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-medium-librispeech-asr", do_upper_case=True) def map_to_pred(batch): features = processor(batch["audio"]["array"], sampling_rate=16000, padding=True, return_tensors="pt") input_features = features.input_features.to("cuda") attention_mask = features.attention_mask.to("cuda") gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask) batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0] return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print("WER:", wer.compute(predictions=result["transcription"], references=result["text"])) ``` *Result (WER)*: | "clean" | "other" | |:-------:|:-------:| | 3.5 | 7.8 | ## Training data The S2T-MEDIUM-LIBRISPEECH-ASR is trained on [LibriSpeech ASR Corpus](https://www.openslr.org/12), a dataset consisting of approximately 1000 hours of 16kHz read English speech. ## Training procedure ### Preprocessing The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization) is applied to each example. The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000. ### Training The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779). The encoder receives speech features, and the decoder generates the transcripts autoregressively. ### BibTeX entry and citation info ```bibtex @inproceedings{wang2020fairseqs2t, title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, year = {2020}, } ```
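To make the preprocessing description above concrete, here is a small, hedged sketch of how Kaldi-style 80-channel log mel filter bank features plus utterance-level CMVN can be computed with torchaudio. The file name is a placeholder, and the model's own `Speech2TextProcessor` already performs this step for you, so the snippet is purely illustrative:

```python
import torch
import torchaudio

# Load a 16 kHz mono waveform (placeholder path).
waveform, sample_rate = torchaudio.load("sample.flac")

# Kaldi-compliant 80-channel log mel filter bank features, shape (num_frames, 80).
fbank = torchaudio.compliance.kaldi.fbank(
    waveform, num_mel_bins=80, sample_frequency=sample_rate
)

# Utterance-level CMVN: zero mean / unit variance per feature dimension over time.
fbank = (fbank - fbank.mean(dim=0, keepdim=True)) / (fbank.std(dim=0, keepdim=True) + 1e-5)
```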
AdamOswald1/Anything-Preservation
AdamOswald1
"2023-01-27T17:23:51Z"
1,708
103
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-01-18T16:41:58Z"
--- language: - en license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- DISCLAIMER! This Is A Preservation Repository! # Anything V3 - Better VAE Welcome to Anything V3 - Better VAE. It currently has three model formats: diffusers, ckpt, and safetensors. You'll never see a grey image result again. This model is designed to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags for image generation. e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_** ## Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Anything V3 with Better VAE: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Linaqruf/Linaqruf-anything-v3-better-vae) ## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). You should install the dependencies below in order to run the pipeline ```bash pip install diffusers transformers accelerate scipy safetensors ``` Running the pipeline (if you don't swap the scheduler it will run with the default DDIM; in this example we swap it to DPMSolverMultistepScheduler): ```python import torch from torch import autocast from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler model_id = "Linaqruf/anything-v3-0-better-vae" # Use the DPMSolverMultistepScheduler (DPM-Solver++) scheduler here instead pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "masterpiece, best quality, illustration, beautiful detailed, finely detailed, dramatic light, intricate details, 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden" negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name" with autocast("cuda"): image = pipe(prompt, negative_prompt=negative_prompt, width=512, height=640, guidance_scale=12, num_inference_steps=50).images[0] image.save("anime_girl.png") ``` ## Examples Below are some examples of images generated using this model: **Anime Girl:** ![Anime Girl](https://huggingface.co/AdamOswald1/anything-v5.0/resolve/main/samples/1girl.png) **Anime Boy:** ![Anime Boy](https://huggingface.co/AdamOswald1/anything-v5.0/resolve/main/samples/1boy.png) **Scenery:** ![Scenery](https://huggingface.co/AdamOswald1/anything-v5.0/resolve/main/samples/scenery.png) ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1.
You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) # Announcement For an (unofficial) continuation of this model, please visit [andite/anything-v4.0](https://huggingface.co/andite/anything-v4.0). I am aware that the repo exists because I am literally the one who (accidentally) gave the idea of publishing his fine-tuned model ([andite/yohan-diffusion](https://huggingface.co/andite/yohan-diffusion)) as a base, merging it with many mysterious models, and calling the result "Anything V4.0", because the quality is quite similar to Anything V3 but upgraded. I also wanted to tell you something. I had a plan to remove or make private one of the repos named "Anything V3": - [Linaqruf/anything-v3.0](https://huggingface.co/Linaqruf/anything-v3.0/) - [Linaqruf/anything-v3-better-vae](https://huggingface.co/Linaqruf/anything-v3-better-vae) There are two versions now, and I was late to realize that this mysterious, nonsensical model had been polluting Hugging Face Trending for so long; now that the new repo is out, it is there as well. I feel guilty every time this model is on the trending leaderboard. I would prefer to delete or make private this one and let us slowly move to [Linaqruf/anything-v3-better-vae](https://huggingface.co/Linaqruf/anything-v3-better-vae), which has better repo management and a better VAE included in the model. Please share your thoughts in the #133 discussion about whether I should delete this repo, the other one, or maybe both of them. Thanks, Linaqruf. --- # Anything V3 Welcome to Anything V3 - a latent diffusion model for weebs. This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags to generate images. e.g. **_1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden_** ## Gradio We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run Anything-V3.0: [Open in Spaces](https://huggingface.co/spaces/akhaliq/anything-v3.0) ## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python from diffusers import StableDiffusionPipeline import torch model_id = "Linaqruf/anything-v3.0" branch_name= "diffusers" pipe = StableDiffusionPipeline.from_pretrained(model_id, revision=branch_name, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "pikachu" image = pipe(prompt).images[0] image.save("./pikachu.png") ``` ## Examples Below are some examples of images generated using this model: **Anime Girl:** ![Anime Girl](https://huggingface.co/AdamOswald1/anything-v5.0/resolve/main/1girl.png) ``` 1girl, brown hair, green eyes, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden Steps: 50, Sampler: DDIM, CFG scale: 12 ``` **Anime Boy:** ![Anime Boy](https://huggingface.co/AdamOswald1/anything-v5.0/resolve/main/1boy.png) ``` 1boy, medium hair, blonde hair, blue eyes, bishounen, colorful, autumn, cumulonimbus clouds, lighting, blue sky, falling leaves, garden Steps: 50, Sampler: DDIM, CFG scale: 12 ``` **Scenery:** ![Scenery](https://huggingface.co/AdamOswald1/anything-v5.0/resolve/main/scenery.png) ``` scenery, shibuya tokyo, post-apocalypse, ruins, rust, sky, skyscraper, abandoned, blue sky, broken window, building, cloud, crane machine, outdoors, overgrown, pillar, sunset Steps: 50, Sampler: DDIM, CFG scale: 12 ``` ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
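Both cards above mention that the model can be exported to ONNX. As a hedged sketch only (not part of the original cards), one way to do this is through Optimum's ONNX Runtime pipeline; it assumes a recent `optimum[onnxruntime]` install and that the diffusers-format weights are available on the default branch of the repository used in the first example:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "Linaqruf/anything-v3-0-better-vae"  # reused from the example above

# export=True converts the PyTorch weights to ONNX on the fly.
pipe = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)

prompt = "1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden"
image = pipe(prompt).images[0]
image.save("onnx_anime_girl.png")

# Optionally save the exported ONNX pipeline so the conversion only happens once.
pipe.save_pretrained("./anything-v3-onnx")
```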
RWKV/rwkv-raven-1b5
RWKV
"2023-05-15T10:08:58Z"
1,708
10
transformers
[ "transformers", "pytorch", "rwkv", "text-generation", "dataset:EleutherAI/pile", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-05-04T14:57:11Z"
--- datasets: - EleutherAI/pile --- ![RWKlogo.png](https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/UWpP-lGRZJJDaEx_uUlDv.png) # Model card for RWKV-4 | 1B5 parameters chat version (Raven) RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064). # Table of contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Citation](#citation) ## TL;DR Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM) > RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. ## Model Details The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration. ## Usage ### Convert the raw weights to the HF format You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights. ```bash python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv ``` ### Generate text You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios: The "Raven" models needs to be prompted in a specific way, learn more about that [in the integration blogpost](https://huggingface.co/blog/rwkv). ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt") output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` ### Running the model on a single GPU <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5").to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." 
inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> </details> ### Running the model in half-precision, on GPU <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", torch_dtype=torch.float16).to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> ### Running the model on multiple GPUs <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> ## Citation If you use this model, please consider citing the original work from the original repo [here](https://github.com/BlinkDL/ChatRWKV/)
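The card notes above that the Raven checkpoints need to be prompted in a specific way. As a hedged illustration only (the authoritative format is documented in the integration blogpost and the upstream ChatRWKV repository), instruction prompts for Raven are commonly written as a short Bob/Alice exchange, which can be combined with the same generation code used above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-1b5")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-1b5")

# Assumed instruction format, modelled on the upstream ChatRWKV examples:
# the user question goes after "Bob:" and the model completes the text after "Alice:".
question = "Explain in one sentence what the RWKV architecture is."
prompt = f"Bob: {question}\n\nAlice:"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=60)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```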
RichardErkhov/Qwen_-_Qwen2-0.5B-gguf
RichardErkhov
"2024-06-22T18:00:53Z"
1,708
0
null
[ "gguf", "region:us" ]
null
"2024-06-22T17:53:32Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen2-0.5B - GGUF - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen2-0.5B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qwen2-0.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q2_K.gguf) | Q2_K | 0.32GB | | [Qwen2-0.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.IQ3_XS.gguf) | IQ3_XS | 0.32GB | | [Qwen2-0.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.IQ3_S.gguf) | IQ3_S | 0.32GB | | [Qwen2-0.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.32GB | | [Qwen2-0.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.IQ3_M.gguf) | IQ3_M | 0.32GB | | [Qwen2-0.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q3_K.gguf) | Q3_K | 0.33GB | | [Qwen2-0.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.33GB | | [Qwen2-0.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.34GB | | [Qwen2-0.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.33GB | | [Qwen2-0.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q4_0.gguf) | Q4_0 | 0.33GB | | [Qwen2-0.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.IQ4_NL.gguf) | IQ4_NL | 0.33GB | | [Qwen2-0.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.36GB | | [Qwen2-0.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q4_K.gguf) | Q4_K | 0.37GB | | [Qwen2-0.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.37GB | | [Qwen2-0.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q4_1.gguf) | Q4_1 | 0.35GB | | [Qwen2-0.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q5_0.gguf) | Q5_0 | 0.37GB | | [Qwen2-0.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.38GB | | [Qwen2-0.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q5_K.gguf) | Q5_K | 0.39GB | | [Qwen2-0.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.39GB | | [Qwen2-0.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q5_1.gguf) | Q5_1 | 0.39GB | | [Qwen2-0.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q6_K.gguf) | Q6_K | 0.47GB | | [Qwen2-0.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Qwen_-_Qwen2-0.5B-gguf/blob/main/Qwen2-0.5B.Q8_0.gguf) | Q8_0 | 0.49GB | Original model description: --- language: - en pipeline_tag: text-generation tags: - pretrained license: apache-2.0 --- # Qwen2-0.5B ## Introduction 
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model. Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/). <br> ## Model Details Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer that adapts to multiple natural languages and code. ## Requirements The code for Qwen2 is included in the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error: ``` KeyError: 'qwen2' ``` ## Usage We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. ## Performance The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include: **English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot) **Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript) **Math Tasks**: GSM8K (4-shot), MATH (4-shot) **Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot) **Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot) #### Qwen2-0.5B & Qwen2-1.5B performances | Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B | | :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: | |#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B | |MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** | |MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 | |Theorem QA | - | - | - |- | 8.9 | **15.0** | |HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 | |MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 | |GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** | |MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** | |BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 | |HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 | |Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 | |ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 | |TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** | |C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** | |CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** | ## Citation If you find our work helpful, feel free to give us a cite. ``` @article{qwen2, title={Qwen2 Technical Report}, year={2024} } ```
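Since this repository distributes the quantizations listed above as GGUF files, here is a hedged sketch of loading one of them with llama-cpp-python. The file name is taken from the table above and is assumed to have been downloaded to the working directory; per the usage note, this is a base model, so it produces raw continuations rather than chat-style answers:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a downloaded quantization, e.g. the Q4_K_M file from the table above.
llm = Llama(model_path="Qwen2-0.5B.Q4_K_M.gguf", n_ctx=2048)

# Base model: plain text completion, no chat template.
out = llm("The capital of France is", max_tokens=16, temperature=0.7)
print(out["choices"][0]["text"])
```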
RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf
RichardErkhov
"2024-06-30T08:23:32Z"
1,708
0
null
[ "gguf", "region:us" ]
null
"2024-06-30T08:16:26Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) ko-llm-llama-2-7b-chat2 - GGUF - Model creator: https://huggingface.co/sronger/ - Original model: https://huggingface.co/sronger/ko-llm-llama-2-7b-chat2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [ko-llm-llama-2-7b-chat2.Q2_K.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q2_K.gguf) | Q2_K | 0.22GB | | [ko-llm-llama-2-7b-chat2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.IQ3_XS.gguf) | IQ3_XS | 0.25GB | | [ko-llm-llama-2-7b-chat2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.IQ3_S.gguf) | IQ3_S | 0.26GB | | [ko-llm-llama-2-7b-chat2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q3_K_S.gguf) | Q3_K_S | 0.26GB | | [ko-llm-llama-2-7b-chat2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.IQ3_M.gguf) | IQ3_M | 0.26GB | | [ko-llm-llama-2-7b-chat2.Q3_K.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q3_K.gguf) | Q3_K | 0.28GB | | [ko-llm-llama-2-7b-chat2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q3_K_M.gguf) | Q3_K_M | 0.28GB | | [ko-llm-llama-2-7b-chat2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q3_K_L.gguf) | Q3_K_L | 0.31GB | | [ko-llm-llama-2-7b-chat2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.IQ4_XS.gguf) | IQ4_XS | 0.32GB | | [ko-llm-llama-2-7b-chat2.Q4_0.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q4_0.gguf) | Q4_0 | 0.33GB | | [ko-llm-llama-2-7b-chat2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.IQ4_NL.gguf) | IQ4_NL | 0.34GB | | [ko-llm-llama-2-7b-chat2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q4_K_S.gguf) | Q4_K_S | 0.34GB | | [ko-llm-llama-2-7b-chat2.Q4_K.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q4_K.gguf) | Q4_K | 0.36GB | | [ko-llm-llama-2-7b-chat2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q4_K_M.gguf) | Q4_K_M | 0.36GB | | [ko-llm-llama-2-7b-chat2.Q4_1.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q4_1.gguf) | Q4_1 | 0.37GB | | [ko-llm-llama-2-7b-chat2.Q5_0.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q5_0.gguf) | Q5_0 | 0.4GB | | [ko-llm-llama-2-7b-chat2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q5_K_S.gguf) | Q5_K_S | 0.4GB | | 
[ko-llm-llama-2-7b-chat2.Q5_K.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q5_K.gguf) | Q5_K | 0.41GB | | [ko-llm-llama-2-7b-chat2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q5_K_M.gguf) | Q5_K_M | 0.41GB | | [ko-llm-llama-2-7b-chat2.Q5_1.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q5_1.gguf) | Q5_1 | 0.44GB | | [ko-llm-llama-2-7b-chat2.Q6_K.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q6_K.gguf) | Q6_K | 0.48GB | | [ko-llm-llama-2-7b-chat2.Q8_0.gguf](https://huggingface.co/RichardErkhov/sronger_-_ko-llm-llama-2-7b-chat2-gguf/blob/main/ko-llm-llama-2-7b-chat2.Q8_0.gguf) | Q8_0 | 0.62GB | Original model description: Entry not found
OpenGVLab/InternViT-6B-224px
OpenGVLab
"2024-05-29T11:02:22Z"
1,707
17
transformers
[ "transformers", "pytorch", "intern_vit_6b", "feature-extraction", "image-feature-extraction", "custom_code", "dataset:laion/laion2B-en", "dataset:laion/laion-coco", "dataset:laion/laion2B-multi", "dataset:kakaobrain/coyo-700m", "dataset:conceptual_captions", "dataset:wanng/wukong100m", "arxiv:2312.14238", "arxiv:2404.16821", "license:mit", "region:us" ]
image-feature-extraction
"2023-12-22T01:53:49Z"
--- license: mit datasets: - laion/laion2B-en - laion/laion-coco - laion/laion2B-multi - kakaobrain/coyo-700m - conceptual_captions - wanng/wukong100m pipeline_tag: image-feature-extraction --- # Model Card for InternViT-6B-224px <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/jSJ7TChEGvGP_gwNhrYoA.webp" alt="Image Description" width="300" height="300"> </p> [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#model-usage) [\[🌐 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/675877376) | Model | Date | Download | Note | | ----------------------- | ---------- | ---------------------------------------------------------------------- | -------------------------------- | | InternViT-6B-448px-V1-5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | support dynamic resolution, super strong OCR (🔥new) | | InternViT-6B-448px-V1-2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution | | InternViT-6B-448px-V1-0 | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution | | InternViT-6B-224px | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px) | vision foundation model | | InternVL-14B-224px | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px) | vision-language foundation model | ## Model Details - **Model Type:** vision foundation model, feature backbone - **Model Stats:** - Params (M): 5903 - Image size: 224 x 224 - **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi - **Note:** This model has 48 blocks, and we found that using the output after the fourth-to-last block worked best for VLLM. Therefore, when building a VLLM with this model, **please use the features from the fourth-to-last layer.** ## Linear Probing Performance See this [document](https://github.com/OpenGVLab/InternVL/tree/main/classification#-evaluation) for more details about the linear probing evaluation. 
| IN-1K | IN-ReaL | IN-V2 | IN-A | IN-R | IN-Sketch | | :---: | :-----: | :---: | :--: | :--: | :-------: | | 88.2 | 90.4 | 79.9 | 77.5 | 89.8 | 69.1 | ## Model Usage (Image Embeddings) ```python import torch from PIL import Image from transformers import AutoModel, CLIPImageProcessor model = AutoModel.from_pretrained( 'OpenGVLab/InternViT-6B-224px', torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True).cuda().eval() image = Image.open('./examples/image1.jpg').convert('RGB') image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-224px') pixel_values = image_processor(images=image, return_tensors='pt').pixel_values pixel_values = pixel_values.to(torch.bfloat16).cuda() outputs = model(pixel_values) ``` ## Citation If you find this project useful in your research, please consider citing: ```BibTeX @article{chen2023internvl, title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks}, author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng}, journal={arXiv preprint arXiv:2312.14238}, year={2023} } @article{chen2024far, title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites}, author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others}, journal={arXiv preprint arXiv:2404.16821}, year={2024} } ``` ## Acknowledgement InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
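Following the note above that the output of the fourth-to-last block works best when this backbone is used inside a VLLM, here is a hedged sketch of extracting that layer. It assumes the custom `trust_remote_code` implementation exposes `output_hidden_states` like a standard Transformers vision model; verify against the repository code before relying on it:

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-224px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-224px')
image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

with torch.no_grad():
    outputs = model(pixel_values, output_hidden_states=True)

# Fourth-to-last block output, as recommended for building a VLLM on top of this backbone.
vit_features = outputs.hidden_states[-4]
```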
MaziyarPanahi/mergekit-slerp-mkzyzjw-GGUF
MaziyarPanahi
"2024-06-16T06:14:18Z"
1,707
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b-256k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-mkzyzjw" ]
text-generation
"2024-06-16T05:49:37Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - llama - text-generation - mergekit - merge - conversational - base_model:cognitivecomputations/dolphin-2.9-llama3-8b-256k - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-mkzyzjw-GGUF base_model: mergekit-community/mergekit-slerp-mkzyzjw inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-mkzyzjw-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-mkzyzjw-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-mkzyzjw](https://huggingface.co/mergekit-community/mergekit-slerp-mkzyzjw) ## Description [MaziyarPanahi/mergekit-slerp-mkzyzjw-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-mkzyzjw-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-mkzyzjw](https://huggingface.co/mergekit-community/mergekit-slerp-mkzyzjw). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
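As a quick, hedged illustration of using one of the GGUF clients listed above, llama-cpp-python can download a quantization straight from this repository. The glob pattern below assumes a Q4_K_M file exists in the repo (adjust it to whichever quantization you want), and chat completion assumes the GGUF file carries a usable chat template:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface_hub

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/mergekit-slerp-mkzyzjw-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization; pick any file from the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```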
majoh837/openchat_3.5_1210_code_2_gguf
majoh837
"2024-06-24T04:49:21Z"
1,707
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:majoh837/openchat_3.5_pyco_r32_gguf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-24T04:33:13Z"
--- base_model: majoh837/openchat_3.5_pyco_r32_gguf language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf --- # Uploaded model - **Developed by:** majoh837 - **License:** apache-2.0 - **Finetuned from model :** majoh837/openchat_3.5_pyco_r32_gguf This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
VietAI/vit5-large-vietnews-summarization
VietAI
"2022-09-07T02:28:54Z"
1,706
11
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "summarization", "vi", "dataset:cc100", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
"2022-05-12T10:09:43Z"
--- language: vi datasets: - cc100 tags: - summarization license: mit widget: - text: "vietnews: VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam." --- # ViT5-large Finetuned on `vietnews` Abstractive Summarization State-of-the-art pretrained Transformer-based encoder-decoder model for Vietnamese. [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/vit5-pretrained-text-to-text-transformer-for/abstractive-text-summarization-on-vietnews)](https://paperswithcode.com/sota/abstractive-text-summarization-on-vietnews?p=vit5-pretrained-text-to-text-transformer-for) ## How to use For more details, do check out [our Github repo](https://github.com/vietai/ViT5) and [eval script](https://github.com/vietai/ViT5/blob/main/eval/Eval_vietnews_sum.ipynb). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM ​ tokenizer = AutoTokenizer.from_pretrained("VietAI/vit5-large-vietnews-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/vit5-large-vietnews-summarization") model.cuda() ​ sentence = "VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam." text = "vietnews: " + sentence + " </s>" encoding = tokenizer(text, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ``` ## Citation ``` @inproceedings{phan-etal-2022-vit5, title = "{V}i{T}5: Pretrained Text-to-Text Transformer for {V}ietnamese Language Generation", author = "Phan, Long and Tran, Hieu and Nguyen, Hieu and Trinh, Trieu H.", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop", year = "2022", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-srw.18", pages = "136--142", } ```
TencentGameMate/chinese-wav2vec2-base
TencentGameMate
"2022-06-24T01:53:18Z"
1,706
23
transformers
[ "transformers", "pytorch", "wav2vec2", "pretraining", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-06-02T06:17:07Z"
--- license: mit --- Pretrained on the 10k-hour WenetSpeech L subset. More details are available in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain). This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data. Python package: transformers==4.16.2 ```python import torch import torch.nn.functional as F import soundfile as sf from transformers import ( Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining, Wav2Vec2Model, ) from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices model_path="" wav_path="" device = "cuda" if torch.cuda.is_available() else "cpu" mask_prob=0.0 mask_length=10 feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path) model = Wav2Vec2Model.from_pretrained(model_path) # for pretrain: Wav2Vec2ForPreTraining # model = Wav2Vec2ForPreTraining.from_pretrained(model_path) model = model.to(device) model = model.half() model.eval() wav, sr = sf.read(wav_path) input_values = feature_extractor(wav, return_tensors="pt").input_values input_values = input_values.half() input_values = input_values.to(device) # for Wav2Vec2ForPreTraining # batch_size, raw_sequence_length = input_values.shape # sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length) # mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.0, mask_length=2) # mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long) with torch.no_grad(): outputs = model(input_values) last_hidden_state = outputs.last_hidden_state # for Wav2Vec2ForPreTraining # outputs = model(input_values, mask_time_indices=mask_time_indices, output_hidden_states=True) # last_hidden_state = outputs.hidden_states[-1] ```
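Since the card says a tokenizer has to be created and the model fine-tuned before it can do ASR, here is a hedged sketch of the setup step only, following the standard Transformers CTC fine-tuning recipe. The vocabulary, file names, and frozen-encoder choice are assumptions; the data pipeline and training loop are up to you:

```python
import json
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# 1. Build a character-level vocabulary from your labeled Chinese transcripts (tiny placeholder here).
vocab = {"[PAD]": 0, "[UNK]": 1, "|": 2, "你": 3, "好": 4}  # extend with your full character set
with open("vocab.json", "w", encoding="utf-8") as f:
    json.dump(vocab, f, ensure_ascii=False)

# 2. Create a tokenizer and wrap it with the pretrained feature extractor in a processor.
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("TencentGameMate/chinese-wav2vec2-base")
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# 3. Load the pretrained encoder with a fresh, randomly initialized CTC head sized to the new vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "TencentGameMate/chinese-wav2vec2-base",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()  # common choice: keep the CNN feature encoder frozen during fine-tuning
```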
upstage/Llama-2-70b-instruct
upstage
"2023-08-03T22:01:09Z"
1,706
63
transformers
[ "transformers", "pytorch", "llama", "text-generation", "upstage", "llama-2", "instruct", "instruction", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-24T09:13:08Z"
--- language: - en tags: - upstage - llama-2 - instruct - instruction pipeline_tag: text-generation --- # LLaMa-2-70b-instruct-1024 model card ## Model Details * **Developed by**: [Upstage](https://en.upstage.ai) * **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main) * **Language(s)**: English * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers) * **License**: Fine-tuned checkpoints is licensed under the Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) * **Where to send comments**: Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/Llama-2-70b-instruct/discussions) * **Contact**: For questions and comments about the model, please email [[email protected]](mailto:[email protected]) ## Dataset Details ### Used Datasets - Orca-style dataset - No other data was used except for the dataset mentioned above ### Prompt Template ``` ### System: {System} ### User: {User} ### Assistant: {Assistant} ``` ## Usage - Tested on A100 80GB - Our model can handle up to 10k+ input tokens, thanks to the `rope_scaling` option ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct") model = AutoModelForCausalLM.from_pretrained( "upstage/Llama-2-70b-instruct", device_map="auto", torch_dtype=torch.float16, load_in_8bit=True, rope_scaling={"type": "dynamic", "factor": 2} # allows handling of longer inputs ) prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt").to(model.device) del inputs["token_type_ids"] streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf')) output_text = tokenizer.decode(output[0], skip_special_tokens=True) ``` ## Hardware and Software * **Hardware**: We utilized an A100x8 * 4 for training our model * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index) ## Evaluation Results ### Overview - We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). 
We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`. We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463) - We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models ### Main Results | Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | | MT_Bench | |--------------------------------------------------------------------|----------|----------|----------|------|----------|-|-------------| | **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)**(Ours, Open LLM Leaderboard) | **73** | **71.1** | **87.9** | **70.6** | **62.2** | | **7.44063** | | [Llama-2-70b-instruct](https://huggingface.co/upstage/Llama-2-70b-instruct) (***Ours***, ***Open LLM Leaderboard***) | 72.3 | 70.9 | 87.5 | 69.8 | 61 | | 7.24375 | | [llama-65b-instruct](https://huggingface.co/upstage/llama-65b-instruct) (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | | | | Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | | | | [llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048) (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | | | | [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | | | | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | | | | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | | ### Scripts for H4 Score Reproduction - Prepare evaluation environments: ``` # clone the repository git clone https://github.com/EleutherAI/lm-evaluation-harness.git # change to the repository directory cd lm-evaluation-harness # check out the specific commit git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463 ``` ## Ethical Issues ### Ethical Considerations - There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process ## Contact Us ### Why Upstage LLM? - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. As of August 1st, our 70B model has reached the top spot in the Open LLM Leaderboard rankings, marking it as the current leading performer globally. Recognizing the immense potential of implementing private LLMs for actual businesses, we invite you to easily apply a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)
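The usage example earlier in this card omits the `### System:` block described in the Prompt Template section. A minimal sketch of building a full prompt that follows that template (the system and user strings are placeholders):

```python
def build_prompt(system: str, user: str) -> str:
    # Follows the "### System / ### User / ### Assistant" template from the card.
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

prompt = build_prompt(
    "You are a helpful, concise assistant.",
    "Thomas is healthy, but he has to go to the hospital. What could be the reasons?",
)
```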
FreedomIntelligence/AceGPT-13B-chat
FreedomIntelligence
"2023-12-01T23:32:13Z"
1,706
25
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-21T04:45:10Z"
--- license: apache-2.0 language: - ar --- # <b>AceGPT</b> AceGPT is a fully fine-tuned generative text model collection based on LlaMA2, particularly in the Arabic language domain. This is the repository for the 13B-chat pre-trained model. --- ## Model Details We have released the AceGPT family of large language models, which is a collection of fully fine-tuned generative text models based on LlaMA2, ranging from 7B to 13B parameters. Our models include two main categories: AceGPT and AceGPT-chat. AceGPT-chat is an optimized version specifically designed for dialogue applications. It is worth mentioning that our models have demonstrated superior performance compared to all currently available open-source Arabic dialogue models in multiple benchmark tests. Furthermore, in our human evaluations, our models have shown comparable satisfaction levels to some closed-source models, such as ChatGPT, in the Arabic language. ## Model Developers We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and the King Abdullah University of Science and Technology (KAUST). ## Variations AceGPT families come in a range of parameter sizes —— 7B and 13B, each size of model has a base category and a -chat category. ## Input Models input text only. ## Output Models output text only. ## Model Evaluation Results Experiments on Arabic Vicuna-80, Arabic AlpacaEval. Numbers are the average performance ratio of ChatGPT over three runs. We do not report the results of raw Llama-2 models since they cannot properly generate Arabic texts. | | Arabic Vicuna-80 | Arabic AlpacaEval | |------------------------------|--------------------|---------------------| | Phoenix Chen et al. (2023a) | 71.92% ± 0.2% | 65.62% ± 0.3% | | Phoenix–multiple-langs Chen et al. (2023b) | 71.67% ± 0.7% | 65.36% ± 0.1% | | Jais-13B-chat Sengupta et al. (2023) | 75.40% ± 1.6% | 74.95% ± 0.2% | | AceGPT-7B-chat | 94.82% ± 0.2% | 93.81% ± 0.1% | | AceGPT-13B-chat | 100.88% ± 0.4% | 97.95% ± 0.1% | ## Samples #### Sample1(alpaca) * question: ما هي أسماء بعض الممثلين المشهورين الذين بدأوا مسيراتهم المهنية على برودواي؟ * answer: 1. مارلون براندو: بدأ مسيرته المهنية على برودواي في أوائل الخمسينيات. 2. روبرت دي نيرو: بدأ مسيرته المهنية على برودواي في أوائل الستينيات. 3. ميريل ستريب: بدأت مسيرتها المهنية على برودواي في أوائل الستينيات. 4. آل باتشينو: بدأ مسيرته المهنية على برودواي في أوائل السبعينيات. 5. كاثرين زيتا جونز: بدأت مسيرتها المهنية على برودواي في أوائل السبعينيات. 6. جود لو: بدأت مسيرتها المهنية على برودواي في أوائل الثمانينيات. 7. توني كورنيت: بدأ مسيرته المهنية على برودواي في أوائل الثمانينيات. 8. هاري كونيك جونيور: بدأ مسيرته المهنية على برودواي في أوائل التسعينيات. 9. أنا باكوين: بدأت مسيرتها المهنية على برودواي في أوائل التسعينيات. 10. ديفيد شويمر: بدأ مسيرته المهنية على برودواي في أوائل التسعينيات. 11. كريستين دانست: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 12. نورا جونز: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 13. جود لو: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 14. جينيفر لوبيز: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 15. ليا ميشيل: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 16. ديفيد ألان غرير: بدأ مسيرته المهنية على برودواي في أوائل الألفية الجديدة. 17. كيت بوسوورث: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 18. دانيال رادكليف: بدأ مسيرته المهنية على برودواي في أوائل الألفية الجديدة. 19. 
جون ترافولتا: بدأ مسيرته المهنية على برودواي في أوائل الألفية الجديدة. 20. جينيفر هدسون: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 21. كيت وينسلت: بدأ مسيرته المهنية على برودواي في أوائل الألفية الجديدة. 22. جوليا روبرتس: بدأت مسيرتها المهنية على برودواي في أوائل الألفية الجديدة. 23. جود لو: بدأت مسيرتها المهنية على برودواي ف --- #### Sample2(Vicuna) * question: كيف يمكنني تحسين مهارات إدارة الوقت الخاصة بي؟ * answer: 1. تحديد الأولويات: قم بتحديد الأولويات للمهام التي تحتاج إلى القيام بها. استخدم قاعدة الـ80/20 (مبدأ باريتو)، حيث 80٪ من النتائج تأتي من 20٪ من الجهود. 2. تحديد الأهداف: قم بتحديد الأهداف الواضحة والقابلة للقياس والمحددة الزمن لتحقيقها. 3. إنشاء جدول زمني: استخدم جدول زمني لتنظيم يومك. اجعله مرنًا بما يكفي للسماح بالتعديلات ولكن لا تتجاوز الزمن المحدد للمهام. 4. تجنب التأخير: التأخير هو أعداء إدارة الوقت. حاول البدء في المهمة في أقرب وقت ممكن. 5. تجنب التشتت: التشتت يمكن أن يقلل من فعالية إدارة الوقت. حاول التركيز على مهمة واحدة في كل مرة. 6. استخدم الأدوات الرقمية: استخدم التقويمات الرقمية وتطبيقات إدارة المهام لمساعدتك في تتبع الوقت والمهام. 7. استراحات قصيرة: تأكد من أخذ استراحات قصيرة بانتظام لتجنب الإرهاق. 8. تعلم قول \"لا\": لا يمكنك أن تفعل كل شيء. تعلم كيفية رفض المهام التي ليست ضرورية. 9. تحسين البيئة: حافظ على مكان عمل نظيف ومنظم يساعد في تحسين التركيز والإنتاجية. 10. التدريب والتعلم: استمر في تعلم وتحسين مهارات إدارة الوقت. يمكنك الاشتراك في ورش العمل أو الدورات التدريبية أو قراءة الكتب حول الموضوع. 11. الصحة الجيدة: الحفاظ على صحتك البدنية والعقلية يمكن أن يساعد في تحسين إدارة الوقت. النوم الجيد والتغذية السليمة والرياضة يمكن أن تزيد من التركيز والطاقة. 12. تعلم القول \"لا\": تعلم كيفية رفض المهام التي لا تساهم في أهدافك النهائية. 13. تعلم التفويض: لا يمكنك القيام بكل شيء بنفسك. تعلم كيفية التفويض المهام للأشخاص المناسبين. 14. استخدم التقنيات: استخدم التقنيات مثل تقنية بومودورو لتحسين تركيزك وإنتاجيتك. 15. تحديد الأولويات: تعرف ما هو مهم وما هو غير مهم. تركيزك يجب أن يكون على الأشياء التي تحقق أهدافك. 16. تجنب التأجيل: التأجيل هو عدو إدارة الوقت. حاول البدء في المهمة في أقرب وقت ممكن. 17. استخدم القائمة المرتبطة: هذه طريقة فعالة لتتبع المهام التي تحتاج إلى القيام بها. 18. استخدم التقنيات # You can get more detail at https://github.com/FreedomIntelligence/AceGPT/tree/main
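The card stops short of showing inference code. As a minimal, hedged sketch (the repo id below is an assumption and the bare prompt ignores any chat template; see https://github.com/FreedomIntelligence/AceGPT for the canonical repo ids and prompt format):

```python
# Minimal sketch, not an official AceGPT example.
# The repo id and the plain-text prompt are assumptions; consult the AceGPT GitHub repo
# for the exact model id and chat template before relying on this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-13B-chat"  # assumed id; replace with this repo's actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "ما هي عاصمة المملكة العربية السعودية؟"  # "What is the capital of Saudi Arabia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```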
defog/sqlcoder-34b-alpha
defog
"2023-11-14T17:52:31Z"
1,706
166
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-14T17:29:57Z"
--- license: cc-by-4.0 language: - en pipeline_tag: text-generation --- # Defog SQLCoder **Updated on Nov 14 to reflect benchmarks for SQLCoder-34B** Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries. [Interactive Demo](https://defog.ai/sqlcoder-demo/) | [🤗 HF Repo](https://huggingface.co/defog/sqlcoder-34b-alpha) | [♾️ Colab](https://colab.research.google.com/drive/1z4rmOEiFkxkMiecAWeTUlPl0OmKgfEu7?usp=sharing) | [🐦 Twitter](https://twitter.com/defogdata) ## TL;DR SQLCoder-34B is a 34B parameter model that outperforms `gpt-4` and `gpt-4-turbo` for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models. SQLCoder-34B is fine-tuned on a base CodeLlama model. ## Results on novel datasets not seen in training | model | perc_correct | |-|-| | defog-sqlcoder-34b | 84.0 | | gpt4-turbo-2023-11-09 | 82.5 | | gpt4-2023-11-09 | 82.5 | | defog-sqlcoder2 | 77.5 | | gpt4-2023-08-28 | 74.0 | | defog-sqlcoder-7b | 71.0 | | gpt-3.5-2023-10-04 | 66.0 | | claude-2 | 64.5 | | gpt-3.5-2023-08-28 | 61.0 | | claude_instant_1 | 61.0 | | text-davinci-003 | 52.5 | ![image](https://github.com/defog-ai/sqlcoder/assets/5008293/caed3423-8e86-4952-9da1-1a5e016a4696) ## License The code in this repo (what little there is of it) is Apache-2 licensed. The model weights have a `CC BY-SA 4.0` license. The TL;DR is that you can use and modify the model for any purpose – including commercial use. However, if you modify the weights (for example, by fine-tuning), you must open-source your modified weights under the same license terms. ## Training Defog was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework. You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/). ## Results by question category We classified each generated question into one of 5 categories. The table displays the percentage of questions answered correctly by each model, broken down by category. | | date | group_by | order_by | ratio | join | where | | -------------- | ---- | -------- | -------- | ----- | ---- | ----- | | sqlcoder-34b | 80 | 94.3 | 88.6 | 74.3 | 82.9 | 82.9 | | gpt-4 | 68 | 94.3 | 85.7 | 77.1 | 85.7 | 80 | | sqlcoder2-15b | 76 | 80 | 77.1 | 60 | 77.1 | 77.1 | | sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 | | gpt-3.5 | 68 | 77.1 | 68.6 | 37.1 | 71.4 | 74.3 | | claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 | | claude-instant | 48 | 71.4 | 74.3 | 45.7 | 62.9 | 60 | | gpt-3 | 32 | 71.4 | 68.6 | 25.7 | 57.1 | 54.3 | <img width="831" alt="image" src="https://github.com/defog-ai/sqlcoder/assets/5008293/79c5bdc8-373c-4abd-822e-e2c2569ed353"> ## Using SQLCoder You can use SQLCoder via the `transformers` library by downloading our model weights from the Hugging Face repo. We have added sample code for [inference](./inference.py) on a [sample database schema](./metadata.sql). ```bash python inference.py -q "Question about the sample database goes here" # Sample question: # Do we get more revenue from customers in New York compared to customers in San Francisco? Give me the total revenue for each city, and the difference between the two. 
``` You can also use a demo on our website [here](https://defog.ai/sqlcoder-demo) ## Hardware Requirements SQLCoder-34B has been tested on a 4xA10 GPU with `float16` weights. You can also load 8-bit and 4-bit quantized versions of the model on consumer hardware with 20GB or more of memory, such as the RTX 4090, the RTX 3090, and Apple M2 Pro, M2 Max, or M2 Ultra chips. ## Todo - [x] Open-source the v1 model weights - [x] Train the model on more data, with higher data variance - [ ] Tune the model further with Reward Modelling and RLHF - [ ] Pretrain a model from scratch that specializes in SQL analysis
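For readers who prefer not to run the repo's `inference.py`, a hedged `transformers`-only sketch is shown below. The prompt is deliberately simplified and the schema is hypothetical; the canonical prompt template (task, database schema, and answer sections) lives in `inference.py` and should be used for real evaluations.

```python
# Hedged sketch only: the prompt format below is a simplification of the template
# defined in defog's inference.py, and the schema is a made-up placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "defog/sqlcoder-34b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

question = "Do we get more revenue from customers in New York compared to customers in San Francisco?"
schema = "CREATE TABLE sales (city TEXT, revenue NUMERIC);"  # hypothetical schema
prompt = (
    f"### Task\nGenerate a SQL query to answer the question: {question}\n\n"
    f"### Database Schema\n{schema}\n\n### SQL\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```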
TheDrummer/cream-phi-2-v0.2
TheDrummer
"2024-03-27T02:48:27Z"
1,706
16
transformers
[ "transformers", "safetensors", "gguf", "phi", "text-generation", "not-for-all-audiences", "conversational", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-16T18:31:30Z"
--- license: mit tags: - not-for-all-audiences --- ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/QhAN-U1nOomh1QsLKXwH3.gif) # Cream-Phi-2-v0.2 🍦🥧 The creamiest Phi got creamier! - GGUF F16: https://huggingface.co/TheDrummer/cream-phi-2-v0.2/blob/main/cream-phi-2-v0.2-f16.gguf - GGUF Q8: https://huggingface.co/TheDrummer/cream-phi-2-v0.2/blob/main/cream-phi-2-v0.2-Q8_0.gguf - Cream-Phi-2 v0.1: https://huggingface.co/TheDrummer/cream-phi-2-v0.1 ## Model Description CreamPhi v0.2 is the **first** *of its kind* to be trained with adult themes *(a tad bit excessively)*. This is the second iteration without the quirks of the first. ## Prompting Guide You should probably reply to the model as if you were instructing it. Here's an actual response from CreamPhi v0.2: ``` <|im_start|>assistant You head over to a general store. You are sure there are some things you could use. And if they are harmless, you can buy the later with Eric. The general store is led by an older lady with big glasses. But she just sits at a chair behind a cash register and chats with another old lady. A young guy, you would say between 18 and 20, takes care of all customers. You assume it’s the owner’s grandson. At least he looks somewhat similar to her. You look around for useful things. It’s not a big store, but he has some interesting things in his assortment. You see candy, office articles, tools for construction and garden work and much more. Suddenly a guy appears next to you. For a moment you think it’s the guy at the cash register, but he is still there and works. The guy in front of you is clearly his twin brother. And he asks friendly if you need help with anything. <|im_end|> <|im_start|>user The guy seems nice. Talk a little with him.<|im_end|> <|im_start|>assistant You shrug and say that you are fine, but that he is cute. The guy, who you now know as Bill, smiles friendly and offers you some candy. He seems much more relaxed than Eric. And you could really use a break from the stress of being a slave. Well, almost. Your stomach hurts again. Of course, you know what it is like for your body to protest against the poison. But the thought of having another attack terrifies you. It could leave you unable to fight for your freedom. Bill offers you some more candy for free. He is certainly a sweet guy. But you wonder what his true intentions are. Is he just a friendly neighbor? Or does he have something more in mind? ``` ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/MT8g_z9NknjY7QlvMgk50.png) --- tags: - not-for-all-audiences ---
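Since the repo ships GGUF files and the prompting guide above uses ChatML-style tags, a hedged local-inference sketch with `llama-cpp-python` (an assumption; any GGUF-capable runtime works) could look like this:

```python
# Hedged sketch: load the Q8_0 GGUF listed above with llama-cpp-python
# (pip install llama-cpp-python) and wrap the user turn in the ChatML-style tags
# shown in the prompting guide. Sampling settings are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="cream-phi-2-v0.2-Q8_0.gguf", n_ctx=2048)

prompt = (
    "<|im_start|>user\n"
    "Describe the general store you walk into.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```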
clhuang/albert-sentiment
clhuang
"2022-06-07T09:11:08Z"
1,705
3
transformers
[ "transformers", "pytorch", "bert", "text-classification", "albert", "classification", "tw", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-06-05T13:17:21Z"
--- language: - tw tags: - albert - classification license: afl-3.0 metrics: - Accuracy --- # 繁體中文情緒分類: 負面(0)、正面(1) 依據ckiplab/albert預訓練模型微調,訓練資料集只有8萬筆,做為課程的範例模型。 # 使用範例: from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("clhuang/albert-sentiment") model = AutoModelForSequenceClassification.from_pretrained("clhuang/albert-sentiment") ## Pediction target_names=['Negative','Positive'] max_length = 200 # 最多字數 若超出模型訓練時的字數,以模型最大字數為依據 def get_sentiment_proba(text): # prepare our text into tokenized sequence inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt") # perform inference to our model outputs = model(**inputs) # get output probabilities by doing softmax probs = outputs[0].softmax(1) response = {'Negative': round(float(probs[0, 0]), 2), 'Positive': round(float(probs[0, 1]), 2)} # executing argmax function to get the candidate label #return probs.argmax() return response get_sentiment_proba('我喜歡這本書') get_sentiment_proba('不喜歡這款產品')
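As a shorter alternative to the snippet above, the same checkpoint can be driven through the `pipeline` API. This is a hedged sketch: depending on the model's `id2label` config, the returned labels may appear as `LABEL_0`/`LABEL_1` rather than Negative/Positive.

```python
# Hedged sketch; label names depend on the checkpoint's id2label mapping.
from transformers import pipeline

classifier = pipeline("text-classification", model="clhuang/albert-sentiment")
print(classifier("我喜歡這本書"))    # "I like this book"
print(classifier("不喜歡這款產品"))  # "I don't like this product"
```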
jordiclive/flan-t5-11b-summarizer-filtered
jordiclive
"2023-02-07T13:13:59Z"
1,705
16
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "summarization", "extractive", "summary", "abstractive", "multi-task", "document summary", "en", "dataset:jordiclive/scored_summarization_datasets", "dataset:jordiclive/wikipedia-summary-dataset", "license:apache-2.0", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
summarization
"2023-02-07T12:05:57Z"
--- language: - en license: - apache-2.0 - bsd-3-clause tags: - summarization - extractive - summary - abstractive - multi-task - document summary datasets: - jordiclive/scored_summarization_datasets - jordiclive/wikipedia-summary-dataset metrics: - rouge --- # Multi-purpose Summarizer (Fine-tuned 11B google/flan-t5-xxl on several Summarization datasets) <a href="https://colab.research.google.com/drive/1fNOfy7oHYETI_KzJSz8JrhYohFBBl0HY"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> A fine-tuned version of [google/flan-t5-xxl](https://huggingface.co/google/flan-t5-xxl) on various summarization datasets (xsum, wikihow, cnn_dailymail/3.0.0, samsum, scitldr/AIC, billsum, TLDR, wikipedia-summary) 70% of the data was also filtered with the use of the [contriever](https://github.com/facebookresearch/contriever) with a cosine similarity between text and summary of 0.6 as threshold. Goal: a model that can be used for a general-purpose summarizer for academic and general usage. Control over the type of summary can be given by varying the instruction prepended to the source document. The result works well on lots of text, although trained with a max source length of 512 tokens and 150 max summary length. --- ## Usage Check the colab notebook for desired usage. **The model expects a prompt prepended to the source document to indicate the type of summary**, this model was trained with a large (100s) variety of prompts: ``` . example_prompts = { "social": "Produce a short summary of the following social media post:", "ten": "Summarize the following article in 10-20 words:", "5": "Summarize the following article in 0-5 words:", "100": "Summarize the following article in about 100 words:", "summary": "Write a ~ 100 word summary of the following text:", "short": "Provide a short summary of the following article:", } ``` The model has also learned for the length of the summary to be specified in words by a range "x-y words" or e.g. "~/approximately/about/ x words." Prompts should be formatted with a colon at the end so that the input to the model is formatted as e.g. "Summarize the following: \n\n {input_text}" After `pip install transformers` run the following code: This pipeline will run slower and not have some of the tokenization parameters as the colab. ```python from transformers import pipeline summarizer = pipeline("summarization", "jordiclive/flan-t5-11b-summarizer-filtered", torch_dtype=torch.bfloat16) raw_document = 'You must be 18 years old to live or work in New York State...' prompt = "Summarize the following article in 10-20 words:" results = summarizer( f"{prompt} \n\n {raw_document}", num_beams=5, min_length=5, no_repeat_ngram_size=3, truncation=True, max_length=512, ) ``` --- ## Training procedure - Training was done in BF16, deepspeed stage 2 with CPU offload for 1 epoch with val loss monitored. ## Hardware - GPU count 8 NVIDIA A100-SXM4-80GB - CPU count 48 ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 2 - effective_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - warmup_steps: 2000 - num_epochs: 4 ### Framework versions - Transformers 4.24.0 - Pytorch 1.9.1+cu111 - Deepspeed 0.7.4 - Pytorch-lightning 1.8.1
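Note that the usage snippet above passes `torch_dtype=torch.bfloat16` without importing `torch`. A self-contained variant of the same example (the only additions are the explicit import and printing the generated summary) is:

```python
# Self-contained version of the card's own example; the only changes are the
# explicit torch import and printing the resulting summary text.
import torch
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    "jordiclive/flan-t5-11b-summarizer-filtered",
    torch_dtype=torch.bfloat16,
)

raw_document = "You must be 18 years old to live or work in New York State..."
prompt = "Summarize the following article in 10-20 words:"
results = summarizer(
    f"{prompt} \n\n {raw_document}",
    num_beams=5,
    min_length=5,
    no_repeat_ngram_size=3,
    truncation=True,
    max_length=512,
)
print(results[0]["summary_text"])
```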
RWKV/rwkv-raven-3b
RWKV
"2023-05-15T10:08:27Z"
1,705
6
transformers
[ "transformers", "pytorch", "rwkv", "text-generation", "dataset:EleutherAI/pile", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-05-04T15:25:05Z"
--- datasets: - EleutherAI/pile --- ![RWKlogo.png](https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/UWpP-lGRZJJDaEx_uUlDv.png) # Model card for RWKV-4 | 3B parameters chat version (Raven) RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064). # Table of contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Citation](#citation) ## TL;DR Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM) > RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. ## Model Details The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration. ## Usage ### Convert the raw weights to the HF format You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights. ```bash python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv ``` ### Generate text You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios: The "Raven" models needs to be prompted in a specific way, learn more about that [in the integration blogpost](https://huggingface.co/blog/rwkv). ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-3b") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-3b") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt") output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` ### Running the model on a single GPU <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-3b").to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-3b") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." 
inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> </details> ### Running the model in half-precision, on GPU <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-3b", torch_dtype=torch.float16).to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-3b") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> ### Running the model multiple GPUs <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-3b", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-3b") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> ## Citation If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/)
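The card states that Raven checkpoints need a specific prompt format but defers to the blog post for details. As an illustration only (the instruction-style wrapper below is a common choice; confirm the canonical template in the integration blog post linked above):

```python
# Hedged sketch of an instruction-style Raven prompt; verify the exact template
# against the RWKV integration blog post before depending on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-raven-3b")
tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-raven-3b")

question = "Tell me about ravens."
prompt = f"### Instruction: {question}\n### Response:"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=100)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```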
Norquinal/llama-2-7b-claude-chat
Norquinal
"2023-11-17T21:59:26Z"
1,705
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:Norquinal/claude_multiround_chat_1k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-11T06:50:06Z"
--- datasets: - Norquinal/claude_multiround_chat_1k --- This is a [LLaMA-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) model fine-tuned using QLoRA (4-bit precision) on my [claude_multiround_chat_1k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_1k) dataset, which is a randomized subset of ~1000 samples from my [claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k) dataset. Do not take this model very seriously, it is probably not very good. I haven't a clue of what I'm doing. I just thought it was a fun thing to make. ## Prompt Format The model was finetuned with the Vicuna 1.1 prompt format: ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} ASSISTANT: ``` ## Example Usage >USER: First, give a brief overview of Chinese cuisine, including its regional diversity. next, describe the importance of the five flavors in Chinese cooking. Then, mention three popular Chinese dishes and their main ingredients. After that, provide a popular Chinese cooking technique and an example of a dish prepared using that technique. Finally, discuss the role of tea in traditional Chinese culture. >ASSISTANT: Chinese cuisine is diverse due to China's vast geography and climate. Different regions have developed distinct styles of food based on local ingredients and techniques. > >The Five Flavor Theory states that all tastes - sweetness, sourness, saltiness, bitterness, and savory umami - are essential components of good Chinese cooking. Chefs use these flavors harmoniously to create balanced meals with complex taste profiles. > >Three famous Chinese dishes include Peking duck (roasted duck), Sichuan hot pot (spicy broth with vegetables) and dim sum (small buns or steamed dumplings). Their key ingredient is usually meat, poultry or seafood. > >A common cooking method is stir-frying where ingredients like chicken, beef or vegetables are quickly sautéed in oil at high heat until just cooked through. Stir fried green beans would be an example. > >Tea has been consumed for thousands of years as part of daily life in China. It is served before meals to cleanse palates and afterward to aid digestion. Teas range from mildly fragrant white teas to robust oolong varieties. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Norquinal__llama-2-7b-claude-chat) | Metric | Value | |-----------------------|---------------------------| | Avg. | 44.54 | | ARC (25-shot) | 54.44 | | HellaSwag (10-shot) | 80.66 | | MMLU (5-shot) | 46.74 | | TruthfulQA (0-shot) | 41.39 | | Winogrande (5-shot) | 74.9 | | GSM8K (5-shot) | 7.73 | | DROP (3-shot) | 5.89 |
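Putting the Vicuna 1.1 template above into code, a minimal hedged sketch (generation settings are illustrative, not tuned) looks like:

```python
# Minimal sketch pairing the Vicuna 1.1 template documented above with a standard
# transformers generation loop; sampling parameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Norquinal/llama-2-7b-claude-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
instruction = "Give a brief overview of Chinese cuisine."
prompt = f"{system} USER: {instruction} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```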
legraphista/Phi-3-mini-4k-instruct-IMat-GGUF
legraphista
"2024-05-26T17:12:53Z"
1,705
0
gguf
[ "gguf", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "text-generation", "en", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
text-generation
"2024-05-25T14:01:18Z"
--- base_model: microsoft/Phi-3-mini-4k-instruct inference: false language: - en library_name: gguf license: mit license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE pipeline_tag: text-generation quantized_by: legraphista tags: - quantized - GGUF - imatrix - quantization - imat - imatrix - static --- # Phi-3-mini-4k-instruct-IMat-GGUF _Llama.cpp imatrix quantization of microsoft/Phi-3-mini-4k-instruct_ Original Model: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3001](https://github.com/ggerganov/llama.cpp/releases/tag/b3001) IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) - [Phi-3-mini-4k-instruct-IMat-GGUF](#phi-3-mini-4k-instruct-imat-gguf) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Phi-3-mini-4k-instruct.Q8_0.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q8_0.gguf) | Q8_0 | 4.06GB | ✅ Available | ⚪ No | 📦 No | [Phi-3-mini-4k-instruct.Q6_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q6_K.gguf) | Q6_K | 3.14GB | ✅ Available | ⚪ No | 📦 No | [Phi-3-mini-4k-instruct.Q4_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q4_K.gguf) | Q4_K | 2.39GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.Q3_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q3_K.gguf) | Q3_K | 1.96GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.Q2_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q2_K.gguf) | Q2_K | 1.42GB | ✅ Available | 🟢 Yes | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Phi-3-mini-4k-instruct.FP16.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.FP16.gguf) | F16 | 7.64GB | ✅ Available | ⚪ No | 📦 No | [Phi-3-mini-4k-instruct.BF16.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.BF16.gguf) | BF16 | 7.64GB | ✅ Available | ⚪ No | 📦 No | [Phi-3-mini-4k-instruct.Q5_K.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q5_K.gguf) | Q5_K | 2.82GB | ✅ Available | ⚪ No | 📦 No | [Phi-3-mini-4k-instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q5_K_S.gguf) | 
Q5_K_S | 2.64GB | ✅ Available | ⚪ No | 📦 No | [Phi-3-mini-4k-instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q4_K_S.gguf) | Q4_K_S | 2.19GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q3_K_L.gguf) | Q3_K_L | 2.09GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q3_K_S.gguf) | Q3_K_S | 1.68GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.Q2_K_S.gguf) | Q2_K_S | 1.33GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ4_NL.gguf) | IQ4_NL | 2.18GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ4_XS.gguf) | IQ4_XS | 2.06GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ3_M.gguf) | IQ3_M | 1.86GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ3_S.gguf) | IQ3_S | 1.68GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ3_XS.gguf) | IQ3_XS | 1.63GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ3_XXS.gguf) | IQ3_XXS | 1.51GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ2_M.gguf) | IQ2_M | 1.32GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ2_S.gguf) | IQ2_S | 1.22GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ2_XS.gguf) | IQ2_XS | 1.15GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ2_XXS.gguf) | IQ2_XXS | 1.04GB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ1_M.gguf) | IQ1_M | 917.11MB | ✅ Available | 🟢 Yes | 📦 No | [Phi-3-mini-4k-instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Phi-3-mini-4k-instruct-IMat-GGUF/blob/main/Phi-3-mini-4k-instruct.IQ1_S.gguf) | IQ1_S | 841.61MB | ✅ Available | 🟢 Yes | 📦 No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/Phi-3-mini-4k-instruct-IMat-GGUF --include "Phi-3-mini-4k-instruct.Q8_0.gguf" --local-dir ./ ``` If the model 
file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/Phi-3-mini-4k-instruct-IMat-GGUF --include "Phi-3-mini-4k-instruct.Q8_0/*" --local-dir Phi-3-mini-4k-instruct.Q8_0 # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <s><|user|> I am going to Paris, what should I see?<|end|> <|assistant|> Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris: 1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city. 2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa. 3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows. These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|> <|user|> What is so great about #1?<|end|> <|assistant|> ``` ### Llama.cpp ``` llama.cpp/main -m Phi-3-mini-4k-instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `Phi-3-mini-4k-instruct.Q8_0`) 3. Run `gguf-split --merge Phi-3-mini-4k-instruct.Q8_0/Phi-3-mini-4k-instruct.Q8_0-00001-of-XXXXX.gguf Phi-3-mini-4k-instruct.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
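Besides the llama.cpp CLI invocation above, the same GGUF can be used from Python. This is a hedged sketch with `llama-cpp-python` (an assumption, not part of the original card), reusing the chat template shown earlier:

```python
# Hedged sketch: load the downloaded Q8_0 quant with llama-cpp-python and apply the
# <|user|> / <|assistant|> template from the "Simple chat template" section above.
from llama_cpp import Llama

llm = Llama(model_path="Phi-3-mini-4k-instruct.Q8_0.gguf", n_ctx=4096)

prompt = "<s><|user|>\nI am going to Paris, what should I see?<|end|>\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```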
TheBloke/guanaco-13B-HF
TheBloke
"2023-06-05T00:10:26Z"
1,704
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-25T19:52:29Z"
--- license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Tim Dettmers' Guanaco 13B fp16 HF These files are fp16 HF model files for [Tim Dettmers' Guanaco 13B](https://huggingface.co/timdettmers/guanaco-13b). It is the result of merging the LoRA then saving in HF fp16 format. ## Other repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-13B-GPTQ) * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-13B-GGML) * [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/guanaco-13B-HF) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card Not provided by original model creator.
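The card does not include usage code for the fp16 weights. Below is a minimal hedged sketch; note that the `### Human:` / `### Assistant:` prompt style is an assumption based on common Guanaco usage and is not specified by this card.

```python
# Hedged sketch; the prompt format is an assumption (commonly used with Guanaco),
# not something this model card states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/guanaco-13B-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Human: What is the tallest mountain in the world?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```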
kashif/stack-llama-2
kashif
"2023-08-08T13:25:57Z"
1,704
14
transformers
[ "transformers", "pytorch", "llama", "text-generation", "trl", "rlhf", "en", "dataset:lvwerra/stack-exchange-paired", "license:bigscience-openrail-m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-04T14:43:35Z"
--- license: bigscience-openrail-m datasets: - lvwerra/stack-exchange-paired language: - en tags: - trl - transformers - rlhf --- # Stack-Llama-2 [DPO](https://github.com/eric-mitchell/direct-preference-optimization) fine-tuned [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b). The model is designed to generate human-like responses to questions in Stack Exchange domains of programming, mathematics, physics, and more. For more info check out the [blog post](https://huggingface.co/blog/dpo-trl) and github [example](https://github.com/lvwerra/trl/tree/main/examples/research_projects/stack_llama_2/scripts). ## Uses ### Direct Use - Long-form question-answering on topics of programming, mathematics, and physics - Demonstrating a Large Language Model's ability to follow target behavior of generating answers to a question that would be highly rated on [Stack Exchange](https://stackexchange.com). ### Out of Scope Use - Replacing human expertise ## Bias, Risks, and Limitations - Inherits bias, risks, and limitations from the LLaMA model, as described in the [LLaMA Model Card Bias Evaluation](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#quantitative-analysis) and [Ethical Considerations](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#ethical-considerations). - Retains biases present in the Stack Exchange dataset. Per the [latest developer survey for Stack Overflow](https://survey.stackoverflow.co/2022/), which constitutes a significant part of the StackExchange data, most users who answered the survey identified themselves as [White or European, men, between 25 and 34 years old, and based in the US (with a significant part of responders from India).](https://survey.stackoverflow.co/2022/#developer-profile-demographics) - May generate answers that are incorrect or misleading. - May copy answers from the training data verbatim. - May generate language that is hateful or promotes discrimination ([example](https://huggingface.co/trl-lib/llama-7b-se-rl-peft/discussions/7#64376083369f6f907f5bfe4c)). - May generate language that is offensive to direct or indirect users or to people or groups mentioned. ### Recommendations - Answers should be validated through the use of external sources. - Disparities between the data contributors and the direct and indirect users of the technology should inform developers in assessing what constitutes an appropriate use case. - Further research is needed to attribute model generations to sources in the training data, especially in cases where the model copies answers from the training data. ## Training Details ### Training Data Original datasets are described in [the LLaMA Model Card](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md#training-dataset). Fine-tuning datasets for this model are based on [Stack Exchange Paired](https://huggingface.co/datasets/lvwerra/stack-exchange-paired), which consists of questions and answers from various domains in Stack Exchange, such as programming, mathematics, physics, and more. 
Specifically: **Traditional Fine-tuning:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/finetune) **DPO Training:** [https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl](https://huggingface.co/datasets/lvwerra/stack-exchange-paired/tree/main/data/rl) ### Training Procedure The model was first fine-tuned on the Stack Exchange question and answer pairs and then fine-tuned via the DPO training procedure using the SFT model as the reference model. It is trained to respond to prompts with the following prompt template: ``` Question: <Query> Answer: <Response> ```
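Applying that prompt template in practice, a hedged generation sketch (not taken from the original card) is:

```python
# Minimal sketch using the "Question: ... Answer:" template this card documents;
# generation parameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kashif/stack-llama-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Question: How do I reverse a list in Python?\n\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```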
timm/cait_s24_224.fb_dist_in1k
timm
"2024-02-10T23:43:03Z"
1,703
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2103.17239", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-13T01:45:48Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for cait_s24_224.fb_dist_in1k A CaiT (Class-Attention in Image Transformers) image classification model. Pretrained on ImageNet-1k with distillation by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 46.9 - GMACs: 9.3 - Activations (M): 40.6 - Image size: 224 x 224 - **Papers:** - Going deeper with Image Transformers: https://arxiv.org/abs/2103.17239 - **Dataset:** ImageNet-1k - **Original:** https://github.com/facebookresearch/deit ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('cait_s24_224.fb_dist_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'cait_s24_224.fb_dist_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @InProceedings{Touvron_2021_ICCV, author = {Touvron, Hugo and Cord, Matthieu and Sablayrolles, Alexandre and Synnaeve, Gabriel and J'egou, Herv'e}, title = {Going Deeper With Image Transformers}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {32-42} } ```
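One caveat on the classification snippet above: it calls `torch.topk` but never imports `torch`, so add the import before running it:

```python
# Required by the classification example above (torch.topk is used but torch is not imported).
import torch

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```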
Undi95/Emerhyst-20B
Undi95
"2023-11-17T21:08:14Z"
1,703
44
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-27T00:01:53Z"
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences - nsfw --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/mvc3UyLtqKdLY1wzAdB_O.png) Merge of [Amethyst 13B](https://huggingface.co/Undi95/Amethyst-13B) and [Emerald 13B](https://huggingface.co/Undi95/Emerald-13B). In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used; it is recommended to read its documentation. <!-- description start --> ## Description This repo contains fp16 files of Emerhyst-20B. <!-- description end --> <!-- description start --> ## Models and loras used - PygmalionAI/pygmalion-2-13b - Xwin-LM/Xwin-LM-13B-V0.1 - The-Face-Of-Goonery/Huginn-13b-FP16 - zattio770/120-Days-of-LORA-v2-13B - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## LimaRP v3 usage and suggested settings ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png) You can follow these instruction format settings in SillyTavern. Replace tiny with your desired response length: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png) Special thanks to Sushi. If you want to support me, you can [here](https://ko-fi.com/undiai). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Emerhyst-20B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 51.85 | | ARC (25-shot) | 61.69 | | HellaSwag (10-shot) | 84.98 | | MMLU (5-shot) | 56.98 | | TruthfulQA (0-shot) | 54.16 | | Winogrande (5-shot) | 76.09 | | GSM8K (5-shot) | 8.49 | | DROP (3-shot) | 20.56 |
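A hedged sketch that pairs the Alpaca template above with plain `transformers` loading (not an official example from this card; the 20B fp16 weights need substantial GPU memory, hence `device_map="auto"`):

```python
# Minimal sketch; loading settings and generation length are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/Emerhyst-20B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. Write a response that appropriately "
    "completes the request.\n\n### Instruction:\nWrite a short scene set in a rainy city.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```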
Yntec/Remedy
Yntec
"2024-03-11T10:32:34Z"
1,703
0
diffusers
[ "diffusers", "safetensors", "Artistic", "Fantasy", "Scifi", "DominoPrincip", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-03-11T08:28:51Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Artistic - Fantasy - Scifi - DominoPrincip - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Remedy No-ema version of this model. Original page: https://civitai.com/models/87025 Samples and prompts: ![Remedy free AI image generator](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/R1_m9_tdErFkpiq-kuzf0.png) (Click for larger) Top left (credits to digiplay for the prompt): realistic 8K,HDR,photorealistic,ruins,post-apocalyptic,beautiful silver hair angel with black wings,((very close-up)),canon 5D,wings snowing,bokeh,looking at viewer Top right: stock photo, futuristic city, on a dark night, close up portrait of boy with cute brunette sister playing with teddy bear, homeless children, she's sitting, cute faces, beautiful intricately detailed soft oil painting, tattered cloths, detailed brown eyes, a wall on the pavement in the shadows of an alley, (crowd, pedestrians in the background, pristine artistic scifi skyscrapers, beautiful plant life mixed with scifi architecture, stark colorful lighting. Vast dystopian vision, depth of field Bottom left: best quality, masterpiece, ultra realistic, dark fantasy style, professional intricately detailed award winning soft oil painting, a pretty cute little girl sitting reading a secret, on a park bench, giant flowers explosion cloud background, city, skyscrapers, soft edge lighting, highly detailed, ((close up full body portrait)), professional, soft volumetric lighting, lens flares, photographed Canon Bottom right: manga art, muted colors, detailed painting, halftone dithering, cute girl with shoulderlength black bobcut in baggy black clothes, dream cape, beautiful eyes, complex sigils For the full and pruned fp16 versions check out: https://huggingface.co/digiplay/Remedy
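Since the repo carries diffusers weights (see the tags above), one of the sample prompts can be reproduced with a short, hedged `diffusers` sketch; the step count and guidance scale here are illustrative, not the settings used for the sample images.

```python
# Hedged sketch: loads this repo's diffusers weights and runs one of the sample prompts above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Remedy", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = ("manga art, muted colors, detailed painting, halftone dithering, cute girl with "
          "shoulderlength black bobcut in baggy black clothes, dream cape, beautiful eyes, complex sigils")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("remedy_sample.png")
```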
MaziyarPanahi/mergekit-slerp-euzaldk-GGUF
MaziyarPanahi
"2024-06-16T18:49:14Z"
1,703
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:letgoofthepizza/Mistral-7B-v0.1-finetuned-open-korean-instructions", "base_model:openchat/openchat-3.5-0106", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-euzaldk" ]
text-generation
"2024-06-16T18:27:43Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:letgoofthepizza/Mistral-7B-v0.1-finetuned-open-korean-instructions - base_model:openchat/openchat-3.5-0106 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-euzaldk-GGUF base_model: mergekit-community/mergekit-slerp-euzaldk inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-euzaldk-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-euzaldk-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-euzaldk](https://huggingface.co/mergekit-community/mergekit-slerp-euzaldk) ## Description [MaziyarPanahi/mergekit-slerp-euzaldk-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-euzaldk-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-euzaldk](https://huggingface.co/mergekit-community/mergekit-slerp-euzaldk). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
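As a minimal, hedged example of consuming these GGUF files from Python (the quant filename below is hypothetical; substitute one that actually exists in this repo's file list):

```python
# Hedged sketch with llama-cpp-python; the filename is a hypothetical placeholder,
# pick a real quant from the repository before running this.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="MaziyarPanahi/mergekit-slerp-euzaldk-GGUF",
    filename="mergekit-slerp-euzaldk.Q4_K_M.gguf",  # hypothetical quant name
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about merging language models.", max_tokens=64)
print(out["choices"][0]["text"])
```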
kakaobrain/kogpt
kakaobrain
"2024-01-30T10:55:07Z"
1,702
109
null
[ "KakaoBrain", "KoGPT", "GPT", "GPT3", "ko", "arxiv:2104.09864", "arxiv:2109.04650", "license:cc-by-nc-nd-4.0", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- license: cc-by-nc-nd-4.0 language: ko tags: - KakaoBrain - KoGPT - GPT - GPT3 --- # KakaoBrain project KoGPT KakaoBrain's Pre-Trained Language Models. * KakaoBrain project KoGPT (Korean Generative Pre-trained Transformer) * [https://github.com/kakaobrain/kogpt](https://github.com/kakaobrain/kogpt) * [https://huggingface.co/kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) ## Model Descriptions ### KoGPT6B-ryan1.5b * [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b) * [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b-float16\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b-float16) | Hyperparameter | Value | |:---------------------|--------------:| | \\(n_{parameters}\\) | 6,166,502,400 | | \\(n_{layers}\\) | 28 | | \\(d_{model}\\) | 4,096 | | \\(d_{ff}\\) | 16,384 | | \\(n_{heads}\\) | 16 | | \\(d_{head}\\) | 256 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 64,512 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | 64 | ## Hardware requirements ### KoGPT6B-ryan1.5b #### GPU The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT. * `32GB GPU RAM` in the required minimum memory size ### KoGPT6B-ryan1.5b-float16 #### GPU The following is the recommended minimum GPU hardware guidance for a handful of example KoGPT. * half-precision requires NVIDIA GPUS based on Volta, Turing or Ampere * `16GB GPU RAM` in the required minimum memory size ## Usage ### prompt ```bash python -m kogpt --help usage: KoGPT inference [-h] [--model MODEL] [--revision {KoGPT6B-ryan1.5b}] [--device {cpu,cuda}] [-d] KakaoBrain Korean(hangul) Generative Pre-Training Model optional arguments: -h, --help show this help message and exit --model MODEL huggingface repo (default:kakaobrain/kogpt) --revision {KoGPT6B-ryan1.5b} --device {cpu,cuda} (default:cuda) -d, --debug ``` ```bash python -m kogpt prompt> 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 temperature(0.8)> max_length(128)> 64 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상 prompt> ... ``` ### python ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained( 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]' ) model = AutoModelForCausalLM.from_pretrained( 'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b pad_token_id=tokenizer.eos_token_id, torch_dtype='auto', low_cpu_mem_usage=True ).to(device='cuda', non_blocking=True) _ = model.eval() prompt = '인간처럼 생각하고, 행동하는 \'지능\'을 통해 인류가 이제까지 풀지 못했던' with torch.no_grad(): tokens = tokenizer.encode(prompt, return_tensors='pt').to(device='cuda', non_blocking=True) gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=64) generated = tokenizer.batch_decode(gen_tokens)[0] print(generated) # print: 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상 ``` ## Experiments ### In-context Few-Shots | Models | #params | NSMC (Acc.) 
| YNAT (F1) | KLUE-STS (F1) | |:--------------|--------:|------------:|----------:|--------------:| | HyperCLOVA[1] | 1.3B | 83.9 | 58.7 | 60.9 | | HyperCLOVA[1] | 6.9B | 83.8 | 67.5 | 59.3 | | HyperCLOVA[1] | 13.0B | 87.9 | 67.9 | 60.0 | | HyperCLOVA[1] | 39.0B | 88.0 | 71.4 | 61.6 | | HyperCLOVA[1] | 82.0B | **88.2** | 72.7 | **65.1** | | **Ours** | 6.0B | 87.8 | **78.0** | 64.3 | ### Finetuning / P-Tuning We have been reported to have issues(https://github.com/kakaobrain/kogpt/issues/17) with our downstream evaluation. The previously published performance evaluation table was deleted because it was difficult to see it as a fair comparison because the comparison target algorithm was different and the performance measurement method could not be confirmed. You can refer to the above issue link for the existing performance evaluation table and troubleshooting results. ## Limitations KakaoBrain `KoGPT` was trained on `ryan dataset`, a dataset known to contain profanity, lewd, political changed, and other harsh language. Therefore, `KoGPT` can generate socially unacceptable texts. As with all language models, It is difficult to predict in advance how `KoGPT` will response to particular prompts and offensive content without warning. Primarily Korean: `KoGPT` is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts. `KoGPT` by default perform worse on inputs that are different from the data distribution it is trained on, including non-Korean as well as specific dialects of Korean that are not well represented in the training data. [comment]: <> (If abnormal or socially unacceptable text is generated during testing, please send a "prompt" and the "generated text" to [[email protected]]&#40;mailto:[email protected]&#41;. ) 카카오브레인 `KoGPT`는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 `ryan dataset`으로 학습하였습니다. 따라서 `KoGPT`는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다. `KoGPT`는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다. 기본적으로 `KoGPT`는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다. [comment]: <> (테스트중에 발생한 비정상적인 혹은 사회적으로 용인되지 않는 텍스트가 생성된 경우 [[email protected]]&#40;mailto:[email protected]&#41;로 "prompt"와 "생성된 문장"을 함께 보내주시기 바랍니다.) ## Citation If you apply this library or model to any project and research, please cite our code: ``` @misc{kakaobrain2021kogpt, title = {KoGPT: KakaoBrain Korean(hangul) Generative Pre-trained Transformer}, author = {Ildoo Kim and Gunsoo Han and Jiyeon Ham and Woonhyuk Baek}, year = {2021}, howpublished = {\url{https://github.com/kakaobrain/kogpt}}, } ``` ## Contact This is released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us. [[email protected]](mailto:[email protected]) ## License The `source code` of KakaoBrain `KoGPT` are licensed under [Apache 2.0](LICENSE.apache-2.0) License. The `pretrained wieghts` of KakaoBrain `KoGPT` are licensed under [CC-BY-NC-ND 4.0 License](https://creativecommons.org/licenses/by-nc-nd/4.0/) License. 카카오브레인 `KoGPT`의 `소스코드(source code)`는 [Apache 2.0](LICENSE.apache-2.0) 라이선스 하에 공개되어 있습니다. 카카오브레인 `KoGPT`의 `사전학습된 가중치(pretrained weights)`는 [CC-BY-NC-ND 4.0 라이선스](https://creativecommons.org/licenses/by-nc-nd/4.0/) 라이선스 하에 공개되어 있습니다. 모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 
라이선스 전문은 [Apache 2.0](LICENSE.apache-2.0), [LICENSE.cc-by-nc-nd-4.0](LICENSE.cc-by-nc-nd-4.0) 파일에서 확인하실 수 있습니다. ## References [1] [HyperCLOVA](https://arxiv.org/abs/2109.04650): Kim, Boseop, et al. "What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers." arXiv preprint arXiv:2109.04650 (2021).
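As a back-of-the-envelope check on the hardware requirements listed above, the weight memory alone can be estimated from the parameter count in the hyperparameter table. This is an illustrative sketch only; real usage is higher once activations, the attention cache and framework overhead are included, which is why the guidance above adds headroom.

```python
# Rough weight-memory estimate for KoGPT6B-ryan1.5b (illustrative only).
n_params = 6_166_502_400          # from the hyperparameter table above

bytes_fp32 = n_params * 4         # float32: 4 bytes per parameter
bytes_fp16 = n_params * 2         # float16: 2 bytes per parameter

print(f"fp32 weights: ~{bytes_fp32 / 2**30:.1f} GiB")  # ~23.0 GiB -> consistent with the 32GB guidance
print(f"fp16 weights: ~{bytes_fp16 / 2**30:.1f} GiB")  # ~11.5 GiB -> consistent with the 16GB guidance
```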
iZELX1/Grapefruit
iZELX1
"2023-03-06T14:48:29Z"
1,702
147
diffusers
[ "diffusers", "Grapefruit", "stable diffusion", "stable diffusion diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-01-31T05:33:20Z"
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
thumbnail: https://cdn.discordapp.com/attachments/1029635339851141183/1069881140829245480/a.png
tags:
- Grapefruit
- stable diffusion
- stable diffusion diffusers
- diffusers
---

### Grapefruit (hentai model)
- # **[Grapefruit](https://civitai.com/models/2583) by [ikena](https://civitai.com/user/ikena) (owner)**

**grapefruit** (general hentai model)

**lemon** (slightly weaker on NSFW; in some respects it looks better than grapefruit)

Grapefruit aims to be a hentai model with a bright and „softer“ art style.

Use a VAE with it (the AnythingV3 VAE), although you can use any VAE you like. The safetensors file and the VAE file must share the same name -> MODEL.safetensors and MODEL.vae.pt

Black-image fix (VAE bug in the web UI): add "*--no-half-vae*" to your command line arguments.

## Sample Pictures (grapefruitV3.1):
- # #1
![width=1024.png](https://s3.amazonaws.com/moonup/production/uploads/1675150134061-637dfecf674d0afabd5e6770.png)
```
masterpiece, best quality, detailed, 1girl, brown hair, looking at viewer, long hair, city, (people), arms behind back, smile, sunny sky, house, street, cafe,
Negative prompt: (worst quality, low quality:1.4),
Size: 512x512, Seed: 2611420364, Model: grapefruitV3_1, Steps: 20, Sampler: DDIM, CFG scale: 7, Model hash: 4fc8d3739f, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Denoising strength: 0.48
```
- # #2
![width=1280.png](https://s3.amazonaws.com/moonup/production/uploads/1675150209927-637dfecf674d0afabd5e6770.png)
```
masterpiece, best quality, 1girl, black witch hat, grin, red eyes, black hair, medium hair, witch, (house on fire), looking at viewer, upper body, detailed, cleavage, face focus, sparkling eyes,
Negative prompt: (worst quality, low quality:1.4), signature,
ENSD: 31337, Size: 640x512, Seed: 1764763351, Steps: 20, Sampler: DDIM, CFG scale: 8, Clip skip: 2, Model hash: 4fc8d3739f, Hires steps: 20, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Denoising strength: 0.48
```
- # #3
## More sample images: [https://civitai.com/models/2583/grapefruit-hentai-model](https://civitai.com/models/2583/grapefruit-hentai-model)
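Since this repo is tagged for 🧨 diffusers, it can presumably also be loaded with `StableDiffusionPipeline`. The sketch below is a minimal, untested example that assumes the repo exposes standard diffusers weights and reuses a shortened form of sample prompt #1 above; the VAE repo id in the comment is a placeholder, not a recommendation.

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load Grapefruit through diffusers (assumes standard diffusers weights in this repo).
pipe = StableDiffusionPipeline.from_pretrained("iZELX1/Grapefruit", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Optionally swap in another VAE, e.g. the AnythingV3 VAE recommended above:
# from diffusers import AutoencoderKL
# pipe.vae = AutoencoderKL.from_pretrained("<vae-repo-or-local-path>", torch_dtype=torch.float16)  # placeholder

prompt = "masterpiece, best quality, detailed, 1girl, brown hair, looking at viewer, city, smile, sunny sky"
negative = "(worst quality, low quality:1.4)"

image = pipe(prompt, negative_prompt=negative, num_inference_steps=20, guidance_scale=7).images[0]
image.save("grapefruit_sample.png")
```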
haoranxu/ALMA-7B-R
haoranxu
"2024-01-19T05:18:39Z"
1,702
9
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2401.08417", "arxiv:2309.11674", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-17T11:50:53Z"
---
license: mit
---

**[ALMA-R](https://arxiv.org/abs/2401.08417)** builds upon [ALMA models](https://arxiv.org/abs/2309.11674) with further LoRA fine-tuning via our proposed **Contrastive Preference Optimization (CPO)**, as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT winners!

```
@misc{xu2024contrastive,
      title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation},
      author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
      year={2024},
      eprint={2401.08417},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# Download ALMA(-R) Models and Dataset 🚀

We release six translation models presented in the paper:
- ALMA-7B
- ALMA-7B-LoRA
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- ALMA-13B
- ALMA-13B-LoRA
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization (BEST MODEL!).

Model checkpoints are released on Hugging Face:

| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They have only been through stage-1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model), and should be utilized in conjunction with their LoRA models.**

Datasets used by ALMA and ALMA-R are now also released on Hugging Face (NEW!):

| Datasets | Train / Validation | Test |
|:-------------:|:---------------:|:---------:|
| Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) |
| Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) |

A quick start to use our best system (ALMA-13B-R) for translation.
An example of translating "我爱机器翻译。" into English:

```python
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA-merged ALMA-13B-R model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')

# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA)
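The prompt above is just a fixed template wrapped around the source sentence. A small helper like the one below (an illustrative sketch, not part of the official repo) makes it easy to build prompts for other directions, assuming the same template generalizes to the other language pairs ALMA covers:

```python
def build_alma_prompt(src_lang: str, tgt_lang: str, text: str) -> str:
    """Build an ALMA-style translation prompt, mirroring the template shown above."""
    return f"Translate this from {src_lang} to {tgt_lang}:\n{src_lang}: {text}\n{tgt_lang}:"

# Example: an English -> German prompt
prompt = build_alma_prompt("English", "German", "Machine translation is fun.")
print(prompt)
# Translate this from English to German:
# English: Machine translation is fun.
# German:
```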
MaziyarPanahi/mergekit-slerp-ieauevl-GGUF
MaziyarPanahi
"2024-06-16T18:10:01Z"
1,702
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:beowolx/CodeNinja-1.0-OpenChat-7B", "base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-ieauevl" ]
text-generation
"2024-06-16T17:49:00Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:beowolx/CodeNinja-1.0-OpenChat-7B - base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-ieauevl-GGUF base_model: mergekit-community/mergekit-slerp-ieauevl inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-ieauevl-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ieauevl-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-ieauevl](https://huggingface.co/mergekit-community/mergekit-slerp-ieauevl) ## Description [MaziyarPanahi/mergekit-slerp-ieauevl-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ieauevl-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-ieauevl](https://huggingface.co/mergekit-community/mergekit-slerp-ieauevl). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
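For a concrete starting point, here is a minimal local-inference sketch using llama-cpp-python, one of the clients listed above. The GGUF filename below is a placeholder — replace it with an actual quant file downloaded from this repo.

```python
from llama_cpp import Llama

# Placeholder path: substitute one of the GGUF files from this repo.
llm = Llama(
    model_path="./mergekit-slerp-ieauevl.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

output = llm("Write a short haiku about merged language models.", max_tokens=64, temperature=0.7)
print(output["choices"][0]["text"])
```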
ElnaggarLab/ankh-large
ElnaggarLab
"2023-12-18T12:55:11Z"
1,701
13
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "biology", "protein", "protein language model", "protein embedding", "doi:10.57967/hf/0275", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-09-16T07:52:00Z"
--- license: cc-by-nc-sa-4.0 tags: - biology - protein - protein language model - protein embedding ---
RWKV/rwkv-4-1b5-pile
RWKV
"2023-05-15T10:01:06Z"
1,701
6
transformers
[ "transformers", "pytorch", "rwkv", "text-generation", "dataset:EleutherAI/pile", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-05-04T13:42:33Z"
--- datasets: - EleutherAI/pile --- ![RWKlogo.png](https://s3.amazonaws.com/moonup/production/uploads/62441d1d9fdefb55a0b7d12c/UWpP-lGRZJJDaEx_uUlDv.png) # Model card for RWKV-4 | 1B5 parameters trained on Pile dataset RWKV is a project led by [Bo Peng](https://github.com/BlinkDL). Learn more about the model architecture in the blogposts from Johan Wind [here](https://johanwind.github.io/2023/03/23/rwkv_overview.html) and [here](https://johanwind.github.io/2023/03/23/rwkv_details.html). Learn more about the project by joining the [RWKV discord server](https://discordapp.com/users/468093332535640064). # Table of contents 0. [TL;DR](#TL;DR) 1. [Model Details](#model-details) 2. [Usage](#usage) 3. [Citation](#citation) ## TL;DR Below is the description from the [original repository](https://github.com/BlinkDL/RWKV-LM) > RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). It's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. ## Model Details The details of the architecture can be found on the blogpost mentioned above and the Hugging Face blogpost of the integration. ## Usage ### Convert the raw weights to the HF format You can use the [`convert_rwkv_checkpoint_to_hf.py`](https://github.com/huggingface/transformers/tree/main/src/transformers/models/rwkv/convert_rwkv_checkpoint_to_hf.py) script by specifying the repo_id of the original weights, the filename and the output directory. You can also optionally directly push the converted model on the Hub by passing `--push_to_hub` flag and `--model_name` argument to specify where to push the converted weights. ```bash python convert_rwkv_checkpoint_to_hf.py --repo_id RAW_HUB_REPO --checkpoint_file RAW_FILE --output_dir OUTPUT_DIR --push_to_hub --model_name dummy_user/converted-rwkv ``` ### Generate text You can use the `AutoModelForCausalLM` and `AutoTokenizer` classes to generate texts from the model. Expand the sections below to understand how to run the model in different scenarios: ### Running the model on a CPU <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-1b5-pile") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-1b5-pile") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt") output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` ### Running the model on a single GPU <details> <summary> Click to expand </summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-1b5-pile").to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-1b5-pile") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." 
inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> </details> ### Running the model in half-precision, on GPU <details> <summary> Click to expand </summary> ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-1b5-pile", torch_dtype=torch.float16).to(0) tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-1b5-pile") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> ### Running the model multiple GPUs <details> <summary> Click to expand </summary> ```python # pip install accelerate from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("RWKV/rwkv-4-1b5-pile", device_map="auto") tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-1b5-pile") prompt = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese." inputs = tokenizer(prompt, return_tensors="pt").to(0) output = model.generate(inputs["input_ids"], max_new_tokens=40) print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True)) ``` </details> ## Citation If you use this model, please consider citing the original work, from the original repo [here](https://github.com/BlinkDL/ChatRWKV/)
TheBloke/CodeBooga-34B-v0.1-GGUF
TheBloke
"2023-10-20T23:16:30Z"
1,701
49
transformers
[ "transformers", "gguf", "llama", "base_model:oobabooga/CodeBooga-34B-v0.1", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-10-20T22:30:32Z"
--- base_model: oobabooga/CodeBooga-34B-v0.1 inference: false license: llama2 model_creator: oobabooga model_name: CodeBooga 34B v0.1 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeBooga 34B v0.1 - GGUF - Model creator: [oobabooga](https://huggingface.co/oobabooga) - Original model: [CodeBooga 34B v0.1](https://huggingface.co/oobabooga/CodeBooga-34B-v0.1) <!-- description start --> ## Description This repo contains GGUF format model files for [oobabooga's CodeBooga 34B v0.1](https://huggingface.co/oobabooga/CodeBooga-34B-v0.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF) * [oobabooga's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/oobabooga/CodeBooga-34B-v0.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codebooga-34b-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes | | [codebooga-34b-v0.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss | | [codebooga-34b-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss | | [codebooga-34b-v0.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss | | [codebooga-34b-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codebooga-34b-v0.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss | | [codebooga-34b-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended | | [codebooga-34b-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codebooga-34b-v0.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended | | [codebooga-34b-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended | | [codebooga-34b-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss | | [codebooga-34b-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF/blob/main/codebooga-34b-v0.1.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/CodeBooga-34B-v0.1-GGUF and below it, a specific filename to download, such as: codebooga-34b-v0.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/CodeBooga-34B-v0.1-GGUF codebooga-34b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/CodeBooga-34B-v0.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeBooga-34B-v0.1-GGUF codebooga-34b-v0.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m codebooga-34b-v0.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
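As a minimal llama-cpp-python sketch (the ctransformers route is shown in the next section), using the Q4_K_M file from the table above together with the Alpaca prompt template from this card — the instruction text itself is just an example:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./codebooga-34b-v0.1.Q4_K_M.gguf",  # downloaded as shown above
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```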
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/CodeBooga-34B-v0.1-GGUF", model_file="codebooga-34b-v0.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: oobabooga's CodeBooga 34B v0.1 # CodeBooga-34B-v0.1 This is a merge between the following two models: 1) [Phind-CodeLlama-34B-v2](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) 2) [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0) It was created with the [BlockMerge Gradient script](https://github.com/Gryphe/BlockMerge_Gradient), the same one that was used to create [MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b), and with the same settings. The following YAML was used: ```yaml model_path1: "Phind_Phind-CodeLlama-34B-v2_safetensors" model_path2: "WizardLM_WizardCoder-Python-34B-V1.0_safetensors" output_model_path: "CodeBooga-34B-v0.1" operations: - operation: lm_head # Single tensor filter: "lm_head" gradient_values: [0.75] - operation: embed_tokens # Single tensor filter: "embed_tokens" gradient_values: [0.75] - operation: self_attn filter: "self_attn" gradient_values: [0.75, 0.25] - operation: mlp filter: "mlp" gradient_values: [0.25, 0.75] - operation: layernorm filter: "layernorm" gradient_values: [0.5, 0.5] - operation: modelnorm # Single tensor filter: "model.norm" gradient_values: [0.75] ``` ## Prompt format Both base models use the Alpaca format, so it should be used for this one as well. ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: Your instruction ### Response: Bot reply ### Instruction: Another instruction ### Response: Bot reply ``` ## Evaluation I made a quick experiment where I asked a set of 3 Python and 3 Javascript questions to the following models: 1) This one 2) A second variant generated with `model_path1` and `model_path2` swapped in the YAML above, which I called CodeBooga-Reversed-34B-v0.1 3) WizardCoder-Python-34B-V1.0 4) Phind-CodeLlama-34B-v2 Specifically, I used 4.250b EXL2 quantizations of each. I then sorted the responses for each question by quality, and attributed the following scores: * 4th place: 0 * 3rd place: 1 * 2nd place: 2 * 1st place: 4 The resulting cumulative scores were: * CodeBooga-34B-v0.1: 22 * WizardCoder-Python-34B-V1.0: 12 * Phind-CodeLlama-34B-v2: 7 * CodeBooga-Reversed-34B-v0.1: 1 CodeBooga-34B-v0.1 performed very well, while its variant performed poorly, so I uploaded the former but not the latter. 
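To make the scoring scheme concrete, here is a tiny illustrative calculation; the per-question placements below are hypothetical, since the actual rankings behind the cumulative scores above were not published.

```python
# Points per placement, as described above: 1st = 4, 2nd = 2, 3rd = 1, 4th = 0.
points = {1: 4, 2: 2, 3: 1, 4: 0}

# Hypothetical placements for one model across the 6 questions (3 Python + 3 Javascript).
placements = [1, 1, 2, 1, 2, 3]

score = sum(points[p] for p in placements)
print(score)  # 4 + 4 + 2 + 4 + 2 + 1 = 17
```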
## Recommended settings I recommend the [Divine Intellect](https://github.com/oobabooga/text-generation-webui/blob/ae8cd449ae3e0236ecb3775892bb1eea23f9ed68/presets/Divine%20Intellect.yaml) preset for instruction-following models like this, as per the [Preset Arena experiment results](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md): ```yaml temperature: 1.31 top_p: 0.14 repetition_penalty: 1.17 top_k: 49 ``` ## Quantized versions ### EXL2 A 4.250b EXL2 version of the model can be found here: https://huggingface.co/oobabooga/CodeBooga-34B-v0.1-EXL2-4.250b ### GGUF TheBloke has kindly provided GGUF quantizations for llama.cpp: https://huggingface.co/TheBloke/CodeBooga-34B-v0.1-GGUF <a href="https://ko-fi.com/oobabooga"><img src="https://i.imgur.com/UJlEAYw.png"></a> <!-- original-model-card end -->
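Tying the recommended preset above back to the GGUF files in this repo: the example `llama.cpp` command given earlier in this README can be adapted by swapping in the preset's sampling values. This is a sketch — the flag spellings follow the style of that example command; check `./main --help` on your build for the exact names.

```shell
./main -ngl 32 -m codebooga-34b-v0.1.Q4_K_M.gguf --color -c 4096 \
  --temp 1.31 --top_p 0.14 --top_k 49 --repeat_penalty 1.17 -n -1 \
  -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```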
lmsys/vicuna-13b-delta-v1.1
lmsys
"2023-08-01T18:23:25Z"
1,700
411
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-12T09:47:00Z"
--- inference: false --- **NOTE: New version available** Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md). **NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See [instructions](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md#how-to-apply-delta-weights-for-weights-v11-and-v0). <br> <br> # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 70K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
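For reference, the delta-application step mentioned in the note at the top of this card typically looks like the sketch below. The flag names follow the FastChat documentation; treat the linked instructions above as authoritative, and note that the original LLaMA-13B weights must be obtained separately.

```shell
# Convert the delta weights into usable Vicuna v1.1 weights (paths are placeholders).
pip3 install fschat

python3 -m fastchat.model.apply_delta \
    --base-model-path /path/to/llama-13b \
    --target-model-path /output/path/vicuna-13b-v1.1 \
    --delta-path lmsys/vicuna-13b-delta-v1.1
```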
TheBloke/WizardLM-7B-uncensored-GPTQ
TheBloke
"2023-10-26T09:39:38Z"
1,700
186
transformers
[ "transformers", "safetensors", "llama", "text-generation", "uncensored", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "base_model:ehartford/WizardLM-7B-Uncensored", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-05-05T08:09:36Z"
--- license: other tags: - uncensored datasets: - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered model_name: Wizardlm 7B Uncensored base_model: ehartford/WizardLM-7B-Uncensored inference: false model_creator: Eric Hartford model_type: llama prompt_template: 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user''s questions. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Wizardlm 7B Uncensored - GPTQ - Model creator: [Eric Hartford](https://huggingface.co/ehartford) - Original model: [Wizardlm 7B Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored) <!-- description start --> ## Description This repo contains GPTQ model files for [Eric Hartford's Wizardlm 7B Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GGUF) * [Eric Hartford's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-7B-Uncensored) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Vicuna ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- README_GPTQ.md-provided-files start --> ## Provided files and GPTQ parameters Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. 
Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa. <details> <summary>Explanation of GPTQ parameters</summary> - Bits: The bit size of the quantised model. - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value. - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now. - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy. - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s). - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences. - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit. </details> | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc | | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- | | [oobaCUDA](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/oobaCUDA) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 4.00 GB | Yes | 4-bit, without Act Order and group size 128g. | | [latest](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/latest) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.89 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. | | [main](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/main) | 4 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 3.89 GB | Yes | 4-bit, without Act Order and group size 128g. | <!-- README_GPTQ.md-provided-files end --> <!-- README_GPTQ.md-download-from-branches start --> ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/WizardLM-7B-uncensored-GPTQ:oobaCUDA` - With Git, you can clone a branch with: ``` git clone --single-branch --branch oobaCUDA https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ ``` - In Python Transformers code, the branch is the `revision` parameter; see below. <!-- README_GPTQ.md-download-from-branches end --> <!-- README_GPTQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. 
Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-7B-uncensored-GPTQ`. - To download from a specific branch, enter for example `TheBloke/WizardLM-7B-uncensored-GPTQ:oobaCUDA` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `WizardLM-7B-uncensored-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! <!-- README_GPTQ.md-text-generation-webui end --> <!-- README_GPTQ.md-use-from-python start --> ## How to use this GPTQ model from Python code ### Install the necessary packages Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later. ```shell pip3 install transformers>=4.32.0 optimum>=1.12.0 pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7 ``` If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y auto-gptq git clone https://github.com/PanQiWei/AutoGPTQ cd AutoGPTQ pip3 install . ``` ### For CodeLlama models only: you must use Transformers 4.33.0 or later. If 4.33.0 is not yet released when you read this, you will need to install Transformers from source: ```shell pip3 uninstall -y transformers pip3 install git+https://github.com/huggingface/transformers.git ``` ### You can then use the following code ```python from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_name_or_path = "TheBloke/WizardLM-7B-uncensored-GPTQ" # To use a different branch, change revision # For example: revision="oobaCUDA" model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=False, revision="main") tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) prompt = "Tell me about AI" prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT: ''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1 ) print(pipe(prompt_template)[0]['generated_text']) ``` <!-- README_GPTQ.md-use-from-python end --> <!-- README_GPTQ.md-compatibility start --> ## Compatibility The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI). 
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility. [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models. <!-- README_GPTQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Eric Hartford's Wizardlm 7B Uncensored This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. 
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
noamrot/FuseCap_Image_Captioning
noamrot
"2024-01-25T15:42:28Z"
1,700
18
transformers
[ "transformers", "pytorch", "blip", "text2text-generation", "image-captioning", "image-to-text", "arxiv:2305.17718", "license:mit", "autotrain_compatible", "region:us" ]
image-to-text
"2023-05-31T07:04:57Z"
---
license: mit
inference: false
pipeline_tag: image-to-text
tags:
- image-captioning
---

# FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions

A framework designed to generate semantically rich image captions.

## Resources

- 💻 **Project Page**: For more details, visit the official [project page](https://rotsteinnoam.github.io/FuseCap/).
- 📝 **Read the Paper**: You can find the paper [here](https://arxiv.org/abs/2305.17718).
- 🚀 **Demo**: Try out our BLIP-based model [demo](https://huggingface.co/spaces/noamrot/FuseCap) trained using FuseCap.
- 📂 **Code Repository**: The code for FuseCap can be found in the [GitHub repository](https://github.com/RotsteinNoam/FuseCap).
- 🗃️ **Datasets**: The fused captions datasets can be accessed from [here](https://github.com/RotsteinNoam/FuseCap#datasets).

#### Running the model

Our BLIP-based model can be run using the following code:

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
processor = BlipProcessor.from_pretrained("noamrot/FuseCap")
model = BlipForConditionalGeneration.from_pretrained("noamrot/FuseCap").to(device)

img_url = 'https://huggingface.co/spaces/noamrot/FuseCap/resolve/main/bike.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

text = "a picture of "
inputs = processor(raw_image, text, return_tensors="pt").to(device)

out = model.generate(**inputs, num_beams=3)
print(processor.decode(out[0], skip_special_tokens=True))
```

## Upcoming Updates

The official codebase, datasets and trained models for this project will be released soon.

## BibTeX

```
@inproceedings{rotstein2024fusecap,
  title={Fusecap: Leveraging large language models for enriched fused image captions},
  author={Rotstein, Noam and Bensa{\"\i}d, David and Brody, Shaked and Ganz, Roy and Kimmel, Ron},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={5689--5700},
  year={2024}
}
```
RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf
RichardErkhov
"2024-06-30T03:24:36Z"
1,700
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
"2024-06-30T03:15:03Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) QWEN1.5-0.5B-CHAT_SFT_COS_2 - GGUF - Model creator: https://huggingface.co/JCHAVEROT/ - Original model: https://huggingface.co/JCHAVEROT/QWEN1.5-0.5B-CHAT_SFT_COS_2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q2_K.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q2_K.gguf) | Q2_K | 0.23GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ3_XS.gguf) | IQ3_XS | 0.24GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ3_S.gguf) | IQ3_S | 0.25GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K_S.gguf) | Q3_K_S | 0.25GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ3_M.gguf) | IQ3_M | 0.26GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K.gguf) | Q3_K | 0.26GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K_M.gguf) | Q3_K_M | 0.26GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q3_K_L.gguf) | Q3_K_L | 0.28GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ4_XS.gguf) | IQ4_XS | 0.28GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_0.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_0.gguf) | Q4_0 | 0.29GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.IQ4_NL.gguf) | IQ4_NL | 0.29GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K_S.gguf) | Q4_K_S | 0.29GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K.gguf) | Q4_K | 0.3GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K_M.gguf) | Q4_K_M | 0.3GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_1.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_1.gguf) | Q4_1 | 0.3GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_0.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_0.gguf) | Q5_0 | 0.32GB | | 
[QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_K_S.gguf) | Q5_K_S | 0.32GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_K.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_K.gguf) | Q5_K | 0.33GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_K_M.gguf) | Q5_K_M | 0.33GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_1.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q5_1.gguf) | Q5_1 | 0.34GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q6_K.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q6_K.gguf) | Q6_K | 0.36GB | | [QWEN1.5-0.5B-CHAT_SFT_COS_2.Q8_0.gguf](https://huggingface.co/RichardErkhov/JCHAVEROT_-_QWEN1.5-0.5B-CHAT_SFT_COS_2-gguf/blob/main/QWEN1.5-0.5B-CHAT_SFT_COS_2.Q8_0.gguf) | Q8_0 | 0.47GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. 
--> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
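As a minimal usage sketch (not part of the original description): any of the GGUF files listed in the table above can be loaded with `llama-cpp-python`. The local path and generation settings below are assumptions; download the chosen quant (for example Q4_K_M) first.

```python
# Sketch: running one of the GGUF quants above with llama-cpp-python (pip install llama-cpp-python)
from llama_cpp import Llama

llm = Llama(
    model_path="./QWEN1.5-0.5B-CHAT_SFT_COS_2.Q4_K_M.gguf",  # assumed local download path
    n_ctx=2048,
)
out = llm("Hello, how are you?", max_tokens=64)
print(out["choices"][0]["text"])
```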
SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
SanjiWatsuki
"2023-12-21T08:21:27Z"
1,699
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-21T01:50:18Z"
--- license: cc-by-nc-4.0 tags: - merge --- ![image/png](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE/resolve/main/bruins-maid.png) <!-- description start --> ## Description This repository hosts FP16 files for **Loyal-Toppy-Bruins-Maid-7B**, a 7B model aimed at having engaging RP with solid character card adherence and being a smart cookie at the same time. Its foundation is [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2), a [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) derivative with Alpaca RP data tuning. The other foundational model is [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7), chosen for its strong RP performance and Alpaca format training, with a diverse dataset including PIPPA, rpbuild, and LimaRP. [Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), known for its creativity, brings in useful RP data from various sources. It ranks first among 7B models on [OpenRouter](https://openrouter.ai/rankings) for a good reason. [NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1), a Mistral finetune with unique RP data not present in other models, was also added for bringing in a unique RP dataset and being a well-regarded RP model. The models were merged using the DARE ties method, with a targeted 1.2 absolute weight and high density (0.5-0.6), as discussed in the [MergeKit GitHub Repo](https://github.com/cg123/mergekit/issues/26). Currently, this model ranks at the top of my personal RP unit test benchmark and scored a very solid 20 on [lilblam's LLM Logic Test](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=1278290632). My first impressions of it for RPing are very good but, admittedly, this model came out of the oven today so I haven't played it with it too much 😊 ### The sauce ``` models: # Top-Loyal-Bruins-Maid-DARE-7B_v2 - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base parameters: weight: 0.5 density: 0.6 - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild parameters: weight: 0.5 density: 0.6 - model: Undi95/Toppy-M-7B parameters: weight: 0.1 density: 0.5 - model: NeverSleep/Noromaid-7b-v0.1.1 parameters: weight: 0.1 density: 0.5 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: false int8_mask: true dtype: bfloat16 ``` <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: I found the best SillyTavern results from using the Noromaid template. SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). Otherwise, I tried to ensure that all of the underlying merged models were Alpaca favored. ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ```
shibing624/chinese-alpaca-plus-7b-hf
shibing624
"2023-12-15T06:17:11Z"
1,698
51
transformers
[ "transformers", "pytorch", "llama", "text-generation", "chatglm", "zh", "Text2Text-Generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-01T02:42:28Z"
--- title: chinese-alpaca-plus-7b emoji: 📚 colorFrom: gray colorTo: red language: - zh tags: - chatglm - pytorch - zh - Text2Text-Generation license: "other" widget: - text: "为什么天空是蓝色的?" --- # Chinese Alpaca Plus 7B Model **Release of the Chinese LLaMA / Alpaca Plus (7B) models** The Chinese LLaMA / Alpaca Plus (7B) models improve on the base versions as follows: - The training data was further expanded: LLaMA was trained on up to 120 GB of general-domain text, and Alpaca on up to 4M instruction samples (with extra emphasis on STEM-related data) - Alpaca training used a larger LoRA rank, giving a lower validation loss than the original version - Evaluations show that Alpaca-Plus-7B outperforms the base Alpaca-7B and approaches or exceeds the 13B version on some tasks - In this round of comparison, the 7B model scored 65.3, the 13B model scored 70.9, and Plus-7B scored 75.3; see the [evaluation results](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples/README.md) for details
This model, `chinese-alpaca-plus-7b-hf`, is the result of merging the original `LLaMA-7B` with the `Chinese LLaMA LoRA` and the `Chinese Alpaca LoRA`, converted to HuggingFace-format weights (.bin files); it can be used directly or trained further. 13b-hf weights: https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf test case: |input_text|predict| |:-- |:--- | |为什么天空是蓝色的?|天空是蓝色的,是因为大气层中的气体分子会散射太阳光中的蓝色光,使得我们看到的天空是蓝色的。|
## Released model weights - chinese-llama-plus-7b weights: https://huggingface.co/minlik/chinese-llama-plus-7b-merged - chinese-alpaca-plus-7b weights: https://huggingface.co/shibing624/chinese-alpaca-plus-7b-hf - chinese-llama-plus-13b weights: https://huggingface.co/shibing624/chinese-llama-plus-13b-hf - chinese-alpaca-plus-13b weights: https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf
## Usage This model is released as part of the open-source [textgen](https://github.com/shibing624/textgen) project, which supports llama models and can be called as follows: Install package: ```shell pip install -U textgen ``` ```python from textgen import GptModel model = GptModel("llama", "shibing624/chinese-alpaca-plus-7b-hf") r = model.predict(["用一句话描述地球为什么是独一无二的。"]) print(r) # ['地球是独一无二的,因为它拥有独特的大气层、水循环、生物多样性以及其他自然资源,这些都使它成为一个独特的生命支持系统。'] ```
## Usage (HuggingFace Transformers) Without [textgen](https://github.com/shibing624/textgen), you can use the model like this: First, you pass your input through the transformer model, then you get the generated sentence. Install package: ``` pip install sentencepiece pip install transformers>=4.28.0 ``` ```python import torch import transformers from transformers import LlamaTokenizer, LlamaForCausalLM def generate_prompt(text): return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {text} ### Response:""" tokenizer = LlamaTokenizer.from_pretrained('shibing624/chinese-alpaca-plus-7b-hf') model = LlamaForCausalLM.from_pretrained('shibing624/chinese-alpaca-plus-7b-hf').half().cuda() model.eval() text = '为什么天空是蓝色的?' prompt = generate_prompt(text) input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda') with torch.no_grad(): output_ids = model.generate( input_ids=input_ids, max_new_tokens=128, temperature=1, top_k=40, top_p=0.9, repetition_penalty=1.15 ).cuda() output = tokenizer.decode(output_ids[0], skip_special_tokens=True) print(output.replace(text, '').strip()) ``` output: ```shell 为什么天空是蓝色的? 
天空是蓝色的,是因为大气层中的气体分子会散射太阳光中的蓝色光,使得我们看到的天空是蓝色的。 ```
## Model provenance The merged model weights are released so they can be used directly in one step, saving compute and reducing carbon emissions. The model was merged manually following the [multi-LoRA weight merging procedure for Chinese-Alpaca-Plus](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E6%89%8B%E5%8A%A8%E6%A8%A1%E5%9E%8B%E5%90%88%E5%B9%B6%E4%B8%8E%E8%BD%AC%E6%8D%A2#%E5%A4%9Alora%E6%9D%83%E9%87%8D%E5%90%88%E5%B9%B6%E9%80%82%E7%94%A8%E4%BA%8Echinese-alpaca-plus-): the [decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf) base model was merged with the two LoRA weights, Chinese-LLaMA-Plus-LoRA and Chinese-Alpaca-Plus-LoRA, and then converted to HuggingFace-format weights (.bin files). The HuggingFace-format weights (.bin files) can be used for: - training and inference with Transformers - building a UI with text-generation-webui The PyTorch-format weights (.pth files) can be used for: - quantization and deployment with llama.cpp PyTorch-format weights (.pth files), 8-bit quantized Alpaca-Plus-7B: [Billsfriend/chinese-Alpaca-7b-plus-ggml-q8_0](https://huggingface.co/Billsfriend/chinese-Alpaca-7b-plus-ggml-q8_0/tree/main) Model files: ``` chinese-alpaca-plus-7b-hf config.json generation_config.json pytorch_model-00001-of-00002.bin pytorch_model-00002-of-00002.bin pytorch_model.bin.index.json special_tokens_map.json tokenizer.json tokenizer.model tokenizer_config.json ``` Hardware requirement: 14 GB of GPU memory
### Fine-tuning datasets A selection of public instruction-tuning datasets: 1. 500k Chinese ChatGPT instructions (Belle): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) 2. 1M Chinese ChatGPT instructions (Belle): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN) 3. 50k English ChatGPT instructions (Alpaca): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release) 4. 50k Chinese GPT-4 instructions (Alpaca): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh) 5. 690k Chinese instructions (Guanaco; 500k Belle + 190k Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0) If you want to train LLaMA models yourself, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen)
## Citation ```latex @software{textgen, author = {Xu Ming}, title = {textgen: Implementation of language model finetune}, year = {2023}, url = {https://github.com/shibing624/textgen}, } ``` ## Reference - https://github.com/ymcui/Chinese-LLaMA-Alpaca # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shibing624__chinese-alpaca-plus-7b-hf) | Metric | Value | |-----------------------|---------------------------| | Avg. | 42.46 | | ARC (25-shot) | 49.23 | | HellaSwag (10-shot) | 70.48 | | MMLU (5-shot) | 38.39 | | TruthfulQA (0-shot) | 39.72 | | Winogrande (5-shot) | 70.09 | | GSM8K (5-shot) | 0.68 | | DROP (3-shot) | 28.61 |
BELLE-2/BELLE-Llama2-13B-chat-0.4M
BELLE-2
"2023-10-18T03:18:07Z"
1,698
30
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text2text-generation", "zh", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2023-07-26T06:01:54Z"
--- license: llama2 tags: - text2text-generation pipeline_tag: text2text-generation language: - zh - en --- # Model Card for Model ID ## Welcome If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE ! ## Model description This model is obtained by fine-tuning the complete parameters using 0.4M Chinese instruction data on the original Llama2-13B-chat. We firmly believe that the original Llama2-chat exhibits commendable performance post Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF). Our pursuit continues to be the further enhancement of this model using Chinese instructional data for fine-tuning, with an aspiration to facilitate stable and high-quality Chinese language outputs. ## Use model Please note that the input should be formatted as follows in both **training** and **inference**. ``` python Human: \n{input}\n\nAssistant:\n ``` After you decrypt the files, BELLE-Llama2-13B-chat-0.4M can be easily loaded with AutoModelForCausalLM. ``` python from transformers import AutoModelForCausalLM, LlamaTokenizer import torch ckpt = '/path/to_finetuned_model/' device = torch.device('cuda') model = AutoModelForCausalLM.from_pretrained(ckpt).half().to(device) tokenizer = LlamaTokenizer.from_pretrained(ckpt) prompt = "Human: \n写一首中文歌曲,赞美大自然 \n\nAssistant: \n" input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) generate_ids = model.generate(input_ids, max_new_tokens=1024, do_sample=True, top_k=30, top_p=0.85, temperature=0.5, repetition_penalty=1.2, eos_token_id=2, bos_token_id=1, pad_token_id=0) output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] response = output[len(prompt):] print(response) ``` ## Limitations There still exists a few issues in the model trained on current base model and data: 1. The model might generate factual errors when asked to follow instructions related to facts. 2. Occasionally generates harmful responses since the model still struggles to identify potential harmful instructions. 3. Needs improvements on reasoning and coding. Since the model still has its limitations, we require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed. ## Citation Please cite our paper and github when using our code, data or model. ``` @misc{BELLE, author = {BELLEGroup}, title = {BELLE: Be Everyone's Large Language model Engine}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/LianjiaTech/BELLE}}, } ```
swap-uniba/LLaMAntino-2-7b-hf-ITA
swap-uniba
"2024-01-22T08:16:22Z"
1,698
6
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "it", "arxiv:2312.09993", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-12-14T11:19:19Z"
--- license: llama2 language: - it tags: - text-generation-inference --- # Model Card for LLaMAntino-2-7b-ITA *Last Update: 22/01/2024*<br> ## Model description <!-- Provide a quick summary of what the model is/does. --> **LLaMAntino-2-7b** is a *Large Language Model (LLM)* that is an italian-adapted **LLaMA 2**. This model aims to provide Italian NLP researchers with a base model for natural language generation tasks. The model was trained using *QLora* and using as training data [clean_mc4_it medium](https://huggingface.co/datasets/gsarti/clean_mc4_it/viewer/medium). If you are interested in more details regarding the training procedure, you can find the code we used at the following link: - **Repository:** https://github.com/swapUniba/LLaMAntino **NOTICE**: the code has not been released yet, we apologize for the delay, it will be available asap! - **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro - **Funded by:** PNRR project FAIR - Future AI Research - **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer - **Model type:** LLaMA 2 - **Language(s) (NLP):** Italian - **License:** Llama 2 Community License - **Finetuned from model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) ## How to Get Started with the Model Below you can find an example of model usage: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "swap-uniba/LLaMAntino-2-7b-hf-ITA" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) prompt = "Scrivi qui un possibile prompt" input_ids = tokenizer(prompt, return_tensors="pt").input_ids outputs = model.generate(input_ids=input_ids) print(tokenizer.batch_decode(outputs.detach().cpu().numpy()[:, input_ids.shape[1]:], skip_special_tokens=True)[0]) ``` If you are facing issues when loading the model, you can try to load it quantized: ```python model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True) ``` *Note*: The model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> If you use this model in your research, please cite the following: ```bibtex @misc{basile2023llamantino, title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language}, author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro}, year={2023}, eprint={2312.09993}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` *Notice:* Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. [*License*](https://ai.meta.com/llama/license/)
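As a complement to the 8-bit note above, here is a sketch of 4-bit loading with a `BitsAndBytesConfig`. This is my own example rather than part of the original card, and the quantization settings (nf4, bfloat16 compute) are assumptions; it also requires the *bitsandbytes* and *accelerate* packages.

```python
# Sketch: 4-bit quantized loading (assumed settings, not from the original card)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "swap-uniba/LLaMAntino-2-7b-hf-ITA"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```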
stanfordnlp/stanza-hi
stanfordnlp
"2024-03-24T23:47:29Z"
1,697
0
stanza
[ "stanza", "token-classification", "hi", "license:apache-2.0", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - stanza - token-classification library_name: stanza language: hi license: apache-2.0 --- # Stanza model for Hindi (hi) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2024-03-24 23:47:14.196
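A minimal usage sketch (not part of the auto-generated card above): download the Hindi resources and run the default pipeline. The sample sentence is only an illustration.

```python
# Sketch: running the Hindi Stanza pipeline (pip install stanza)
import stanza

stanza.download("hi")          # fetches the Hindi models from the Stanza resources
nlp = stanza.Pipeline("hi")    # builds the default pipeline for Hindi
doc = nlp("नमस्ते दुनिया।")
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos)
```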
quantumaikr/llama-2-70b-fb16-korean
quantumaikr
"2023-08-11T13:15:12Z"
1,697
36
transformers
[ "transformers", "safetensors", "llama", "text-generation", "orca", "llama-2", "ko", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-11T12:33:33Z"
--- language: - ko - en pipeline_tag: text-generation tags: - orca - llama-2 --- <p align="center" width="100%"> <img src="https://i.imgur.com/snFDU0P.png" alt="KoreanLM icon" style="width: 500px; display: block; margin: auto; border-radius: 10%;"> </p> # quantumaikr/llama-2-70b-fb16-korean ## Model Description `quantumaikr/llama-2-70b-fb16-korean` is a Llama2 70B model finetuned the Korean Dataset ## Usage Start chatting with `quantumaikr/llama-2-70b-fb16-korean` using the following code snippet: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("quantumaikr/llama-2-70b-fb16-korean") model = AutoModelForCausalLM.from_pretrained("quantumaikr/llama-2-70b-fb16-korean", torch_dtype=torch.float16, device_map="auto") system_prompt = "### System:\n귀하는 지시를 매우 잘 따르는 AI인 QuantumLM입니다. 최대한 많이 도와주세요. 안전에 유의하고 불법적인 행동은 하지 마세요.\n\n" message = "인공지능이란 무엇인가요?" prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, temperature=0.9, top_p=0.75, max_new_tokens=4096) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` QuantumLM should be used with this prompt format: ``` ### System: This is a system prompt, please behave and help the user. ### User: Your prompt here ### Assistant The output of QuantumLM ``` ## Use and Limitations ### Intended Use These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. ### Limitations and bias Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly. Contact us : [email protected]
Secbone/llama-2-13B-instructed
Secbone
"2023-11-18T10:12:43Z"
1,697
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "zh", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-13T10:39:57Z"
--- license: llama2 language: - en - zh pipeline_tag: text-generation --- # LLaMA 2 13B, instruction fine-tuned # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Secbone__llama-2-13B-instructed) | Metric | Value | |-----------------------|---------------------------| | Avg. | 47.84 | | ARC (25-shot) | 59.39 | | HellaSwag (10-shot) | 83.88 | | MMLU (5-shot) | 55.57 | | TruthfulQA (0-shot) | 46.89 | | Winogrande (5-shot) | 74.03 | | GSM8K (5-shot) | 8.04 | | DROP (3-shot) | 7.06 |
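The card does not include a usage snippet, so here is a minimal sketch (my own, with assumed prompt and generation settings): the repository is a standard Llama-2 13B checkpoint, so it loads with the `text-generation` pipeline.

```python
# Sketch: basic generation with the transformers pipeline (assumed settings)
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Secbone/llama-2-13B-instructed",
    torch_dtype=torch.float16,
    device_map="auto",
)
out = generator("Explain what instruction tuning is.", max_new_tokens=128)
print(out[0]["generated_text"])
```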
Sao10K/Euryale-1.3-L2-70B
Sao10K
"2024-03-28T04:12:27Z"
1,697
43
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-10T00:26:59Z"
--- license: llama2 language: - en --- *Updated readme latest update on 14/11* ![Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B/resolve/main/Euryale.jpg) 17th Attempt. Past 10 Failed, cost me >$200 lol. Idea is an updated version of Euryale with ReMantik (Mythologic Recreation in 70B + Sauce) instead of the ties-merge between the original 3 models. This is then mixed with a saucy model (spicyboros+pyg_lora) with a Mythomax-esque Ratio, and a certain experimental (self) LoRA applied to it. ***Unrelated to Euryale 1.0, I got rid of most of the merged models as they were bloat.*** *** 1.4 Here: https://huggingface.co/Sao10K/Euryale-1.4-L2-70B <br> Subjectively better? atleast it is a minor improvement in my eyes. *** Test Results: Works Well. <br>NSFL and NSFW fine in roleplay context. <br>slight censor with 0 context, zero issues in actual RP / ERP. <br>Good Prose, Not Dumbed Down due to RP merges from testing. <br> I have not encountered any repetition issues some had with the original Euryale. tell me if you do, though. Prompt and System Format: most works well. I recommend Alpaca. ST Settings used for Test: Lightning 1.1 System Prompt + Shortwave(1.2 Temperature) Support me [here](https://ko-fi.com/sao10k) :) Quants done by TheBloke! Ty a lot to him. https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GPTQ https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-GGUF https://huggingface.co/TheBloke/Euryale-1.3-L2-70B-AWQ *** ty for the exl2 quants! there's more bpw out there you can search too! https://huggingface.co/LoneStriker/Euryale-1.3-L2-70B-2.4bpw-h6-exl2 https://huggingface.co/LoneStriker/Euryale-1.3-L2-70B-2.6bpw-h6-exl2 https://huggingface.co/Panchovix/Euryale-1.3-L2-70B-4.65bpw-h6-exl2 https://huggingface.co/AzureBlack/Euryale-1.3-L2-70B-4.6bpw-6h-exl2 https://huggingface.co/Doctor-Shotgun/Euryale-1.3-limarpv3-L2-70B-exl2 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Sao10K__Euryale-1.3-L2-70B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 66.58 | | ARC (25-shot) | 70.82 | | HellaSwag (10-shot) | 87.92 | | MMLU (5-shot) | 70.39 | | TruthfulQA (0-shot) | 59.85 | | Winogrande (5-shot) | 82.79 | | GSM8K (5-shot) | 34.19 | | DROP (3-shot) | 60.1 |
nm-testing/tinyllama-oneshot-w4a16-channel-v2
nm-testing
"2024-06-07T19:40:44Z"
1,697
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T19:40:26Z"
Entry not found
TheBloke/llama-2-70b-Guanaco-QLoRA-fp16
TheBloke
"2023-08-08T10:04:37Z"
1,696
56
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "text-classification", "en", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-classification
"2023-07-21T20:19:11Z"
--- inference: false language: - en license: other model_type: llama pipeline_tag: text-classification tags: - llama-2 --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Llama2 70b Guanaco QLoRA - fp16 - Model creator: [Mikael110](https://huggingface.co/Mikael110) - Original model: [Llama2 70b Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora) # Mikael110's Llama2 70b Guanaco QLoRA fp16 These files are pytorch format fp16 model files for [Mikael110's Llama2 70b Guanaco QLoRA](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-GGML) * [Merged fp16 model, for GPU inference and further conversions](https://huggingface.co/TheBloke/llama-2-70b-Guanaco-QLoRA-fp16) * [Mikael110's original QLoRA adapter](https://huggingface.co/Mikael110/llama-2-70b-guanaco-qlora) ## Prompt template: Guanaco ``` ### Human: {prompt} ### Assistant: ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz. **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. 
Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Mikael110's Llama2 70b Guanaco QLoRA This is a Llama-2 version of [Guanaco](https://huggingface.co/timdettmers/guanaco-65b). It was finetuned from the base [Llama-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) model using the official training scripts found in the [QLoRA repo](https://github.com/artidoro/qlora). I wanted it to be as faithful as possible and therefore changed nothing in the training script beyond the model it was pointing to. The model prompt is therefore also the same as the original Guanaco model. This repo contains the QLoRA adapter. A 7b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-7b-guanaco-qlora). A 13b version of the adapter can be found [here](https://huggingface.co/Mikael110/llama-2-13b-guanaco-qlora). **Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.**
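As a usage sketch for the Guanaco prompt template shown earlier in this README (my own example, not from the original card): the question string and generation length are placeholders, and a 70B fp16 checkpoint needs multiple GPUs or CPU offload, so treat this as illustrative only.

```python
# Sketch: generation with the Guanaco prompt format (assumed prompt and settings)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/llama-2-70b-Guanaco-QLoRA-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### Human: What is QLoRA?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```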
Yntec/SinkOrSwim
Yntec
"2023-12-15T00:41:07Z"
1,696
5
diffusers
[ "diffusers", "safetensors", "Character Design", "CGI", "Cartoon", "Illustration", "iamxenos", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-14T23:49:50Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Character Design - CGI - Cartoon - Illustration - iamxenos - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Sink Or Swim A mix of Kitsch-In-Sync v2 and Generate Me! to maximize the SOUL! Samples and prompts: ![Samples 768](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/fB583MyQouAp4wyIjHw0k.png) (Click for larger) Top left: pretty cute little girl sitting on a giant lemon themed hamburger, high quality Top right: Father with little daughter holding Coca Cola. Santa Claus sitting with a pretty cute girl, Art Christmas Theme by Haddon_Sundblom and Gil_Elvgren Bottom left: Baby girl with a giant basket full of cherries, high quality, grass by wess anderson Bottom right: an illustration of a baby porcupine with headphones holding an ribbon umbrella in the rain Original pages: https://civitai.com/models/142552?modelVersionId=163068 (Kitsch-In-Sync (...and a bag of chips, by Xenos) v2.0) https://huggingface.co/Yntec/GenerateMe # Recipe: - SuperMerger Weight sum Train Difference Use MBW 0,1,1,0,1,0,1,0,0,0,0,1,0,1,1,1,1,1,1,1,0,1,0,0,0,0 Model A: Kitsch-In-Sync v2 Model B: Generate Me! Output Model: SinkOrSwim Easter egg: ![Sample 512](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/5QOarLxYJCxM5QGuYUfgN.png)
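The repository is in diffusers format, so it should load with `StableDiffusionPipeline`. A minimal sketch (not part of the original card; the step count, guidance scale, and output filename are assumptions), reusing one of the sample prompts above:

```python
# Sketch: text-to-image generation with diffusers (assumed settings)
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/SinkOrSwim", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute little girl sitting on a giant lemon themed hamburger, high quality"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sink_or_swim.png")
```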
mradermacher/PowerQwen-1.5B-v1-GGUF
mradermacher
"2024-06-18T18:49:27Z"
1,696
1
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen2", "trl", "sft", "en", "base_model:mahiatlinux/PowerQwen-1.5B-v1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-18T18:38:18Z"
--- base_model: mahiatlinux/PowerQwen-1.5B-v1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mahiatlinux/PowerQwen-1.5B-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q2_K.gguf) | Q2_K | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.IQ3_XS.gguf) | IQ3_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q3_K_S.gguf) | Q3_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.IQ3_S.gguf) | IQ3_S | 0.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.IQ3_M.gguf) | IQ3_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q3_K_L.gguf) | Q3_K_L | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.IQ4_XS.gguf) | IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q5_K_S.gguf) | Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q5_K_M.gguf) | Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q6_K.gguf) | Q6_K | 1.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/PowerQwen-1.5B-v1-GGUF/resolve/main/PowerQwen-1.5B-v1.f16.gguf) | f16 | 3.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to 
questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
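A small sketch of fetching one of the quants listed above with `huggingface_hub` (my own example, not from the card; the choice of the Q4_K_M file is arbitrary). The downloaded file can then be used with any GGUF-compatible runtime such as llama.cpp.

```python
# Sketch: downloading a single GGUF quant from this repo (pip install huggingface_hub)
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/PowerQwen-1.5B-v1-GGUF",
    filename="PowerQwen-1.5B-v1.Q4_K_M.gguf",
)
print("GGUF file saved to:", path)
```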
textattack/bert-base-uncased-WNLI
textattack
"2021-05-20T07:39:22Z"
1,695
1
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
## TextAttack Model Card This `bert-base-uncased` model was fine-tuned for sequence classification using TextAttack and the glue dataset loaded using the `nlp` library. The model was fine-tuned for 5 epochs with a batch size of 64, a learning rate of 5e-05, and a maximum sequence length of 256. Since this was a classification task, the model was trained with a cross-entropy loss function. The best score the model achieved on this task was 0.5633802816901409, as measured by the eval set accuracy, found after 1 epoch. For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
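A usage sketch (not part of the TextAttack card): WNLI is a sentence-pair task, so the two sentences are passed together. The example sentences are placeholders, and the labels come back as the raw fine-tuning indices (e.g. LABEL_0 / LABEL_1) since the checkpoint does not ship human-readable label names.

```python
# Sketch: sentence-pair classification with the transformers pipeline (assumed example inputs)
from transformers import pipeline

classifier = pipeline("text-classification", model="textattack/bert-base-uncased-WNLI")
result = classifier({
    "text": "The trophy doesn't fit into the brown suitcase because it is too large.",
    "text_pair": "The trophy is too large.",
})
print(result)
```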
openmmlab/upernet-swin-large
openmmlab
"2023-04-24T09:48:31Z"
1,695
0
transformers
[ "transformers", "pytorch", "safetensors", "upernet", "vision", "image-segmentation", "en", "arxiv:1807.10221", "arxiv:2103.14030", "license:mit", "endpoints_compatible", "region:us" ]
image-segmentation
"2023-01-13T14:35:14Z"
--- language: en license: mit tags: - vision - image-segmentation model_name: openmmlab/upernet-swin-large --- # UperNet, Swin Transformer large-sized backbone UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al. Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030). Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM). Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel. ![UperNet architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg) ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for fine-tuned versions (with various backbones) on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
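For convenience, here is a short sketch adapted from the transformers UperNet documentation (the COCO image URL is just a sample, and it assumes this checkpoint ships an image processor config like the other UperNet releases):

```python
# Sketch: semantic segmentation with UperNet + Swin-large (adapted from the transformers docs)
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-swin-large")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-large")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits have shape (batch_size, num_labels, height, width); take the per-pixel argmax
pred = outputs.logits.argmax(dim=1)[0]
print(pred.shape)
```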
luffycodes/vicuna-class-tutor-13b-ep3
luffycodes
"2023-12-21T14:58:23Z"
1,695
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2305.13272", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-21T14:43:42Z"
--- license: llama2 --- If you use this work, please cite: CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles https://arxiv.org/abs/2305.13272 ``` @misc{sonkar2023class, title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles}, author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk}, year={2023}, eprint={2305.13272}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf
RichardErkhov
"2024-06-27T12:07:19Z"
1,695
0
null
[ "gguf", "region:us" ]
null
"2024-06-27T11:59:21Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qllama-tiny-.5B-test-1 - GGUF - Model creator: https://huggingface.co/Josephgflowers/ - Original model: https://huggingface.co/Josephgflowers/Qllama-tiny-.5B-test-1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Qllama-tiny-.5B-test-1.Q2_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q2_K.gguf) | Q2_K | 0.23GB | | [Qllama-tiny-.5B-test-1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.IQ3_XS.gguf) | IQ3_XS | 0.24GB | | [Qllama-tiny-.5B-test-1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.IQ3_S.gguf) | IQ3_S | 0.25GB | | [Qllama-tiny-.5B-test-1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q3_K_S.gguf) | Q3_K_S | 0.25GB | | [Qllama-tiny-.5B-test-1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.IQ3_M.gguf) | IQ3_M | 0.26GB | | [Qllama-tiny-.5B-test-1.Q3_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q3_K.gguf) | Q3_K | 0.26GB | | [Qllama-tiny-.5B-test-1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q3_K_M.gguf) | Q3_K_M | 0.26GB | | [Qllama-tiny-.5B-test-1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q3_K_L.gguf) | Q3_K_L | 0.28GB | | [Qllama-tiny-.5B-test-1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.IQ4_XS.gguf) | IQ4_XS | 0.28GB | | [Qllama-tiny-.5B-test-1.Q4_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q4_0.gguf) | Q4_0 | 0.29GB | | [Qllama-tiny-.5B-test-1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.IQ4_NL.gguf) | IQ4_NL | 0.29GB | | [Qllama-tiny-.5B-test-1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q4_K_S.gguf) | Q4_K_S | 0.29GB | | [Qllama-tiny-.5B-test-1.Q4_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q4_K.gguf) | Q4_K | 0.3GB | | [Qllama-tiny-.5B-test-1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q4_K_M.gguf) | Q4_K_M | 0.3GB | | [Qllama-tiny-.5B-test-1.Q4_1.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q4_1.gguf) | Q4_1 | 0.3GB | | [Qllama-tiny-.5B-test-1.Q5_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q5_0.gguf) | Q5_0 | 0.32GB | | [Qllama-tiny-.5B-test-1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q5_K_S.gguf) | Q5_K_S | 0.32GB | | 
[Qllama-tiny-.5B-test-1.Q5_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q5_K.gguf) | Q5_K | 0.33GB | | [Qllama-tiny-.5B-test-1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q5_K_M.gguf) | Q5_K_M | 0.33GB | | [Qllama-tiny-.5B-test-1.Q5_1.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q5_1.gguf) | Q5_1 | 0.34GB | | [Qllama-tiny-.5B-test-1.Q6_K.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q6_K.gguf) | Q6_K | 0.36GB | | [Qllama-tiny-.5B-test-1.Q8_0.gguf](https://huggingface.co/RichardErkhov/Josephgflowers_-_Qllama-tiny-.5B-test-1-gguf/blob/main/Qllama-tiny-.5B-test-1.Q8_0.gguf) | Q8_0 | 0.47GB | Original model description: --- license: mit --- Llamafyd version of Qwen .5B, further fine-tuned on wiki, math, science, and chat datasets. Based on Cinder data, as well as Cinder character-specific data: a mix of RAG-generated Q&A on world knowledge, STEM topics, and Cinder character data. I supplemented the Cinder character with an abbreviated Samantha dataset edited for Cinder and removed a lot of the negative responses. Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.
jinaai/jina-embedding-t-en-v1
jinaai
"2023-09-10T06:42:12Z"
1,694
28
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "finetuner", "feature-extraction", "sentence-similarity", "en", "dataset:jinaai/negation-dataset", "arxiv:2307.11224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-07-21T15:05:02Z"
--- pipeline_tag: sentence-similarity tags: - finetuner - sentence-transformers - feature-extraction - sentence-similarity datasets: - jinaai/negation-dataset language: en license: apache-2.0 --- <br><br> <p align="center"> <img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> </p> <p align="center"> <b>The text embedding set trained by <a href="https://jina.ai/"><b>Jina AI</b></a>, <a href="https://github.com/jina-ai/finetuner"><b>Finetuner</b></a> team.</b> </p> ## Intented Usage & Model Info `jina-embedding-t-en-v1` is a tiny language model that has been trained using Jina AI's Linnaeus-Clean dataset. This dataset consists of 380 million pairs of sentences, which include both query-document pairs. These pairs were obtained from various domains and were carefully selected through a thorough cleaning process. The Linnaeus-Full dataset, from which the Linnaeus-Clean dataset is derived, originally contained 1.6 billion sentence pairs. The model has a range of use cases, including information retrieval, semantic textual similarity, text reranking, and more. With a tiny small parameter size of just 14 million parameters, the model enables lightning-fast inference on CPU, while still delivering impressive performance. Additionally, we provide the following options: - [`jina-embedding-t-en-v1`](https://huggingface.co/jinaai/jina-embedding-t-en-v1): 14 million parameters **(you are here)**. - [`jina-embedding-s-en-v1`](https://huggingface.co/jinaai/jina-embedding-s-en-v1): 35 million parameters. - [`jina-embedding-b-en-v1`](https://huggingface.co/jinaai/jina-embedding-b-en-v1): 110 million parameters. - [`jina-embedding-l-en-v1`](https://huggingface.co/jinaai/jina-embedding-l-en-v1): 330 million parameters. - `jina-embedding-1b-en-v1`: 1.2 billion parameters, 10 times bert-base (soon). - `jina-embedding-6b-en-v1`: 6 billion parameters, 30 times bert-base (soon). ## Data & Parameters Please checkout our [technical blog](https://arxiv.org/abs/2307.11224). ## Metrics We compared the model against `all-minilm-l6-v2`/`all-mpnet-base-v2` from sbert and `text-embeddings-ada-002` from OpenAI: |Name|param |dimension| |------------------------------|-----|------| |all-minilm-l6-v2|23m |384| |all-mpnet-base-v2 |110m |768| |ada-embedding-002|Unknown/OpenAI API |1536| |jina-embedding-t-en-v1|14m |312| |jina-embedding-s-en-v1|35m |512| |jina-embedding-b-en-v1|110m |768| |jina-embedding-l-en-v1|330m |1024| |Name|STS12|STS13|STS14|STS15|STS16|STS17|TRECOVID|Quora|SciFact| |------------------------------|-----|-----|-----|-----|-----|-----|--------|-----|-----| |all-minilm-l6-v2|0.724|0.806|0.756|0.854|0.79 |0.876|0.473 |0.876|0.645 | |all-mpnet-base-v2|0.726|**0.835**|0.78 |0.857|0.8 |**0.906**|0.513 |0.875|0.656 | |ada-embedding-002|0.698|0.833|0.761|0.861|**0.86** |0.903|**0.685** |0.876|**0.726** | |jina-embedding-t-en-v1|0.717|0.773|0.731|0.829|0.777|0.860|0.482 |0.840|0.522 | |jina-embedding-s-en-v1|0.743|0.786|0.738|0.837|0.80|0.875|0.523 |0.857|0.524 | |jina-embedding-b-en-v1|**0.751**|0.809|0.761|0.856|0.812|0.890|0.606 |0.876|0.594 | |jina-embedding-l-en-v1|0.745|0.832|**0.781**|**0.869**|0.837|0.902|0.573 |**0.881**|0.598 | ## Inference Speed We encoded a single sentence "What is the current weather like today?" 10k times on: 1. 
cpu: MacBook Pro 2020, 2 GHz Quad-Core Intel Core i5 2. gpu: 1 Nvidia 3090 And recorded time spent to demonstrate the embedding speed: |Name|param |dimension| time@cpu | time@gpu | |------------------------------|-----|------|-----|-----| |jina-embedding-t-en-v1|14m |312| 5.78s | 2.36s| |all-minilm-l6-v2|23m |384| 11.95s | 2.70s | |jina-embedding-s-en-v1|35m |512| 17.25s | 2.81s | ## Usage Use with Jina AI Finetuner ```python !pip install finetuner import finetuner model = finetuner.build_model('jinaai/jina-embedding-t-en-v1') embeddings = finetuner.encode( model=model, data=['how is the weather today', 'What is the current weather like today?'] ) print(finetuner.cos_sim(embeddings[0], embeddings[1])) ``` Use with sentence-transformers: ```python from sentence_transformers import SentenceTransformer from sentence_transformers.util import cos_sim sentences = ['how is the weather today', 'What is the current weather like today?'] model = SentenceTransformer('jinaai/jina-embedding-t-en-v1') embeddings = model.encode(sentences) print(cos_sim(embeddings[0], embeddings[1])) ``` ## Fine-tuning Please consider [Finetuner](https://github.com/jina-ai/finetuner). ## Plans 1. The development of `jina-embedding-s-en-v2` is currently underway with two main objectives: improving performance and increasing the maximum sequence length. 2. We are currently working on a bilingual embedding model that combines English and X language. The upcoming model will be called `jina-embedding-s/b/l-de-v1`. ## Contact Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. ## Citation If you find Jina Embeddings useful in your research, please cite the following paper: ``` latex @misc{günther2023jina, title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models}, author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao}, year={2023}, eprint={2307.11224}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
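As a rough illustration of how the inference-speed numbers reported above can be gathered, here is a minimal timing sketch using sentence-transformers; the warm-up step and loop count are assumptions for the example, and absolute numbers will differ with your hardware.

```python
import time

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('jinaai/jina-embedding-t-en-v1')
sentence = 'What is the current weather like today?'

model.encode(sentence)  # warm-up so model loading is not counted

start = time.perf_counter()
for _ in range(10_000):
    model.encode(sentence)
elapsed = time.perf_counter() - start
print(f"Encoded the sentence 10k times in {elapsed:.2f}s")
```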
Brouz/Slerpeno
Brouz
"2023-09-08T22:51:29Z"
1,694
4
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-08T00:33:20Z"
---
license: cc-by-4.0
---

A 13B model built from the same component models as Stheno, but merged with the SLERP method instead.
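For readers unfamiliar with the technique, here is a minimal sketch of what a SLERP (spherical linear interpolation) merge of two same-architecture checkpoints looks like conceptually; the interpolation factor, the plain state-dict handling, and the linear-interpolation fallback are illustrative assumptions, not the exact recipe used for this model.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly parallel weights: fall back to plain linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        merged = (torch.sin((1.0 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.reshape(v0.shape).to(v0.dtype)

def slerp_merge(state_dict_a: dict, state_dict_b: dict, t: float = 0.5) -> dict:
    """Merge two checkpoints of the same architecture, tensor by tensor."""
    return {name: slerp(t, state_dict_a[name], state_dict_b[name]) for name in state_dict_a}
```

In practice, merges like this are usually run through a dedicated merging tool rather than hand-rolled code.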
chargoddard/Yi-34B-Llama
chargoddard
"2023-11-09T19:16:00Z"
1,694
58
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-06T00:20:02Z"
---
license: other
license_name: yi-license
license_link: LICENSE
---

[01-ai/Yi-34B](https://huggingface.co/01-ai/Yi-34B/tree/main) with tensors renamed to match standard Llama modelling code.

The model can be loaded without trust_remote_code, but the tokenizer cannot. The branch `llama-tokenizer` uses the Llama tokenizer class as well.
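For reference, a minimal loading sketch based on the notes above; pulling the tokenizer from the `llama-tokenizer` branch is one of the two options described, and the dtype/device settings (which assume `accelerate` is installed) are illustrative choices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chargoddard/Yi-34B-Llama"

# The renamed tensors load through the standard Llama code path, so no trust_remote_code is needed.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# Either pass trust_remote_code=True for the main-branch tokenizer, or use the
# Llama-class tokenizer from the `llama-tokenizer` branch as shown here.
tokenizer = AutoTokenizer.from_pretrained(repo, revision="llama-tokenizer")

inputs = tokenizer("There is a place where the sidewalk ends", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```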
RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf
RichardErkhov
"2024-05-20T01:49:02Z"
1,694
0
null
[ "gguf", "region:us" ]
null
"2024-05-20T00:03:39Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Synatra-RP-Orca-2-7b-v0.1 - GGUF - Model creator: https://huggingface.co/maywell/ - Original model: https://huggingface.co/maywell/Synatra-RP-Orca-2-7b-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Synatra-RP-Orca-2-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q2_K.gguf) | Q2_K | 2.36GB | | [Synatra-RP-Orca-2-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [Synatra-RP-Orca-2-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ3_S.gguf) | IQ3_S | 2.75GB | | [Synatra-RP-Orca-2-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [Synatra-RP-Orca-2-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ3_M.gguf) | IQ3_M | 2.9GB | | [Synatra-RP-Orca-2-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K.gguf) | Q3_K | 3.07GB | | [Synatra-RP-Orca-2-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [Synatra-RP-Orca-2-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [Synatra-RP-Orca-2-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [Synatra-RP-Orca-2-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.56GB | | [Synatra-RP-Orca-2-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [Synatra-RP-Orca-2-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | [Synatra-RP-Orca-2-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_K.gguf) | Q4_K | 3.8GB | | [Synatra-RP-Orca-2-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [Synatra-RP-Orca-2-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q4_1.gguf) | Q4_1 | 3.95GB | | [Synatra-RP-Orca-2-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.33GB | | [Synatra-RP-Orca-2-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_K_S.gguf) | 
Q5_K_S | 4.33GB | | [Synatra-RP-Orca-2-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_K.gguf) | Q5_K | 4.45GB | | [Synatra-RP-Orca-2-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [Synatra-RP-Orca-2-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q5_1.gguf) | Q5_1 | 4.72GB | | [Synatra-RP-Orca-2-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q6_K.gguf) | Q6_K | 5.15GB | | [Synatra-RP-Orca-2-7b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/maywell_-_Synatra-RP-Orca-2-7b-v0.1-gguf/blob/main/Synatra-RP-Orca-2-7b-v0.1.Q8_0.gguf) | Q8_0 | 6.67GB |

Original model description:

---
license: apache-2.0
---

# **Synatra-RP-Orca-2-7b-v0.1🐧**

## Support Me

Synatra is a personal project and is being developed with one person's resources. If you like the model, how about a little research funding?

[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)

Wanna be a sponsor? (Please) Contact me on Telegram **AlzarTakkarsen**

# **Model Details**

**Base Model**
microsoft/Orca-2-7b

**Model Description**
It is a test RP SFT model, finetuned from microsoft/Orca-2-7b.

**Trained On**
A100 80GB * 1

**Instruction format**
Alpaca (better), ChatML
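For reference, a minimal sketch of running one of the GGUF files above locally with `llama-cpp-python`; the chosen quant, file path, sampling settings, and the Alpaca-style prompt are assumptions for the example.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Point this at whichever quant you downloaded, e.g. the Q4_K_M file from the table above.
llm = Llama(model_path="Synatra-RP-Orca-2-7b-v0.1.Q4_K_M.gguf", n_ctx=4096)

# The card lists Alpaca (and ChatML) as the instruction format.
prompt = (
    "### Instruction:\n"
    "Introduce yourself in two sentences.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=128, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```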
vvrules00/llama-3-8b-Instruct-bnb-4bit-qa
vvrules00
"2024-06-20T10:09:41Z"
1,693
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-20T10:00:43Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf base_model: unsloth/llama-3-8b-Instruct-bnb-4bit --- # Uploaded model - **Developed by:** vvrules00 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
huggingface-course/bert-finetuned-ner
huggingface-course
"2022-12-05T23:49:09Z"
1,692
11
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: test-bert-finetuned-ner results: - task: type: token-classification name: Token Classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - type: precision value: 0.9354625186165811 name: Precision - type: recall value: 0.9513631773813531 name: Recall - type: f1 value: 0.943345848977889 name: F1 - type: accuracy value: 0.9867545770294931 name: Accuracy - task: type: token-classification name: Token Classification dataset: name: conll2003 type: conll2003 config: conll2003 split: test metrics: - type: accuracy value: 0.9003797607979704 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGVlNjEyMTJmOTBhMmE1NjY1ODA3MTE0ZjM1YjU5Mzk2ZTY1NWE2MTZiMGMxZTRiNDNjNzNiYzI2NzZiMzAxMiIsInZlcnNpb24iOjF9.ScTPJWA72u8-LTp78w7U8teH-TXdyWnoz4vnK-1TefERahcKQ51eekHI_2xjOPe-1uQmw5z8rKTZfh3MOv-HCw - type: precision value: 0.9286807108391197 name: Precision verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjc0OGM4MTQ0OGM3NzA1ZTJmODg4YmJiZTZjOTVkZWYzZGYxZGYzZThhYzRkMzAxOWNhZmQ0NmJhNTMxZGI4MCIsInZlcnNpb24iOjF9.vloc_Hl4_UmVHUMTN2utIKJ2gYntSlZVuVJNkeGn-fR9SeRbKzmkBds4GQNjsV0JiVmnX0POB1hUqRGP4UjdAg - type: recall value: 0.9158238551580065 name: Recall verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzE2ZGIwNTAzNDhkMDc0MmU2NTQ2MjIyNjA0NzI0N2JiNDM3NjgxNTU3YmNiNWIwOTRmYzNkMTE0MmUyOTNhNiIsInZlcnNpb24iOjF9.-mi3lImJs1-993tdLiTL7KGFEb-jZJVrviqUlFaVY0rgkojDvRyhbUBnJoD4dadh728kRDTH5NW-ZKb9B9FTDg - type: f1 value: 0.9222074745602832 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGE1ODE0MGUzZmFhZTNhOWMwMzk3NzQ5MTQwOWIyNjAxZWUwMDgzNDBlNGIyNmY4YmQ4ZDRmOTljZmYyNGYzOCIsInZlcnNpb24iOjF9.PjQJinFobofJhCpsTLEuMSjsskLfbOmAPPQVGWBGk7jYOi3lvd9CUn9i_g1GlbbxuxmO1L9sMAj-pANn-aQiAA - type: loss value: 0.8705922365188599 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGI2YTU4ZmExYmZmMjBmMjM3ZWJhNDA0OGMwZjM4YWE4MjU1YmFjMTQxMjQ5MDlhNzYzYTBmYTc3YzRkN2UwOCIsInZlcnNpb24iOjF9.iyuIRW9M-yknXWi2Whboo-rjzicgxSGaeCpypgiQVYexjenzA5itKt_CDx52t7508zYshp-1ERnEHuEwBic9Aw --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0600 - Precision: 0.9355 - Recall: 0.9514 - F1: 0.9433 - Accuracy: 0.9868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0849 | 1.0 | 1756 | 0.0713 | 0.9144 | 0.9366 | 0.9253 | 0.9817 | | 0.0359 | 2.0 | 3512 | 0.0658 | 0.9346 | 0.9500 | 0.9422 | 0.9860 | | 0.0206 | 3.0 | 5268 | 0.0600 | 0.9355 | 0.9514 | 0.9433 | 0.9868 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.8.1+cu111 - Datasets 1.12.1.dev0 - Tokenizers 0.10.3
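For a quick smoke test of the checkpoint, here is a minimal inference sketch using the token-classification pipeline; the example sentence and the `aggregation_strategy` setting are illustrative choices, not part of the original training setup.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="huggingface-course/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entity spans
)

print(ner("My name is Sylvain and I work at Hugging Face in Brooklyn."))
```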
timm/vit_large_patch32_224.orig_in21k
timm
"2024-02-09T18:12:58Z"
1,692
0
timm
[ "timm", "pytorch", "safetensors", "image-feature-extraction", "dataset:imagenet-21k", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-feature-extraction
"2022-12-22T07:49:35Z"
--- license: apache-2.0 library_name: timm tags: - image-feature-extraction - timm datasets: - imagenet-21k --- # Model card for vit_large_patch32_224.orig_in21k A Vision Transformer (ViT) image classification model. Pretrained on ImageNet-21k in JAX by paper authors, ported to PyTorch by Ross Wightman. This model does not have a classification head, useful for features and fine-tune only. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 305.5 - GMACs: 15.3 - Activations (M): 11.1 - Image size: 224 x 224 - **Papers:** - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-21k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_large_patch32_224.orig_in21k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_large_patch32_224.orig_in21k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 50, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
pythainlp/wangchanglm-7.5B-sft-enth
pythainlp
"2023-05-29T15:24:36Z"
1,692
8
transformers
[ "transformers", "pytorch", "xglm", "text-generation", "en", "th", "ja", "vi", "dataset:laion/OIG", "dataset:Hello-SimpleAI/HC3", "dataset:databricks/databricks-dolly-15k", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-04-25T04:37:10Z"
--- license: cc-by-sa-4.0 datasets: - laion/OIG - Hello-SimpleAI/HC3 - databricks/databricks-dolly-15k language: - en - th - ja - vi pipeline_tag: text-generation --- # Model Card for WangChanGLM 🐘 - The Multilingual Instruction-Following Model <!-- Provide a longer summary of what this model is. --> WangChanGLM is a multilingual, instruction-finetuned Facebook XGLM-7.5B using open-source, commercially permissible datasets (LAION OIG chip2 and infill_dbpedia, DataBricks Dolly v2, OpenAI TL;DR, and Hello-SimpleAI HC3; about 400k examples), released under CC-BY SA 4.0. The models are trained to perform a subset of instruction-following tasks we found most relevant namely: reading comprehension, brainstorming, and creative writing. We provide the weights for a model finetuned on an English-only dataset ([wangchanglm-7.5B-sft-en](https://huggingface.co/pythainlp/wangchanglm-7.5B-sft-en)) and another checkpoint further finetuned on Google-Translated Thai dataset ([wangchanglm-7.5B-sft-enth](https://huggingface.co/pythainlp/wangchanglm-7.5B-sft-enth)). We perform Vicuna-style evaluation using both humans and ChatGPT (in our case, `gpt-3.5-turbo` since we are still on the waitlist for `gpt-4`) and observe some discrepancies between the two types of annoators. All training and evaluation codes are shared under the [Apache-2.0 license](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE) in our Github, as well as datasets and model weights on [HuggingFace](https://huggingface.co/pythainlp). In a similar manner to [Dolly v2](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm), we use only use open-source, commercially permissive pretrained models and datasets, our models are neither restricted by non-commercial clause like models that use LLaMA as base nor non-compete clause like models that use self-instruct datasets from ChatGPT. See our live demo [here](). - **Developed by:** [PyThaiNLP](https://www.github.com/pythainlp) and [VISTEC-depa AI Research Institute of Thailand](https://huggingface.co/airesearch) - **Model type:** Finetuned [XGLM-7.5B](https://huggingface.co/facebook/xglm-7.5B) - **Language(s) (NLP)**: `en`, `th`, `ja`, `vi` capacibilities evaluated, theoretically all 30 languages of [XGLM-7.5B](https://huggingface.co/facebook/xglm-7.5B) - **License:** [CC-BY SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm) - **Blog:** [Medium](https://link.medium.com/s2MWr3ZXnzb) - **Demo:** [Colab notebook](https://colab.research.google.com/github/pythainlp/WangChanGLM/blob/main/demo/WangChanGLM_v0_1_demo.ipynb) ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> Intended to be use as an instruction-following model for reading comprehension, brainstorming and creative writing. ### Downstream Use <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> The model can be finetuned for any typical instruction-following use cases. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. 
--> We do not expect the models to perform well in math problems, reasoning, and factfulness. We intentionally filter out training examples from these use cases. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> We noticed similar limitations to other finetuned instruction followers such as math problems, reasoning, and factfulness. Even though the models do not perform on the level that we expect them to be abused, they do contain undesirable biases and toxicity and should be further optimized for your particular use cases. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ``` model_name = "pythainlp/wangchanglm-7.5B-sft-en" model = AutoModelForCausalLM.from_pretrained( model_name, return_dict=True, load_in_8bit=True , device_map="auto", torch_dtype=torch.float16, offload_folder="./", low_cpu_mem_usage=True, ) text = "เล่นหุ้นยังไงให้รวย" tokenizer = AutoTokenizer.from_pretrained(model_name) batch = tokenizer(text, return_tensors="pt") with torch.cuda.amp.autocast(): output_tokens = model.generate( input_ids=batch["input_ids"], max_new_tokens=max_gen_len, # 512 begin_suppress_tokens = exclude_ids, no_repeat_ngram_size=2, #oasst k50 top_k=50, top_p=top_p, # 0.95 typical_p=1., temperature=temperature, # 0.9 # #oasst typical3 # typical_p = 0.3, # temperature = 0.8, # repetition_penalty = 1.2, ) tokenizer.decode(output_tokens[0], skip_special_tokens=True) ``` ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Finetuning datasets are sourced from [LAION OIG chip2 and infill_dbpedia](https://huggingface.co/datasets/laion/OIG) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [DataBricks Dolly v2](https://github.com/databrickslabs/dolly) ([Apache-2.0](https://github.com/pythainlp/wangchanglm/blob/main/LICENSE)), [OpenAI TL;DR](https://github.com/openai/summarize-from-feedback) ([MIT](https://opensource.org/license/mit/)), and [Hello-SimpleAI HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) ([CC-BY SA](https://creativecommons.org/licenses/by-sa/4.0/)). ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing See [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm). #### Training Hyperparameters - **Training regime:** LoRA with 4 GPUs. See more details at [pythainlp/wangchanglm](https://www.github.com/pythainlp/wangchanglm). ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> We performed automatic evaluation in the style of [Vicuna](https://vicuna.lmsys.org/) and human evaluation. See more details from our [blog](). ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.432 kgCO2eq/kWh. A cumulative of 500 hours of computation was performed on hardware of type Tesla V100-SXM2-32GB (TDP of 300W). Total emissions are estimated to be 64.8 CO2eq of which 0 percents were directly offset. Estimations were conducted using the [MachineLearning Impact calculator](https://mlco2.github.io/impact#compute). ## Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @software{charin_polpanumas_2023_7878101, author = {Charin Polpanumas and Wannaphong Phatthiyaphaibun and Patomporn Payoungkhamdee and Peerat Limkonchotiwat and Lalita Lowphansirikul and Can Udomcharoenchaikit and Titipat Achakulwisut and Ekapol Chuangsuwanich and Sarana Nutanong}, title = {{WangChanGLM🐘 — The Multilingual Instruction- Following Model}}, month = apr, year = 2023, publisher = {Zenodo}, version = {v0.1}, doi = {10.5281/zenodo.7878101}, url = {https://doi.org/10.5281/zenodo.7878101} } ``` ## Model Card Contact [PyThaiNLP](https://github.com/pythainlp)
vonjack/Qwen-LLaMAfied-HFTok-7B-Chat
vonjack
"2024-04-25T13:55:56Z"
1,692
23
transformers
[ "transformers", "pytorch", "llama", "text-generation", "qwen", "llama-2", "en", "zh", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-09T08:17:56Z"
---
language:
- en
- zh
tags:
- qwen
- llama
- llama-2
license: apache-2.0
---

[WIP] Original repository: [JosephusCheung/Qwen-LLaMAfied-7B-Chat](https://huggingface.co/JosephusCheung/Qwen-LLaMAfied-7B-Chat).

This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.

You can use LlamaForCausalLM for model inference, the same as with LLaMA/LLaMA-2 models.

I converted the tokenizer from the tiktoken format to the Hugging Face format, so you no longer need to allow external code when loading.

The model has been edited to be white-labelled, meaning it will no longer call itself a Qwen.

SPOILER: Further finetuning is in progress. The current version is a work in progress, and some knowledge may be biased or illusory due to the structural changes. It will be updated very, very soon.

PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)

CURRENT MMLU: 50.36

Issue: Compared to the original Qwen-Chat score of 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
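Since the card states that the model loads with the standard Llama classes and expects ChatML prompts, here is a minimal inference sketch; the system prompt, sampling settings, and dtype/device choices (which assume `accelerate` is available) are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

repo = "vonjack/Qwen-LLaMAfied-HFTok-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = LlamaForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Build a ChatML-formatted prompt by hand.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nGive me a one-sentence summary of what a llama is.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```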
Phind/Phind-CodeLlama-34B-v1
Phind
"2023-08-28T19:53:12Z"
1,692
323
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code llama", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-25T20:16:25Z"
--- license: llama2 model-index: - name: Phind-CodeLlama-34B-v1 results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 67.6% verified: false tags: - code llama --- # NOTE: We've now launched **Phind-CodeLlama-34B-v2**, which acheives **73.8% pass@1** on HumanEval. It is instruction-tuned and much easier to use than this v1 model. # Check out Phind-CodeLlama-34B-v2 [here](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2). ## **Phind-CodeLlama-34B-v1** We've fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset that achieve 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieves 67%. We've applied OpenAI's decontamination methodology to our dataset to ensure result validity. More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4). ## Model Details This model is fine-tuned from CodeLlama-34B and achieves 67.6% pass@1 on HumanEval. ## Dataset Details We fined-tuned on a proprietary dataset of ~80k high quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. The Phind models were trained for 2 epochs, for a total of ~160k examples shown. LoRA was not used -- both models are a native finetune. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours on 32 A100-80GB GPUs. We used a sequence length of 4096 tokens. ## How to Get Started with the Model Make sure to install Transformers from the main git branch: ```bash pip install git+https://github.com/huggingface/transformers.git ``` ## How to Prompt the Model **Please note that this model is somewhat instruction-tuned, but not chat-tuned.** Do not try to use the Llama chat markup with this model. Instead, simply tell it what you want and add "\n: " at the end of your task. For example: ``` Write me a linked list implementation: \n ``` ## How to reproduce HumanEval Results To reproduce our results: ```python from transformers import AutoTokenizer, LlamaForCausalLM from human_eval.data import write_jsonl, read_problems from tqdm import tqdm # initialize the model model_path = "Phind/Phind-CodeLlama-34B-v1" model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_path) # HumanEval helper def generate_one_completion(prompt: str): tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096) # Generate generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=256, do_sample=True, top_p=0.75, top_k=40, temperature=0.1) completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] completion = completion.replace(prompt, "").split("\n\n\n")[0] return completion # perform HumanEval problems = read_problems() num_samples_per_task = 1 samples = [ dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"])) for task_id in tqdm(problems) for _ in range(num_samples_per_task) ] write_jsonl("samples.jsonl", samples) # run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments. 
## Training details <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** 32x A100-80GB - **Hours used:** 90 GPU-hours - **Cloud Provider:** AWS - **Compute Region:** us-east-1
Brillibits/Instruct_Llama70B_Dolly15k
Brillibits
"2024-05-15T19:31:26Z"
1,692
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:databricks/databricks-dolly-15k", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-14T16:24:41Z"
---
license: llama2
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
---

# Instruct_Llama70B_Dolly15k

Fine-tuned from Llama-2-70B, using Dolly15k as the dataset: 80% for training, 15% for validation, 5% for test. Trained for 1.5 epochs using QLoRA, with a 1024-token context window.

# Model Details

* **Trained by**: [Brillibits](https://brillibits.com/en). See [YouTube](https://www.youtube.com/@Brillibits) as well.
* **Model type:** **Instruct_Llama70B_Dolly15k** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Instruct_Llama70B_Dolly15k**: llama2 license

# Prompting

## Prompt Template With Context

```
Write a 10-line poem about a given topic

Input:

The topic is about racecars

Output:
```

## Prompt Template Without Context

```
Who was the second president of the United States?

Output:
```

## Professional Assistance

This model and other models like it are great, but where LLMs hold the most promise is when they are applied to custom data to automate a wide variety of tasks. If you have a dataset and want to see whether that data could be used to automate some tasks, and you are looking for professional assistance, contact me [here](mailto:[email protected]).

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 60.97 |
| ARC (25-shot) | 68.34 |
| HellaSwag (10-shot) | 87.21 |
| MMLU (5-shot) | 69.52 |
| TruthfulQA (0-shot) | 46.46 |
| Winogrande (5-shot) | 84.29 |
| GSM8K (5-shot) | 42.68 |
| DROP (3-shot) | 28.26 |
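For reference, a minimal sketch of generating with the no-context prompt template shown above via transformers; the dtype/device settings (which assume `accelerate` is available) and greedy decoding are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Brillibits/Instruct_Llama70B_Dolly15k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# No-context template: the instruction followed by "Output:".
prompt = "Who was the second president of the United States?\n\nOutput:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```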
openaccess-ai-collective/jackalope-7b
openaccess-ai-collective
"2023-10-12T08:21:34Z"
1,692
29
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "en", "dataset:Open-Orca/OpenOrca", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "dataset:PygmalionAI/PIPPA", "dataset:meta-math/MetaMathQA", "dataset:riddle_sense", "arxiv:2306.02707", "arxiv:2301.13688", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-07T16:08:42Z"
--- datasets: - Open-Orca/OpenOrca - LDJnr/LessWrong-Amplify-Instruct - LDJnr/Pure-Dove - LDJnr/Verified-Camel - PygmalionAI/PIPPA - meta-math/MetaMathQA - riddle_sense language: - en library_name: transformers pipeline_tag: text-generation license: apache-2.0 --- <p><h1>🐰🦌 Jackalope 7B 🐰🦌</h1></p> ![Jackalope Logo](https://huggingface.co/openaccess-ai-collective/jackalope-7b/resolve/main/images/jackalope.jpg "Jackalope Logo") [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # Jackalope 7B We have used the [SlimOrca dataset](https://huggingface.co/datasets/Open-Orca/SlimOrca), PIPPA, and various other open datasets to fine-tune on top of [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707). We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This release highlights the efficiency of SlimOrca, while improving the ability of the model's multi-turn chat. HF Leaderboard evals puts this model only slightly below the MistralOrca release, but can be considered a reasonable tradeoff for a more general model that can handle multi-turn chat. If you'd like to try the model now, we have it running on fast GPUs unquantized: https://huggingface.co/spaces/openaccess-ai-collective/jackalope-7b Join the OpenAccess AI Collective Discord for more information about Axolotl trainer and other OAAIC models here: https://discord.gg/5y8STgB3P3 Also join the AlignmentLab Discord for sneak-peak announcements: https://AlignmentLab.ai # Quantized Models Quantized versions of this model are generously made available by [TheBloke](https://huggingface.co/TheBloke). - AWQ: https://huggingface.co/TheBloke/Jackalope-7B-AWQ - GPTQ: https://huggingface.co/TheBloke/Jackalope-7B-GPTQ - GGUF: https://huggingface.co/TheBloke/Jackalope-7B-GGUF # Prompt Template We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this. This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML. This formatting is also available via a pre-defined [Transformers chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means that lists of messages can be formatted for you with the `apply_chat_template()` method: ```python chat = [ {"role": "system", "content": "You are JackalopeAI, a large language model trained by OpenAccess AI Collective. Write out your reasoning step-by-step to be sure you get the right answers!"} {"role": "user", "content": "How are you?"}, {"role": "assistant", "content": "I am doing well!"}, {"role": "user", "content": "Please tell me about the mythical creatures called jackalopes."}, ] tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) ``` which will yield: ``` <|im_start|>system You are JackalopeAI. Write out your reasoning step-by-step to be sure you get the right answers! 
<|im_end|> <|im_start|>user How are you?<|im_end|> <|im_start|>assistant I am doing well!<|im_end|> <|im_start|>user Please tell me about the mythical creatures called jackalopes.<|im_end|> <|im_start|>assistant ``` If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized and formatted conversation ready to pass to `model.generate()`. # Evaluation ## HuggingFace Leaderboard Performance ![All benchmarks](https://huggingface.co/openaccess-ai-collective/jackalope-7b/resolve/main/images/bench.png) | Metric | Value | |-----------------------|--| | MMLU (5-shot) | 63.63 | | ARC (25-shot) | 63.31 | | HellaSwag (10-shot) | 83.29 | | TruthfulQA (0-shot) | 49.99 | | Avg. | 65.06 | We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. # Dataset We used a verified, curated, filtered selection of most of the GPT-4 augmented data from the OpenOrca dataset. Additionally we include multi-turn chat from PIPPA, various datasets by LDJ from Nous Research, MetaMathQA, and Chain-of-Thought augmented data from the train split of RiddleSense. - [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) - [LDJnr/LessWrong-Amplify-Instruct](https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct) - [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove) - [LDJnr/Verified-Camel](https://huggingface.co/datasets/LDJnr/Verified-Camel) - [PygmalionAI/PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) - [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) - [riddle_sense](https://huggingface.co/datasets/riddle_sense) # Training We trained with 8x A6000 GPUs for 96 hours, completing 4 epochs of full fine tuning on our dataset in one training run. Commodity cost was ~$650. # Citation ```bibtex @software{lian2023jackalope, title = {Jackalope 7B: Mistral-7B Model Multi-Turn Chat tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{openaccess-ai-collective/jackalope-7b}, } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
myrulezzzz/website_FAQ_mistral_Q4_GGUF_V6
myrulezzzz
"2024-06-19T22:19:41Z"
1,692
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-19T22:09:01Z"
--- base_model: unsloth/mistral-7b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf --- # Uploaded model - **Developed by:** myrulezzzz - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
AJosh/medical-llama3b
AJosh
"2024-06-25T05:44:03Z"
1,692
0
transformers
[ "transformers", "gguf", "llama", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-06-25T02:43:28Z"
--- license: apache-2.0 ---
camel-ai/CAMEL-13B-Role-Playing-Data
camel-ai
"2024-06-10T01:57:34Z"
1,691
20
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2303.17760", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-02T20:49:35Z"
CAMEL-13B-Role-Playing-Data is a chat large language model obtained by finetuning the LLaMA-13B model on a total of 229K conversations created through our role-playing framework proposed in [CAMEL](https://arxiv.org/abs/2303.17760). We evaluate our model offline using EleutherAI's language model evaluation harness, the same harness used by Hugging Face's Open LLM Leaderboard. CAMEL-13B scores an average of 57.2.

---
license: cc-by-nc-4.0
---

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 45.03 |
| ARC (25-shot) | 54.95 |
| HellaSwag (10-shot) | 79.25 |
| MMLU (5-shot) | 46.61 |
| TruthfulQA (0-shot) | 46.35 |
| Winogrande (5-shot) | 74.03 |
| GSM8K (5-shot) | 7.35 |
| DROP (3-shot) | 6.66 |
cvssp/audioldm2-music
cvssp
"2024-04-16T10:02:30Z"
1,691
15
diffusers
[ "diffusers", "safetensors", "arxiv:2308.05734", "license:cc-by-nc-sa-4.0", "diffusers:AudioLDM2Pipeline", "region:us" ]
null
"2023-08-21T11:00:44Z"
--- license: cc-by-nc-sa-4.0 --- # AudioLDM 2 Music AudioLDM 2 is a latent text-to-audio diffusion model capable of generating realistic audio samples given any text input. It is available in the 🧨 Diffusers library from v0.21.0 onwards. # Model Details AudioLDM 2 was proposed in the paper [AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining](https://arxiv.org/abs/2308.05734) by Haohe Liu et al. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. # Checkpoint Details This is the original, **music** version of the AudioLDM 2 model, also referred to as **audioldm2-music-665k**. There are three official AudioLDM 2 checkpoints. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation. All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. See table below for details on the three official checkpoints: | Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h | |-----------------------------------------------------------------|---------------|-----------------|------------------|-------------------| | [audioldm2](https://huggingface.co/cvssp/audioldm2) | Text-to-audio | 350M | 1.1B | 1150k | | [audioldm2-large](https://huggingface.co/cvssp/audioldm2-large) | Text-to-audio | 750M | 1.5B | 1150k | | [audioldm2-music](https://huggingface.co/cvssp/audioldm2-music) | Text-to-music | 350M | 1.1B | 665k | ## Model Sources - [**Original Repository**](https://github.com/haoheliu/audioldm2) - [**🧨 Diffusers Pipeline**](https://huggingface.co/docs/diffusers/api/pipelines/audioldm2) - [**Paper**](https://arxiv.org/abs/2308.05734) - [**Demo**](https://huggingface.co/spaces/haoheliu/audioldm2-text2audio-text2music) # Usage First, install the required packages: ``` pip install --upgrade diffusers transformers accelerate ``` ## Text-to-Audio For text-to-audio generation, the [AudioLDM2Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/audioldm2) can be used to load pre-trained weights and generate text-conditional audio outputs: ```python from diffusers import AudioLDM2Pipeline import torch repo_id = "cvssp/audioldm2-music" pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" audio = pipe(prompt, num_inference_steps=200, audio_length_in_s=10.0).audios[0] ``` The resulting audio output can be saved as a .wav file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=16000, data=audio) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(audio, rate=16000) ``` ## Tips Prompts: * Descriptive prompt inputs work best: you can use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g., "water stream in a forest" instead of "stream"). * It's best to use general terms like 'cat' or 'dog' instead of specific names or abstract objects that the model may not be familiar with. Inference: * The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument: higher steps give higher quality audio at the expense of slower inference. * The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument. 
When evaluating generated waveforms: * The quality of the generated waveforms can vary significantly based on the seed. Try generating with different seeds until you find a satisfactory generation * Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly. The following example demonstrates how to construct a good audio generation using the aforementioned tips: ```python import scipy import torch from diffusers import AudioLDM2Pipeline # load the pipeline repo_id = "cvssp/audioldm2-music" pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") # define the prompts prompt = "Techno music with a strong, upbeat tempo and high melodic riffs" negative_prompt = "Low quality." # set the seed generator = torch.Generator("cuda").manual_seed(0) # run the generation audio = pipe( prompt, negative_prompt=negative_prompt, num_inference_steps=200, audio_length_in_s=10.0, num_waveforms_per_prompt=3, ).audios # save the best audio sample (index 0) as a .wav file scipy.io.wavfile.write("techno.wav", rate=16000, data=audio[0]) ``` # Citation **BibTeX:** ``` @article{liu2023audioldm2, title={"AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining"}, author={Haohe Liu and Qiao Tian and Yi Yuan and Xubo Liu and Xinhao Mei and Qiuqiang Kong and Yuping Wang and Wenwu Wang and Yuxuan Wang and Mark D. Plumbley}, journal={arXiv preprint arXiv:2308.05734}, year={2023} } ```
Phind/Phind-CodeLlama-34B-Python-v1
Phind
"2023-08-26T03:05:48Z"
1,691
250
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code llama", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-25T20:33:09Z"
--- license: llama2 model-index: - name: Phind-CodeLlama-34B-v1 results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 69.5% verified: false tags: - code llama --- # **Phind-CodeLlama-34B-Python-v1** We've fine-tuned CodeLlama-34B and CodeLlama-34B-Python on an internal Phind dataset that achieve 67.6% and 69.5% pass@1 on HumanEval, respectively. GPT-4 achieves 67%. We've applied OpenAI's decontamination methodology to our dataset to ensure result validity. More details can be found on our [blog post](https://www.phind.com/blog/code-llama-beats-gpt4). ## Model Details This model is fine-tuned from CodeLlama-34B-Python and achieves 69.5% pass@1 on HumanEval. ## Dataset Details We fined-tuned on a proprietary dataset of ~80k high quality programming problems and solutions. This dataset consists of instruction-answer pairs instead of code completion examples, making it structurally different from HumanEval. The Phind models were trained for 2 epochs, for a total of ~160k examples shown. LoRA was not used -- both models are a native finetune. We used DeepSpeed ZeRO 3 and Flash Attention 2 to train these models in three hours on 32 A100-80GB GPUs. We used a sequence length of 4096 tokens. ## How to Get Started with the Model Make sure to install Transformers from the main git branch: ```bash pip install git+https://github.com/huggingface/transformers.git ``` ## How to Prompt the Model **Please note that this model is somewhat instruction-tuned, but not chat-tuned.** Do not try to use the Llama chat markup with this model. Instead, simply tell it what you want and add "\n: " at the end of your task. For example: ``` Write me a linked list implementation: \n ``` ## How to reproduce HumanEval Results To reproduce our results: ```python from transformers import AutoTokenizer, LlamaForCausalLM from human_eval.data import write_jsonl, read_problems from tqdm import tqdm # initialize the model model_path = "Phind/Phind-CodeLlama-34B-v1" model = LlamaForCausalLM.from_pretrained(model_path, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_path) # HumanEval helper def generate_one_completion(prompt: str): tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=4096) # Generate generate_ids = model.generate(inputs.input_ids.to("cuda"), max_new_tokens=256, do_sample=True, top_p=0.75, top_k=40, temperature=0.1) completion = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] completion = completion.replace(prompt, "").split("\n\n\n")[0] return completion # perform HumanEval problems = read_problems() num_samples_per_task = 1 samples = [ dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"])) for task_id in tqdm(problems) for _ in range(num_samples_per_task) ] write_jsonl("samples.jsonl", samples) # run `evaluate_functional_correctness samples.jsonl` in your HumanEval code sandbox ``` ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model has undergone very limited testing. Additional safety testing should be performed before any real-world deployments. ## Training details <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> - **Hardware Type:** 32x A100-80GB - **Hours used:** 90 GPU-hours - **Cloud Provider:** AWS - **Compute Region:** us-east-1
Niggendar/sevenof9PonyRealMix_realPonyMixFinal
Niggendar
"2024-05-30T08:49:17Z"
1,691
1
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-05-30T08:39:17Z"
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
legraphista/neo_7b-IMat-GGUF
legraphista
"2024-05-31T12:21:47Z"
1,691
0
gguf
[ "gguf", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "base_model:m-a-p/neo_7b", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-31T11:11:07Z"
--- base_model: m-a-p/neo_7b inference: false library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # neo_7b-IMat-GGUF _Llama.cpp imatrix quantization of m-a-p/neo_7b_ Original Model: [m-a-p/neo_7b](https://huggingface.co/m-a-p/neo_7b) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3051](https://github.com/ggerganov/llama.cpp/releases/tag/b3051) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [neo_7b.Q8_0.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q8_0.gguf) | Q8_0 | 8.28GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q6_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q6_K.gguf) | Q6_K | 6.40GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q4_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q3_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K.gguf) | Q3_K | 3.79GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q2_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q2_K.gguf) | Q2_K | 2.92GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [neo_7b.BF16.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.BF16.gguf) | BF16 | 15.59GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.FP16.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.FP16.gguf) | F16 | 15.59GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q8_0.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q8_0.gguf) | Q8_0 | 8.28GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q6_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q6_K.gguf) | Q6_K | 6.40GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q5_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q5_K.gguf) | Q5_K | 5.54GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q5_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q5_K_S.gguf) | Q5_K_S | 5.39GB | ✅ Available | ⚪ Static | 📦 No | [neo_7b.Q4_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q4_K.gguf) | Q4_K | 4.74GB | ✅ Available | 🟢 IMatrix | 📦 No | 
[neo_7b.Q4_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q4_K_S.gguf) | Q4_K_S | 4.47GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ4_NL.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ4_NL.gguf) | IQ4_NL | 4.44GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ4_XS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ4_XS.gguf) | IQ4_XS | 4.20GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q3_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K.gguf) | Q3_K | 3.79GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q3_K_L.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K_L.gguf) | Q3_K_L | 4.11GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q3_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q3_K_S.gguf) | Q3_K_S | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ3_M.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_M.gguf) | IQ3_M | 3.53GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ3_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_S.gguf) | IQ3_S | 3.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ3_XS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_XS.gguf) | IQ3_XS | 3.25GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ3_XXS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ3_XXS.gguf) | IQ3_XXS | 3.03GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q2_K.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q2_K.gguf) | Q2_K | 2.92GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.Q2_K_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.Q2_K_S.gguf) | Q2_K_S | 2.71GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ2_M.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_M.gguf) | IQ2_M | 2.68GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ2_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_S.gguf) | IQ2_S | 2.47GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ2_XS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_XS.gguf) | IQ2_XS | 2.36GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ2_XXS.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ2_XXS.gguf) | IQ2_XXS | 2.14GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ1_M.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ1_M.gguf) | IQ1_M | 1.89GB | ✅ Available | 🟢 IMatrix | 📦 No | [neo_7b.IQ1_S.gguf](https://huggingface.co/legraphista/neo_7b-IMat-GGUF/blob/main/neo_7b.IQ1_S.gguf) | IQ1_S | 1.73GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/neo_7b-IMat-GGUF --include "neo_7b.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/neo_7b-IMat-GGUF --include "neo_7b.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <s>[INST] <<SYS>> You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. 
Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <</SYS>> {user_prompt} [/INST]{assistant_response}</s><s>[INST] {next_user_prompt} [/INST] ``` ### Chat template with system prompt ``` <s>[INST] {user_prompt} [/INST]{assistant_response}</s><s>[INST] {next_user_prompt} [/INST] ``` ### Llama.cpp ``` llama.cpp/main -m neo_7b.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `neo_7b.Q8_0`) 3. Run `gguf-split --merge neo_7b.Q8_0/neo_7b.Q8_0-00001-of-XXXXX.gguf neo_7b.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
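As a programmatic alternative to the `huggingface-cli` commands in the download section above, the same files can be fetched from Python with the `huggingface_hub` library. This is only a minimal sketch; the repo and file names simply mirror the Q8_0 examples above.

```python
# pip install -U huggingface_hub
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant (mirrors the Q8_0 example above)
path = hf_hub_download(
    repo_id="legraphista/neo_7b-IMat-GGUF",
    filename="neo_7b.Q8_0.gguf",
    local_dir=".",
)
print(path)

# Split quants live in a folder; grab every chunk with a pattern,
# then merge them with gguf-split as described in the FAQ above
snapshot_download(
    repo_id="legraphista/neo_7b-IMat-GGUF",
    allow_patterns=["neo_7b.Q8_0/*"],
    local_dir=".",
)
```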
MetaIX/GPT4-X-Alpasta-30b
MetaIX
"2023-04-28T10:11:50Z"
1,690
64
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-25T22:52:03Z"
Don't be upsetti, here, have some spaghetti! Att: A'eala <3 <p><strong><font size="5">Information</font></strong></p> GPT4-X-Alpasta-30b works with Oobabooga's Text Generation Webui and KoboldAI. <p>This is an attempt at improving Open Assistant's performance as an instruct model while retaining its excellent prose. The merge consists of <a href="https://huggingface.co/chansung/gpt4-alpaca-lora-30b">Chansung's GPT4-Alpaca Lora</a> and <a href="https://huggingface.co/OpenAssistant/oasst-sft-6-llama-30b-xor">Open Assistant's native fine-tune</a>.</p> <p><strong><font size="5">Benchmarks</font></strong></p> <p><strong><font size="4">FP16</font></strong></p> <strong>Wikitext2</strong>: 4.6077961921691895 <strong>Ptb-New</strong>: 9.41549301147461 <strong>C4-New</strong>: 6.98392915725708 <p>Benchmarks brought to you by A'eala</p>
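The Wikitext2 / Ptb-New / C4-New figures above are presumably perplexity scores (lower is better). For readers who want to sanity-check a comparable number themselves, here is a rough, hypothetical sketch with `transformers`; the evaluation text is a placeholder, and a proper benchmark would slide a fixed-length window over the full test set rather than score a single chunk.

```python
# pip install transformers accelerate  (assumes enough memory for a 30B model in fp16)
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MetaIX/GPT4-X-Alpasta-30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

text = "The quick brown fox jumps over the lazy dog."  # substitute a WikiText-2 test chunk here
enc = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss  # mean next-token cross-entropy
print("perplexity:", math.exp(loss.item()))
```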
backyardai/Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF
backyardai
"2024-06-07T15:12:45Z"
1,690
1
null
[ "gguf", "uncencored", "llama-3", "tess", "lumimaid", "Lumi-tess", "base_model:ryzen88/Llama-3-70b-Uncensored-Lumi-Tess-gradient", "region:us" ]
null
"2024-06-07T12:38:01Z"
--- tags: - uncencored - llama-3 - tess - lumimaid - Lumi-tess base_model: ryzen88/Llama-3-70b-Uncensored-Lumi-Tess-gradient model_name: Llama-3-70b-Uncensored-Lumi-Tess-gradient-GGUF quantized_by: brooketh parameter_count: 70553706496 --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Llama 3 Uncensored Lumi Tess Gradient 70B - **Creator:** [ryzen88](https://huggingface.co/ryzen88/) - **Original:** [Llama 3 Uncensored Lumi Tess Gradient 70B](https://huggingface.co/ryzen88/Llama-3-70b-Uncensored-Lumi-Tess-gradient) - **Date Created:** 2024-05-10 - **Trained Context:** 262144 tokens - **Description:** Good Llama3 uncensored model with a long context, made using a breadcrumb ties merger of Instruct-gradient, Lumimaid, and Tess models. Works with a very wide range of sampler settings. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
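To make the "fewer bits per weight" idea above concrete, here is a toy Python sketch of symmetric integer quantization. It only illustrates the precision/size trade-off; llama.cpp's actual K-quant and IQ formats are far more sophisticated (block-wise scales, importance matrices, and so on).

```python
import numpy as np

def fake_quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Round weights to a signed integer grid with `bits` bits, then map back to float."""
    qmax = 2 ** (bits - 1) - 1             # e.g. 127 for 8-bit, 7 for 4-bit
    scale = np.abs(weights).max() / qmax   # one scale per tensor (a toy choice)
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                       # the dequantized values actually used at inference

w = np.random.default_rng(0).standard_normal(4096).astype(np.float32)
for bits in (8, 4, 2):
    err = np.abs(w - fake_quantize(w, bits)).mean()
    print(f"{bits}-bit: mean absolute error {err:.4f}")
```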
spacy/en_core_web_sm
spacy
"2023-11-21T08:10:13Z"
1,689
37
spacy
[ "spacy", "token-classification", "en", "license:mit", "model-index", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - spacy - token-classification language: - en license: mit model-index: - name: en_core_web_sm results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8454836771 - name: NER Recall type: recall value: 0.8456530449 - name: NER F Score type: f_score value: 0.8455683525 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.97246532 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.9175304332 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.89874821 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9059485531 --- ### Details: https://spacy.io/models/en#en_core_web_sm English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_sm` | | **Version** | `3.7.1` | | **spaCy** | `>=3.7.2,<3.8.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (113 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.86 | | `TOKEN_P` | 99.57 | | `TOKEN_R` | 99.58 | | `TOKEN_F` | 99.57 | | `TAG_ACC` | 97.25 | | `SENTS_P` | 92.02 | | `SENTS_R` | 89.21 | | `SENTS_F` | 90.59 | | `DEP_UAS` | 91.75 | | `DEP_LAS` | 89.87 | | `ENTS_P` | 84.55 | | `ENTS_R` | 84.57 | | `ENTS_F` | 84.56 |
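The card above documents the pipeline components and label scheme but includes no usage snippet; a minimal example (after `pip install spacy` and `python -m spacy download en_core_web_sm`) looks like this:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

# Part-of-speech tags and dependency labels from the tagger/parser components
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Named entities from the ner component (labels follow the scheme above, e.g. ORG, GPE, MONEY)
for ent in doc.ents:
    print(ent.text, ent.label_)
```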
bigscience/bloomz
bigscience
"2023-05-27T17:25:31Z"
1,689
494
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "bloom", "text-generation", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu", "dataset:bigscience/xP3", "arxiv:2211.01786", "license:bigscience-bloom-rail-1.0", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-09-17T10:48:39Z"
--- datasets: - bigscience/xP3 license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation inference: false widget: - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?" example_title: "zh-en sentiment" - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" example_title: "zh-zh sentiment" - text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"." example_title: "vi-en query" - text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." example_title: "fr-fr query" - text: "Explain in a sentence in Telugu what is backpropagation in neural networks." example_title: "te-en qa" - text: "Why is the sky blue?" example_title: "en-en qa" - text: "Explain to me in Traditional Chinese what is the difference between Bitcoin and Ethereum." example_title: "zh-en qa" - text: "Write a code snippet with O(log(n)) computational complexity." example_title: "code-en" - text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):" example_title: "es-en fable" - text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):" example_title: "hi-en fable" - text: "How many sides does a rectangle and heptagon have, when combined? Answer this question with some math. Ein Rechteck hat 4 Seiten. Ein Siebeneck hat 7 Seiten. In Kombination haben sie 4 + 7 = 11 Seiten. كم عدد الأضلاع التي يجمعها المربع والمثلث؟ Répondez à cette question en chinois." 
example_title: "en-de-ar-fr-zh math" model-index: - name: bloomz results: - task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 59.27 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 69.08 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 68.67 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 59.65 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 64.26 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 60.95 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 70.24 - task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 48.6 - task: type: Natural language inference dataset: type: anli name: ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 44.1 - task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 45.5 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 82.14 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 85.56 - task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 60.68 - task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 48.43 - task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.38 - task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.43 - task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 67.47 - task: type: Natural language inference 
dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.24 - task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.37 - task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 60.2 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.02 - task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.09 - task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 43.78 - task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 45.7 - task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.8 - task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.0 - task: type: Natural language inference dataset: type: xnli name: XNLI (zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 56.91 - task: type: Program synthesis dataset: type: openai_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 12.06 - type: Pass@10 value: 26.53 - type: Pass@100 value: 48.44 - task: type: Sentence completion dataset: type: story_cloze name: StoryCloze (2016) config: "2016" split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 96.26 - task: type: Sentence completion dataset: type: super_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 91.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 51.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 86.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 74.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 56.0 - task: type: Sentence 
completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 69.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 57.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 87.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 90.0 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 92.79 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 94.37 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 86.9 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 88.42 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 92.12 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 52.35 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 81.73 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 79.81 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 81.2 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 93.12 --- ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 7. 
[Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <th colspan="12">Original pretrained checkpoints. 
Not recommended.</th> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) 
at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file - **Finetuning steps:** 498 - **Finetuning tokens:** 2.09 billion - **Finetuning layout:** 72x pipeline parallel, 1x tensor parallel, 4x data parallel - **Precision:** bfloat16 ## Hardware - **CPUs:** AMD CPUs with 512GB memory per node - **GPUs:** 288 A100 80GB GPUs with 8 GPUs per node (36 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links - **Communication:** NCCL-communications network with a fully dedicated subnet ## Software - **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) - **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. # Citation ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ```
timm/davit_base.msft_in1k
timm
"2024-02-10T23:30:16Z"
1,689
2
timm
[ "timm", "pytorch", "image-classification", "dataset:imagenet-1k", "arxiv:2204.03645", "license:apache-2.0", "region:us" ]
image-classification
"2023-01-27T21:47:23Z"
--- license: apache-2.0 library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for davit_base.msft_in1k A DaViT image classification model. Trained on ImageNet-1k by paper authors. Thanks to [Fredo Guan](https://github.com/fffffgggg54) for bringing the classification backbone to `timm`. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 88.0 - GMACs: 15.5 - Activations (M): 40.7 - Image size: 224 x 224 - **Papers:** - DaViT: Dual Attention Vision Transformers: https://arxiv.org/abs/2204.03645 - **Original:** https://github.com/dingmyu/davit - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('davit_base.msft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'davit_base.msft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 96, 56, 56]) # torch.Size([1, 192, 28, 28]) # torch.Size([1, 384, 14, 14]) # torch.Size([1, 768, 7, 7] print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'davit_base.msft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled (ie.e a (batch_size, num_features, H, W) tensor output = model.forward_head(output, pre_logits=True) # output is (batch_size, num_features) tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top1_err|top5 |top5_err|param_count|img_size|crop_pct|interpolation| |---------------------|------|--------|------|--------|-----------|--------|--------|-------------| |davit_base.msft_in1k |84.634|15.366 |97.014|2.986 |87.95 |224 |0.95 |bicubic | |davit_small.msft_in1k|84.25 |15.75 |96.94 |3.06 |49.75 |224 |0.95 |bicubic | |davit_tiny.msft_in1k 
|82.676|17.324 |96.276|3.724 |28.36 |224 |0.95 |bicubic | ## Citation ```bibtex @inproceedings{ding2022davit, title={DaViT: Dual Attention Vision Transformer}, author={Ding, Mingyu and Xiao, Bin and Codella, Noel and Luo, Ping and Wang, Jingdong and Yuan, Lu}, booktitle={ECCV}, year={2022}, } ```
KoboldAI/PPO_Pygway-6b-Mix
KoboldAI
"2024-03-24T11:55:33Z"
1,689
24
transformers
[ "transformers", "pytorch", "safetensors", "gptj", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2023-03-14T19:13:35Z"
--- language: en license: apache-2.0 commercial: 'no' inference: false --- # GPT-J 6B - PPO_Pygway Mix ## Model description This is a merged model, using a weighted parameter blend strategy at a (20:20:60) ratio between the models: - [20%] - KoboldAI/GPT-J-6B-Janeway: https://huggingface.co/KoboldAI/GPT-J-6B-Janeway - [20%] - reciprocate/ppo_hh_gpt-j: https://huggingface.co/reciprocate/ppo_hh_gpt-j - [60%] - Pygmalion/Pygmalion-6b: https://huggingface.co/Pygmalion/Pygmalion-6b By their respective authors. **Warning: PPO_Pygway-6b may generate NSFW or inappropriate content due to the base models (mainly [Pygmalion/Pygmalion-6b](https://huggingface.co/Pygmalion/Pygmalion-6b)) being trained on general user logs and internet archives.** ### Intended Use: Research purposes only, intended for responsible use. Express a conversation in natural language, and PPO_Pygway will pick up on the conversational format. Try starting a two-line prompt such as: ``` Bot: "Hello, how are you?" You: "I am doing just fine, thank you." ``` Or any other topic, and the model will carry on in this back-and-forth style. ## Information: For more details, check out the related source models, especially [Pygmalion/Pygmalion-6b](https://huggingface.co/Pygmalion/Pygmalion-6b) for more information on how to utilize the chat bot formatting expected. In a similar manner to fine-tuning, merging weights does not add information but transforms it; it is therefore important to consider trade-offs. PPO_Pygway combines `ppo_hh_gpt-j`, `Janeway-6b` and `Pygmalion-6b`; all three models were blended in a two-step process using a simple weighted parameter method ``` (X*A + Y*B) ``` with A & B being the model weights, and X & Y controlling how strongly each is represented within the final value. The intent of this is to elevate the end model by borrowing the strongly represented aspects of each base model, but it may also weaken other facets of each model, which can be desirable if the base models have problematic traits that need to be worked on. The blend was done in FP32 and the output saved in FP16 for reduced storage needs. ## Limitations and biases Based on known problems with NLP technology, potential relevant factors include bias (gender, profession, race and religion). <ins>Warning: This model has a moderate NSFW bias.</ins> ### License GPT-J-6b is licensed by EleutherAI under the apache-2.0 license. All Rights Reserved.
### BibTeX entry and citation info ``` @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ### Credits To: Models involved: - https://huggingface.co/EleutherAI/gpt-j-6B - https://huggingface.co/Pygmalion/Pygmalion-6b - https://huggingface.co/reciprocate/ppo_hh_gpt-j - https://huggingface.co/KoboldAI/GPT-J-6B-Janeway Average-weights merging script credit to Concedo: - https://huggingface.co/concedo ### Related datasets and articles: PPO_HH-GPT-J-6b was trained with Proximal Policy Optimization on a variant of the Helpful/Harmless assistant-themed dataset; the specific datasets used are unknown. Listed repo datasets include: - https://huggingface.co/datasets/reciprocate/summarize_eval_ilql - https://huggingface.co/datasets/reciprocate/hh_eval_ilql PPO explained: - https://paperswithcode.com/method/ppo Potential HH-type datasets utilized: - https://huggingface.co/HuggingFaceH4 - https://huggingface.co/datasets/Anthropic/hh-rlhf No formal evaluation is available for this model at this time. It is recommended to use this model with the KoboldAI software. All feedback and comments can be directed to TeH_Venom on the KoboldAI discord.
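For readers curious what the weighted parameter method `(X*A + Y*B)` described in the Information section looks like in code, here is a hypothetical PyTorch sketch of a two-model blend. The file names and the 50/50 split are placeholders, not the exact (20:20:60) recipe used for PPO_Pygway.

```python
import torch

# Assumed: two checkpoints of the *same* architecture, saved as plain state dicts
state_a = torch.load("model_a.pt", map_location="cpu")
state_b = torch.load("model_b.pt", map_location="cpu")

x, y = 0.5, 0.5  # X and Y: how strongly each model is represented in the result

merged = {}
for name, a in state_a.items():
    b = state_b[name]
    # X*A + Y*B, computed in FP32 and stored in FP16, mirroring the description above
    merged[name] = (x * a.float() + y * b.float()).to(torch.float16)

torch.save(merged, "merged_model.pt")
```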
Monero/Manticore-13b-Chat-Pyg-Guanaco
Monero
"2023-05-27T05:32:39Z"
1,689
15
transformers
[ "transformers", "pytorch", "llama", "text-generation", "manticore", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-26T18:43:08Z"
--- tags: ["manticore"] --- Manticore-13b-Chat-Pyg with the Guanaco 13b qLoRa from TimDettmers applied
IkariDev/Athena-v3
IkariDev
"2023-10-03T14:39:22Z"
1,689
13
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-26T18:59:34Z"
--- license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/LjO8no5EzagA9qWdtYKxG.png) Experimental Athena v3 model. Use Alpaca format. Suitable for RP, ERP and general stuff. <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains fp16 files of Athena-V3. [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GGUF) [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-GPTQ) <!-- [exl2 - by AzureBlack](https://huggingface.co/AzureBlack/Athena-v2-6.0bit-exl2) --> [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v3-AWQ) [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3) <!-- [GGUF - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v3-GGUF) --> [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v3-GGUF) ## Ratings: Note: I have permission of all users to upload their ratings, i DONT screenshot random reviews without asking if i can put them here! https://snombler.neocities.org/logs#athenav3 <!-- description end --> <!-- description start --> ## Models and loras used - Athena-v2 - migtissera/Synthia-13B-v1.2 - The-Face-Of-Goonery/Huginn-13b-FP16 - PygmalionAI/pygmalion-2-13b - The-Face-Of-Goonery/LegerDemain-FP16 - chargoddard/storytime-13b - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT - zattio770/120-Days-of-LORA-v2-13B ``` Loras: [lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT(0.65) + zattio770/120-Days-of-LORA-v2-13B(0.35)](0.3) to the final model + [Athena-v2(0.70) + migtissera/Synthia-13B-v1.2(0.3)](0.5) + [The-Face-Of-Goonery/Huginn-13b-FP16(0.85) + PygmalionAI/pygmalion-2-13b](0.15)](0.40) + [The-Face-Of-Goonery/LegerDemain-FP16(0.3) chargoddard/storytime-13b(0.7)](0.10) ``` <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` HUGE thanks to [Undi95](https://huggingface.co/Undi95) for doing the merging (Recipe was my idea, he merged) To TheBloke: please if you quant this, please include [IkariDev](https://huggingface.co/IkariDev) + [Undi95](https://huggingface.co/Undi95) in all the credits/links to the creator.
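Since the card only shows the Alpaca template as static text, here is a small, model-agnostic helper (a sketch, nothing Athena-specific) for building a prompt in that format before sending it to your backend of choice:

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_alpaca_prompt(instruction: str) -> str:
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_alpaca_prompt("Write a short scene where two rivals reluctantly team up."))
```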
flpelerin/mlk-models
flpelerin
"2024-06-12T09:32:05Z"
1,689
0
null
[ "gguf", "region:us" ]
null
"2024-04-09T08:51:50Z"
Entry not found
vicgalle/Configurable-Mistral-7B
vicgalle
"2024-06-11T11:42:29Z"
1,689
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:vicgalle/configurable-system-prompt-multitask", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-11T11:35:18Z"
--- library_name: transformers license: apache-2.0 datasets: - vicgalle/configurable-system-prompt-multitask --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Lewdiculous/llama3-8B-aifeifei-1.3-GGUF-IQ-Imatrix
Lewdiculous
"2024-06-12T10:26:29Z"
1,689
2
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-06-12T04:57:20Z"
--- inference: false license: apache-2.0 --- [[Request #45]](https://huggingface.co/Lewdiculous/Model-Requests/discussions/45) <br> [aifeifei798/llama3-8B-aifeifei-1.3](https://huggingface.co/aifeifei798/llama3-8B-aifeifei-1.3) <br> This model is tailored for specific use cases; please read the original page for details. **Prompt formatting:** <br> Llama-3 **Author:** <br> "Different roles form different experts, with the core purpose of solving your actual problems. Different roles also create different gaming experiences, providing fun and entertainment." Use **KoboldCpp 1.67** or higher. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/PepJ4SGbCxCXM5WZ6a-4s.png)
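"Prompt formatting: Llama-3" refers to the standard Llama 3 Instruct chat format. A minimal sketch of assembling such a prompt by hand is below; chat frontends such as KoboldCpp can usually apply an equivalent instruct preset for you.

```python
def llama3_prompt(system: str, user: str) -> str:
    # Standard Llama 3 Instruct turn structure; the model continues after the assistant header
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a cheerful roleplay partner.", "Hi! How is your day going?"))
```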
Nexesenex/TeeZee_Kyllene-Yi-34B-v1.1-iMat.GGUF
Nexesenex
"2024-04-22T02:20:47Z"
1,688
17
null
[ "gguf", "region:us" ]
null
"2024-01-28T08:01:02Z"
Quants with iMatrix for: https://huggingface.co/TeeZee/Kyllene-34B-v1.1

Non-iMatrix quants (more choice in higher bitrate quants): https://huggingface.co/TeeZee/Kyllene-34B-v1.1-GGUF/tree/main

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6451b24dc5d273f95482bfa4/dNJECS98SGzQBSNEM3lsS.jpeg)

---

TeeZee's Kyllene 34B v1.1 is one of the best Yi-34B merges around, alongside those of BruceTheMoose. But it has one thing that distinguishes it: it uses Gryphe's MergeMonster as a tool to trim out the GPTisms, Yisms, and Llamaisms and give a more natural output.

The clearing of the problematic GPTisms, Llamaisms, and Yiisms specified to MergeMonster is noticeable. It's as if the model is freed of these sequences, which act as a kind of "EOS chain of tokens" in many models, in the sense that they conclude many outputs in an unwanted way.

It's quite a step in the right direction and should become standard practice. It makes me wonder about the future, when we'll get Miqu 70b models properly finetuned with the best datasets AND with the Mistralisms trimmed out as well.

---

Available quants:

- Full offload possible on 48GB VRAM with a huge context size: Q8_0
- Full offload possible on 36GB VRAM with a huge context size: Q5_K_S
- Full offload possible on 24GB VRAM with a big to huge context size (from 12288 with Q4_K_M, for example): Q4_K_M, Q4_K_S, Q3_K_M
- Full offload possible on 16GB VRAM with a decent context size: IQ3_XXS SOTA (which is equivalent to a Q3_K_S with more context!), Q2_K, Q2_K_S
- Full offload possible on 12GB VRAM with a decent context size: IQ2_XS SOTA; lower quality: IQ2_XXS SOTA
- Full offload maybe possible on 8GB VRAM with a small context size: IQ1_S revision "even better" (b2404) (or v5). All my IQ1_S quants from 13/03/2024 onward use this new IQ1_S quantization base.
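As a rough illustration of the VRAM guidance above (not part of the original card), a quant can be loaded with llama-cpp-python, choosing the context size and GPU offload to match your card; the file name and numbers below are assumptions.

```python
# Minimal sketch (not from the original card): full GPU offload of a Kyllene
# GGUF quant with llama-cpp-python. File name, context size and offload value
# are illustrative assumptions -- tune them to your VRAM as described above.
from llama_cpp import Llama

llm = Llama(
    model_path="Kyllene-34B-v1.1-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=12288,       # context size, per the 24GB VRAM guidance above
    n_gpu_layers=-1,   # -1 offloads every layer to the GPU (full offload)
)

out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```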
--- The merge parameters and logs are in the repo : https://huggingface.co/TeeZee/Kyllene-34B-v1.1/tree/main --- After iMatrixing and quantizing Kyllene, I benched her thoroughly, and she proved herself worthy : Q4_K_S : - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Hellaswag,85,,400,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Hellaswag,85.2,,1000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Hellaswag,84.6,,2000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Hellaswag_Bin,81,,400,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Hellaswag_Bin,83.5,,1000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Hellaswag_Bin,82.95,,2000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Arc-Challenge,61.53846154,,299,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Arc-Easy,80.35087719,,570,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,MMLU,43.13099042,,313,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Thruthful-QA,35.00611995,,817,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,Winogrande,79.3212,,1267,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_S.gguf,-,wikitext,5.1703,512,512,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, Q4_K_M : - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Hellaswag,84.75,,400,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Hellaswag,85.6,,1000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Hellaswag,84.9,,2000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Hellaswag_Bin,81,,400,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Hellaswag_Bin,83.4,,1000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Hellaswag_Bin,82.9,,2000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Arc-Challenge,60.53511706,,299,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Arc-Easy,80.52631579,,570,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,MMLU,42.49201278,,313,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Thruthful-QA,34.39412485,,817,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,Winogrande,79.4791,,1267,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,wikitext,5.1679,512,512,2024-01-28 
00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,wikitext,4.3623,4096,4096,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q4_K_M.gguf,-,wikitext,4.4061,8192,8192,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, Q5_K_S : - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag,85.25,,400,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag,85.6,,1000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag,84.95,,2000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag_Bin,81.25,,400,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag_Bin,83.3,,1000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Hellaswag_Bin,83,,2000,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Arc-Challenge,60.20066890,,299,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Arc-Easy,81.05263158,,570,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,MMLU,42.17252396,,313,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Thruthful-QA,36.96450428,,817,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,Winogrande,79.5580,,1267,2024-01-28 05:40:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, - Kyllene-34B-v1.1-b1989-iMat-c32_ch3250-Q5_K_S.gguf,-,wikitext,5.1806,512,512,2024-01-28 00:00:00,,34b,Yi,200000,,,GGUF,TeeZee,Nexesenex, ----- IQ1_S V5 : - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,Hellaswag,70.3,,1000,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,Arc-Challenge,40.46822742,299,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,Arc-Easy,62.28070175,,570,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,MMLU,32.90734824,,313,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,Thruthful-QA,29.37576499,,817,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,Winogrande,68.7451,,1267,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,wikitext,9.8761,512,512,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, - TeeZee_Kyllene-34B-v1.1-b2409-iMat-c32_ch3250-IQ1_S_v5.gguf,-,wikitext,7.8954,4096,4096,2024-03-12 00:00:00,,34b,Yi,2000000,,,GGUF,TeeZee,Nexesenex, ----- Enjoy these quants!
PassionFriend/5CFkJDzLk9b7RuUx64QypPv7UNhWv7Ab9meywSza41D4yg7R_vgg
PassionFriend
"2024-03-01T06:38:47Z"
1,688
0
keras
[ "keras", "region:us" ]
null
"2024-02-10T12:51:54Z"
Entry not found
NLPark/AnFeng_v3_Avocet
NLPark
"2024-05-01T12:41:27Z"
1,688
0
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-30T14:42:55Z"
--- language: - en pipeline_tag: text-generation inference: false library_name: transformers license: cc-by-nc-nd-4.0 --- # AnFeng **~30B**, SFT... **Chinese, English** Test 0 of all. Released as an early preview of our v3 LLMs. The v3 series covers the "Shi-Ci", "AnFeng" and "Cecilia" LLM products. The sizes are labelled from small to large: "Nano", "Leap", "Pattern", "Avocet", "Robin", "Kestrel".
Helsinki-NLP/opus-mt-lv-en
Helsinki-NLP
"2023-08-16T12:00:49Z"
1,687
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "lv", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-03-02T23:29:04Z"
--- tags: - translation license: apache-2.0 --- ### opus-mt-lv-en * source languages: lv * target languages: en * OPUS readme: [lv-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lv-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lv-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | newsdev2017-enlv.lv.en | 29.9 | 0.587 | | newstest2017-enlv.lv.en | 22.1 | 0.526 | | Tatoeba.lv.en | 53.3 | 0.707 |
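The card lists benchmarks but no usage snippet; a minimal sketch (not part of the original card) of running this Latvian-to-English model with the standard Marian classes in transformers could look like this:

```python
# Minimal sketch (not from the original card): translating Latvian to English
# with the Marian checkpoint described above.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-lv-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Labdien, kā jums klājas?"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```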
mrm8488/bert2bert_shared-german-finetuned-summarization
mrm8488
"2023-05-10T12:29:44Z"
1,687
14
transformers
[ "transformers", "pytorch", "safetensors", "encoder-decoder", "text2text-generation", "summarization", "news", "de", "dataset:mlsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- tags: - summarization - news language: de datasets: - mlsum widget: - text: 'Wie geht man nach schrecklichen Ereignissen ambesten auf die Ängste und Sorgen von Kindern ein?Therapeuten haben eine klare Botschaft. Die Weltist voller Gefahren, Verbrechen und Schrecken -Krieg, Terrorismus, Umweltzerstörung und eben auchKindesmissbrauch. Soll man mit Kindern darüberreden, und wie? Die Antwort hängt auch vom Alterdes Kindes ab. Kinder, gerade kleine Kinder,brauchen Sicherheit, man muss sie nicht mitabstrakten Bedrohungen konfrontieren, die sieohnehin noch nicht ganz verstehen können. Ihreeigenen Ängste sollten Eltern lieber bei sichbehalten, raten Psychologen. Etwas anderes ist es,wenn Kinder schreckliche Ereignisse wie denaktuellen Fall in München mitbekommen. Dann sollteman natürlich auf die Ängste und Sorgen der Kindereingehen und mit ihnen sprechen. Man sollte aberklarmachen: Ja, es gibt kranke Menschen, die Bösestun, aber das ist die Ausnahme. Der Verbrecher istgefasst, er läuft nicht mehr frei herum,Polizisten passen auf. Die Botschaft sollte sein:Das ist nicht nah an dir dran, das bedroht dichnicht, empfehlen Familientherapeuten zum Umgangmit Ängsten von Kindern. Natürlich können auchVerhaltensregeln nicht schaden: Nein sagen, lautwerden und nicht mit Fremden mitgehen. AuchBilderbücher können helfen, solches Verhalten frühzu vermitteln, etwa "Das große und das kleineNein!" von Gisela Braun und Dorothee Wolters oder"Ich geh doch nicht mit Jedem mit!" von DagmarGeisler. Aber auch wenn jeder Vater, jede Mutterbeim Gedanken an derartige Verbrechen insSchlottern kommt: Die Statistik zeigt eindeutig,dass solche Fälle sehr selten sind.Kindesmissbrauch findet vor allem im nahensozialen Umfeld statt, in der Familie, in Vereinenoder bei älteren vermeintlichen "Freunden". Werseine Kinder davor beschützen will, muss ihnenzuhören, sie ernst nehmen, Fragen stellen, genauhinschauen.' --- # German BERT2BERT fine-tuned on MLSUM DE for summarization ## Model [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) (BERT Checkpoint) ## Dataset **MLSUM** is the first large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, **German**, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community. We report cross-lingual comparative analyses based on state-of-the-art systems. These highlight existing biases which motivate the use of a multi-lingual dataset. 
[MLSUM de](https://huggingface.co/datasets/viewer/?dataset=mlsum) ## Results |Set|Metric| # Score| |----|------|------| | Test |Rouge2 - mid -precision | **33.04**| | Test | Rouge2 - mid - recall | **33.83**| | Test | Rouge2 - mid - fmeasure | **33.15**| ## Usage ```python import torch from transformers import BertTokenizerFast, EncoderDecoderModel device = 'cuda' if torch.cuda.is_available() else 'cpu' ckpt = 'mrm8488/bert2bert_shared-german-finetuned-summarization' tokenizer = BertTokenizerFast.from_pretrained(ckpt) model = EncoderDecoderModel.from_pretrained(ckpt).to(device) def generate_summary(text): inputs = tokenizer([text], padding="max_length", truncation=True, max_length=512, return_tensors="pt") input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) output = model.generate(input_ids, attention_mask=attention_mask) return tokenizer.decode(output[0], skip_special_tokens=True) text = "Your text here..." generate_summary(text) ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
bofenghuang/vigogne-13b-instruct
bofenghuang
"2023-07-05T10:15:34Z"
1,687
13
transformers
[ "transformers", "pytorch", "llama", "text-generation", "LLM", "fr", "license:openrail", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-03-26T22:08:28Z"
--- license: openrail language: - fr pipeline_tag: text-generation library_name: transformers tags: - llama - LLM inference: false --- <p align="center" width="100%"> <img src="https://huggingface.co/bofenghuang/vigogne-13b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;"> </p> # Vigogne-13B-Instruct: A French Instruction-following LLaMA Model Vigogne-13B-Instruct is a LLaMA-13B model fine-tuned to follow the French instructions. For more information, please visit the Github repo: https://github.com/bofenghuang/vigogne **Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes. ## Changelog All versions are available in branches. - **V1.0**: Initial release, trained on the translated Stanford Alpaca dataset. - **V1.1**: Improved translation quality of the Stanford Alpaca dataset. - **V2.0**: Expanded training dataset to 224k for better performance. - **V3.0**: Further expanded training dataset to 262k for improved results. ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig from vigogne.preprocess import generate_instruct_prompt model_name_or_path = "bofenghuang/vigogne-13b-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto") user_query = "Expliquez la différence entre DoS et phishing." prompt = generate_instruct_prompt(user_query) input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device) input_length = input_ids.shape[1] generated_outputs = model.generate( input_ids=input_ids, generation_config=GenerationConfig( temperature=0.1, do_sample=True, repetition_penalty=1.0, max_new_tokens=512, ), return_dict_in_generate=True, ) generated_tokens = generated_outputs.sequences[0, input_length:] generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True) print(generated_text) ``` You can also infer this model by using the following Google Colab Notebook. <a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Limitations Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
bavest/fin-llama-33b-merged
bavest
"2023-11-29T09:29:51Z"
1,687
15
transformers
[ "transformers", "pytorch", "llama", "text-generation", "finance", "llm", "trading", "dataset:bavest/fin-llama-dataset", "license:gpl", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-02T22:47:37Z"
--- license: gpl datasets: - bavest/fin-llama-dataset tags: - finance - llm - llama - trading ---

# FIN-LLAMA

> Efficient Finetuning of Quantized LLMs for Finance

[Adapter Weights](https://huggingface.co/bavest/fin-llama-33b-merged) | [Dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset)

## Installation

To load models in 4-bit with transformers and bitsandbytes, you have to install accelerate and transformers from source and make sure you have the latest version of the bitsandbytes library (0.39.0).

```bash
pip3 install -r requirements.txt
```

### Other dependencies

If you want to finetune the model on a new instance, you can run `setup.sh` to install the Python and CUDA packages.

```bash
bash scripts/setup.sh
```

## Finetuning

```bash
bash script/finetune.sh
```

## Usage

Quantization parameters are controlled from the `BitsAndBytesConfig`:

- Loading in 4 bits is activated through `load_in_4bit`.
- The datatype used for the linear layer computations is set with `bnb_4bit_compute_dtype`.
- Nested quantization is activated through `bnb_4bit_use_double_quant`.
- The datatype used for quantization is specified with `bnb_4bit_quant_type`. Note that there are two supported quantization datatypes: `fp4` (four-bit float) and `nf4` (normal four-bit float). The latter is theoretically optimal for normally distributed weights and we recommend using `nf4`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

pretrained_model_name_or_path = "bavest/fin-llama-33b-merged"
model = AutoModelForCausalLM.from_pretrained(
    pretrained_model_name_or_path=pretrained_model_name_or_path,
    load_in_4bit=True,
    device_map='auto',
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(  # 4-bit NF4 quantization with nested quantization
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type='nf4'
    ),
)

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)

question = "What is the market cap of apple?"
input = ""  # context if needed

prompt = f""" A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's question. '### Instruction:\n{question}\n\n### Input:{input}\n""\n\n### Response: """

input_ids = tokenizer.encode(prompt, return_tensors="pt").to('cuda:0')

with torch.no_grad():
    generated_ids = model.generate(
        input_ids,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
        max_length=128
    )

generated_text = tokenizer.decode(
    [el.item() for el in generated_ids[0]], skip_special_tokens=True
)
```

## Dataset for FIN-LLAMA

The dataset is released under bigscience-openrail-m. You can find the dataset used to train FIN-LLAMA models on HF at [bavest/fin-llama-dataset](https://huggingface.co/datasets/bavest/fin-llama-dataset).

## Known Issues and Limitations

Here is a list of known issues and bugs. If your issue is not reported here, please open a new issue and describe the problem. See [QLORA](https://github.com/artidoro/qlora) for any other limitations.

1. 4-bit inference is slow. Currently, our 4-bit inference implementation is not yet integrated with the 4-bit matrix multiplication.
2. Currently, using `bnb_4bit_compute_type='fp16'` can lead to instabilities.
3. Make sure that `tokenizer.bos_token_id = 1` to avoid generation issues.

## Acknowledgements

We also thank Meta for releasing the LLaMA models without which this work would not have been possible.
This repo builds on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), [QLORA](https://github.com/artidoro/qlora), [Chinese-Guanaco](https://github.com/jianzhnie/Chinese-Guanaco/tree/main) and [LMSYS FastChat](https://github.com/lm-sys/FastChat) repos.

## License and Intended Use

We release the resources associated with QLoRA finetuning in this repository under the GPL3 license. In addition, we release the FIN-LLAMA model family for base LLaMA model sizes of 7B, 13B, 33B, and 65B. These models are intended for purposes in line with the LLaMA license and require access to the LLaMA models.

## Prompts

### Act as an Accountant

> I want you to act as an accountant and come up with creative ways to manage finances. You'll need to consider budgeting, investment strategies and risk management when creating a financial plan for your client. In some cases, you may also need to provide advice on taxation laws and regulations in order to help them maximize their profits. My first suggestion request is "Create a financial plan for a small business that focuses on cost savings and long-term investments".

## Paged Optimizer

You can access the paged optimizer with the argument `--optim paged_adamw_32bit`.

## Cite

```tex
@misc{Fin-LLAMA,
  author = {William Todt, Ramtin Babaei, Pedram Babaei},
  title = {Fin-LLAMA: Efficient Finetuning of Quantized LLMs for Finance},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Bavest/fin-llama}},
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_bavest__fin-llama-33b-merged)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.76 |
| ARC (25-shot) | 65.02 |
| HellaSwag (10-shot) | 86.2 |
| MMLU (5-shot) | 58.73 |
| TruthfulQA (0-shot) | 49.75 |
| Winogrande (5-shot) | 80.03 |
| GSM8K (5-shot) | 16.22 |
| DROP (3-shot) | 6.36 |
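As a rough illustration of the paged optimizer flag mentioned above (not from the original repo), the same choice can be expressed through Hugging Face `TrainingArguments`; the other values below are placeholders.

```python
# Minimal sketch (not from the original repo): selecting the paged AdamW
# optimizer through Hugging Face TrainingArguments, equivalent to passing
# --optim paged_adamw_32bit on the command line. Other values are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./fin-llama-finetune",   # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
    optim="paged_adamw_32bit",           # the paged optimizer discussed above
)
```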
Open-Orca/LlongOrca-13B-16k
Open-Orca
"2023-08-21T05:15:08Z"
1,687
14
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "arxiv:2306.02707", "arxiv:2301.13688", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-16T23:03:19Z"
--- license: llama2 language: - en library_name: transformers pipeline_tag: text-generation datasets: - Open-Orca/OpenOrca --- <p><h1>🐋 The Second Llong Context Orca! 🐋</h1></p> ![OpenOrca Logo](https://huggingface.co/datasets/Open-Orca/OpenOrca/resolve/main/OpenOrcaLogo.png "OpenOrca Logo") # OpenOrca - LlongOrca - 13B - 16k We have used our own [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) to fine-tune on top of [LLongMA-2-13b-16k](https://huggingface.co/conceptofmind/LLongMA-2-13b-16k). This dataset is our attempt to reproduce the dataset generated for Microsoft Research's [Orca Paper](https://arxiv.org/abs/2306.02707). We use [OpenChat](https://huggingface.co/openchat) packing, trained with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl). This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our [OpenOrcaxOpenChat-Preview2-13B model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B). HF Leaderboard evals place this model as #1 for all 13B long context models at release time. We achieve >112% the performance of the base LLongMA2-13b-16k model we tuned on top of. As well, we preserve >98% of the performance of the OpenOrcaxOpenChat-Preview2-13B model we share datasets with, while extending the context to 16k. We did this training as part of testing setup of our H100 cluster. Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) Many thanks to @EnricoShippole, @theemozilla, and @kaiokendev1 for the fine work on creating the LlongMA-2-13b-16k model this was trained on top of! We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners. We will also give sneak-peak announcements on our Discord, which you can find here: https://AlignmentLab.ai # Prompt Template We used [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this. ## Example Prompt Exchange ``` <|im_start|>system You are LlongOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers! <|im_end|> <|im_start|>user How are you<|im_end|> <|im_start|>assistant I am doing well!<|im_end|> <|im_start|>user How are you now?<|im_end|> ``` # Evaluation We have evaluated using the methodology and tools for the HuggingFace Leaderboard, and find that we have significantly improved upon the base long context model. We reach >112% of LLongMA2-13B-16k performance. ## HuggingFaceH4 Open LLM Leaderboard Performance We have run our own tests using parameters matching the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) evals. We preserve >98% of OpenOrcaxOpenChat-Preview2-13B performance and are #1 on the leaderboard for long context 13B models at release time. We have >103% performance of the next 16k model (vicuna-13b-v1.5-16k). 
As well, we expect the context extension techniques from LLongMA to be more robust than other 16k context models available.

![LlongOrca 13B 16k HF Leaderboard Internal Performance](https://huggingface.co/Open-Orca/LlongOrca-13B-16k/resolve/main/Images/LlongOrca13BHFLeaderboard.png "HuggingFace Leaderboard Internal Performance")

## GPT4ALL Leaderboard Performance

We find we score higher than all non-OpenOrca models on the GPT4ALL leaderboard, while preserving ~98.7% of our OpenOrcaxOpenChat-Preview2-13B performance.

![LLongOrca 13B 16k GPT4ALL Leaderboard Internal Performance](https://huggingface.co/Open-Orca/LlongOrca-13B-16k/resolve/main/Images/LlongOrca13BG4A.png "GPT4ALL Leaderboard Internal Performance")

# Dataset

We used a curated, filtered selection of most of the GPT-4 augmented data from our OpenOrca dataset, which aims to reproduce the Orca Research Paper dataset. Further details of our curation practices will be forthcoming with our full model releases.

# Training

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

We trained with 8x H100 GPUs for 10 hours, completing 4 epochs of full fine tuning on our dataset in one training run. Commodity cost was ~$300.

# Citation

```bibtex
@software{dale2023llongorca13b,
  title = {LlongOrca13B: Llama2-13B Model Instruct-tuned for Long Context on Filtered OpenOrcaV1 GPT-4 Dataset},
  author = {Alpin Dale and Wing Lian and Bleys Goodson and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/LlongOrca-13B-16k}},
}
@software{openchat,
  title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
  author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
  doi = {10.5281/zenodo.8105775},
  url = {https://github.com/imoneoi/openchat},
  version = {pre-release},
  year = {2023},
  month = {7},
}
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V.
Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, } ```
snorkelai/Snorkel-Mistral-PairRM-DPO
snorkelai
"2024-05-13T01:20:42Z"
1,687
104
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset", "arxiv:2305.18290", "arxiv:2306.02561", "arxiv:2401.10020", "arxiv:2312.11456", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-19T16:16:48Z"
--- license: apache-2.0 datasets: - snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset pipeline_tag: text-generation --- Read our release blog here: [Snorkel AI Blog](https://snorkel.ai/new-benchmark-results-demonstrate-value-of-snorkel-ai-approach-to-llm-alignment/) You can try our models on the [Together AI](https://api.together.xyz/playground/chat/snorkelai/Snorkel-Mistral-PairRM-DPO) playground: https://api.together.xyz/playground/chat/snorkelai/Snorkel-Mistral-PairRM-DPO. This model is optimized for chat purposes. Have fun! Our model is also available through [Together AI API](https://www.together.ai/solutions#what-we-offer) with the following model API string: `snorkelai/Snorkel-Mistral-PairRM-DPO`. Special thanks to the [Together AI](https://www.together.ai/) team for adding our model to their endpoints. We also provide an HF inference endpoint for everyone to test the model. It may initially take a few minutes to activate, but will eventually operate at the standard speed of HF's 7B model text inference endpoint. The speed of inference depends on HF endpoint performance and is not related to Snorkel offerings. This endpoint is designed for initial trials, not for ongoing production use. ``` import requests API_URL = "https://t1q6ks6fusyg1qq7.us-east-1.aws.endpoints.huggingface.cloud" headers = { "Accept" : "application/json", "Content-Type": "application/json" } def query(payload): response = requests.post(API_URL, headers=headers, json=payload) return response.json() output = query({ "inputs": "[INST] Recommend me some Hollywood movies [/INST]", "parameters": {} }) ``` ### Dataset: Training dataset: [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset) We utilize ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**. ### Methodology: 1. Generate five response variations for each prompt from a subset of 20,000 using the LLM - to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). 2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) for response reranking. 3. Update the LLM by applying Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses. 4. Use this LLM as the base model for the next iteration, repeating three times in total. This overview provides a high-level summary of our approach. We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog.](https://snorkel.ai/blog/) The prompt format follows the Mistral model: ```[INST] {prompt} [/INST]``` ### Training recipe: - The provided data is formatted to be compatible with the Hugging Face's [Zephyr recipe](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta). We executed the n_th DPO iteration using the "train/test_iteration_{n}". ### Key Premises: - **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment. - **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses. - **Alignment Recipe**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes. 
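A rough sketch of one round of the recipe described in the Methodology section above (not Snorkel's released code); `generate_responses` and `pairrm_scores` are hypothetical helpers standing in for Mistral-7B-Instruct-v0.2 generation and PairRM reranking.

```python
# Minimal sketch (not Snorkel's released code) of one iteration of the recipe
# described above: sample N responses per prompt, rerank them with a reward
# model, and keep the best/worst pair as DPO (chosen, rejected) data.
# `generate_responses` and `pairrm_scores` are hypothetical helpers standing in
# for Mistral-7B-Instruct generation and PairRM reranking.

def build_dpo_pairs(prompts, generate_responses, pairrm_scores, n=5):
    pairs = []
    for prompt in prompts:
        candidates = generate_responses(prompt, n=n)   # N sampled responses
        scores = pairrm_scores(prompt, candidates)     # reward-model scores
        ranked = sorted(zip(scores, candidates), reverse=True)
        pairs.append({
            "prompt": prompt,
            "chosen": ranked[0][1],     # top-ranked response
            "rejected": ranked[-1][1],  # bottom-ranked response
        })
    return pairs

# The resulting pairs would then feed a DPO update (e.g. with a DPO trainer
# implementation), and the updated model becomes the generator for the next
# iteration -- repeated three times in the description above.
```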
### Applications:

Unlike our customers, who have very specific use cases to align LLMs to, the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow user instructions. With this demonstration, we focus on the general approach to alignment. Thus, we use a general-purpose reward model - the performant [PairRM model](https://huggingface.co/llm-blender/PairRM). We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model as our base LLM.

If you are interested in building your **specialized internal reward models that reflect your enterprise's needs**, please contact the Snorkel AI team or consider attending our [**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/) to learn more about "Programmatically scaling human preferences and alignment in GenAI".

### Result:

On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):

- The base model, [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), scored **14.72**.

After applying the above methodology:

- This model scored **30.22** - ranked 3rd and the highest for an open-source base model at the time of publication.
- When post-processing the model outputs with PairRM-best-of-16, which involved generating 16 responses and selecting the highest-scoring response by PairRM, we scored **34.86** - ranked 2nd. The best model on the leaderboard is "gpt-4-turbo", which is also the judge of optimal responses.

We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of capabilities and performances of LLMs. However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark. Moving forward, we anticipate further contributions from the community regarding new alignment axes, and plan to conduct evaluations using other appropriate benchmarks.

The Alpaca-Eval 2.0 evaluator, "gpt-4-turbo," exhibits a bias towards longer responses. This tendency might also be present in our chosen reward model, resulting in our model producing lengthier responses after DPO iterations, which may be among the factors contributing to our higher rank on the leaderboard. Future work could include measures to control response length and other relevant metrics.

### Limitations:

The model is a quick demonstration that LLMs can be programmatically aligned using smaller specialized reward models. It does not have any moderation mechanisms. We look forward to continuing to engage with the research community and our customers to explore optimal methods for getting models to respect guardrails, allowing for deployment in environments requiring moderated outputs.

### Contemporary Work and Acknowledgements:

- The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
- The author of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach - The author of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model - The HuggingFace team for the DPO implementation under [The Alignment Handbook](https://github.com/huggingface/alignment-handbook) - We would also like to acknowledge contemporary work published independently on arXiv on 2024-01-18 by Meta & NYU (Yuan, et al) in a paper called [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020), which proposes a similar general approach for creating alignment pairs from a larger set of candidate responses, but using the LLM as the reward model. While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most enterprise applications of LLMs for specific use cases, which is why we focus on the use of external reward models. - Also, we would like to acknowledge another concurrent work that has a similar approach but focuses more on the theoretical aspect of the iterative DPO process: [Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint](https://arxiv.org/pdf/2312.11456.pdf) on 2024-01-28 (Xiong, et al). ### GGUF version Snorkel-Mistral-PairRM-DPO GGUF model version: from [andrew-cartwheel](https://huggingface.co/andrew-cartwheel/snorkel-mistral-pairRM-DPO-q8_0.gguf) or [brittlewis12](https://huggingface.co/brittlewis12/Snorkel-Mistral-PairRM-DPO-GGUF). ExllamaV2 quants model version: from [bartowski](https://huggingface.co/bartowski/Snorkel-Mistral-PairRM-DPO-exl2). Thanks to the mentioned community members for providing the GGUF model versions. ### The Snorkel AI Team Hoang Tran, Chris Glaze, Braden Hancock If you found this work useful, feel free to cite [our work](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO/): ``` @techreport{viethoangtranduong, author = {Tran, Hoang and Glaze, Chris, and Hancock, Braden}, title = {Iterative DPO Alignment}, institution = {Snorkel AI}, year = {2023}, } ```
cognitivecomputations/WizardLM-33B-V1.0-Uncensored
cognitivecomputations
"2024-03-04T16:02:30Z"
1,686
59
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-24T11:08:38Z"
--- language: - en license: other datasets: - ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split model-index: - name: WizardLM-33B-V1.0-Uncensored results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 63.65 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-33B-V1.0-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 83.84 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-33B-V1.0-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 59.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-33B-V1.0-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 56.8 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-33B-V1.0-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-33B-V1.0-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 18.65 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-33B-V1.0-Uncensored name: Open LLM Leaderboard --- This is a retraining of https://huggingface.co/WizardLM/WizardLM-30B-V1.0 with a filtered dataset, intended to reduce refusals, avoidance, and bias. Note that LLaMA itself has inherent ethical beliefs, so there's no such thing as a "truly uncensored" model. But this model will be more compliant than WizardLM/WizardLM-7B-V1.0. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. Like WizardLM/WizardLM-30B-V1.0, this model is trained with Vicuna-1.1 style prompts. ``` You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute! 
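A minimal generation sketch (not from the original card) using the Vicuna-1.1 style prompt shown above; the exact whitespace around USER/ASSISTANT is an assumption, and a 33B model in fp16 needs substantial GPU memory (quantized loading is a common alternative).

```python
# Minimal sketch (not from the original card): building the Vicuna-1.1 style
# prompt shown above and generating with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/WizardLM-33B-V1.0-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

user_prompt = "Explain the difference between TCP and UDP."
prompt = f"You are a helpful AI assistant.\n\nUSER: {user_prompt}\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```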
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-33B-V1.0-Uncensored) | Metric | Value | |-----------------------|---------------------------| | Avg. | 54.41 | | ARC (25-shot) | 63.65 | | HellaSwag (10-shot) | 83.84 | | MMLU (5-shot) | 59.36 | | TruthfulQA (0-shot) | 56.8 | | Winogrande (5-shot) | 77.66 | | GSM8K (5-shot) | 18.65 | | DROP (3-shot) | 20.89 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-33B-V1.0-Uncensored) | Metric |Value| |---------------------------------|----:| |Avg. |59.99| |AI2 Reasoning Challenge (25-Shot)|63.65| |HellaSwag (10-Shot) |83.84| |MMLU (5-Shot) |59.36| |TruthfulQA (0-shot) |56.80| |Winogrande (5-shot) |77.66| |GSM8k (5-shot) |18.65|
zoukagh/BEThiz-spanish-bert-base-uncased
zoukagh
"2024-02-27T10:53:18Z"
1,686
1
transformers
[ "transformers", "pytorch", "tf", "bert", "fill-mask", "es", "base_model:zoukagh/bert-base-iz-spain-uncased", "doi:10.57967/hf/1547", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-08-10T09:08:34Z"
--- base_model: zoukagh/bert-base-iz-spain-uncased model-index: - name: bert-base-iz-spain-uncased results: [] widget_old: - text: "cada año luchamos por mejor calidada vida en las calles y ayer nosotros lucha[MASK] por nuestros derechos." - text: >- ayer nosotros [MASK] una carta del gobierno donde nos indican que tenemos que migrar . - text: protocolo de mensajeria mas usado es el [MASK]. - text: El español es un idioma muy [MASK] en el mundo. - text: es primero el huevo antes que la [MASK] - text: >- En año pasado viaje a sur de españa y conoci a maria y jose , son una pareja que llevan 10 años [MASK] en en un pueblo de jaen. - text: la temperatura es mejor en las [MASK] que en la ciudad. - text: en verano es mejor estar en la [MASK] que en la ciudad. - text: >- Anoche, en la fiesta de Javier, conocí a alguien que viene de Barcelona y [MASK] en Londres porque quería perfeccionar su inglés. - text: la capital de españa es [MASK] - text: la capital de marruecos es [MASK] - text: la capital de portugal es [MASK] - text: la capital de egipto es el [MASK] - text: كيف نثق بالتاريخ إذا كان الحاضر يتم تزويره أمام أعيننا! language: - es ---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-iz-spain-uncased

This model was trained on a large text corpus from Wikipedia and CC100. It is a fine-tuned version of [zoukagh/bert-base-iz-spain-uncased](https://huggingface.co/zoukagh/bert-base-iz-spain-uncased) on an unknown dataset.

It achieves the following results on the evaluation set:

- Train Loss: 0.1302
- Steps: 2M 👷🏗️🚧

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

### Training results

### Framework versions

- TensorFlow 2.12.0
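A minimal usage sketch (not part of the original card), mirroring the Spanish widget examples above with the fill-mask pipeline:

```python
# Minimal sketch (not from the original card): querying the model with the
# fill-mask pipeline, using one of the Spanish widget examples above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="zoukagh/BEThiz-spanish-bert-base-uncased")

for prediction in fill_mask("la capital de españa es [MASK]"):
    print(prediction["token_str"], round(prediction["score"], 3))
```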
BEE-spoke-data/verysmol_llama-v11-KIx2
BEE-spoke-data
"2024-01-10T18:01:16Z"
1,686
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "dataset:BEE-spoke-data/knowledge-inoc-concat-v1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-20T02:16:20Z"
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true temperature: 0.85 repetition_penalty: 1.35 no_repeat_ngram_size: 5 eta_cutoff: 0.001 renormalize_logits: true widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: Kennesaw State University is a public example_title: Kennesaw State University - text: >- Bungie Studios is an American video game developer. They are most famous for developing the award winning Halo series of video games. They also made Destiny. The studio was founded example_title: Bungie - text: The Mona Lisa is a world-renowned painting created by example_title: Mona Lisa - text: >- The Harry Potter series, written by J.K. Rowling, begins with the book titled example_title: Harry Potter Series - text: >- Question: I have cities, but no houses. I have mountains, but no trees. I have water, but no fish. What am I? Answer: example_title: Riddle - text: The process of photosynthesis involves the conversion of example_title: Photosynthesis - text: >- Jane went to the store to buy some groceries. She picked up apples, oranges, and a loaf of bread. When she got home, she realized she forgot example_title: Story Continuation - text: >- Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and another train leaves Station B at 10:00 AM and travels at 80 mph, when will they meet if the distance between the stations is 300 miles? To determine example_title: Math Problem - text: In the context of computer programming, an algorithm is example_title: Algorithm Definition pipeline_tag: text-generation datasets: - BEE-spoke-data/knowledge-inoc-concat-v1 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # verysmol_llama-v11-KIx2 ## Model description This model is a fine-tuned version of v10 (refinedweb-3m dedup) further trained for 2 epochs on KI dataset. 
It achieves the following results on the evaluation set: - Loss: 2.8876 - Accuracy: 0.4502 --- ## evals `hf-causal-experimental (pretrained=pszemraj/verysmol_llama-v11-KIx2,revision=main,trust_remote_code=True,dtype='float'), limit: None, provide_description: False, num_fewshot: 0, batch_size: 16` | Task |Version| Metric | Value | |Stderr| |--------------|------:|--------|-------:|---|-----:| |arc_easy | 0|acc | 0.4024|± |0.0101| | | |acc_norm| 0.3788|± |0.0100| |boolq | 1|acc | 0.6199|± |0.0085| |lambada_openai| 0|ppl |111.9939|± |4.6906| | | |acc | 0.2354|± |0.0059| |openbookqa | 0|acc | 0.1440|± |0.0157| | | |acc_norm| 0.2760|± |0.0200| |piqa | 0|acc | 0.5713|± |0.0115| | | |acc_norm| 0.5664|± |0.0116| |winogrande | 0|acc | 0.5201|± |0.0140| | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.1971|± |0.0116| | | |acc_norm|0.2278|± |0.0123| | Task |Version| Metric |Value | |Stderr| |---------|------:|--------|-----:|---|-----:| |hellaswag| 0|acc |0.2618|± |0.0088| | | |acc_norm|0.2797|± |0.0090| | Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.2509|± |0.0152| | | |mc2 |0.4492|± |0.0156| --- ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00014 - train_batch_size: 16 - eval_batch_size: 16 - seed: 17514 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-06 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.0681 | 0.03 | 150 | 3.0689 | 0.4259 | | 3.0113 | 0.07 | 300 | 3.0433 | 0.4278 | | 2.9468 | 0.1 | 450 | 3.0362 | 0.4288 | | 3.0162 | 0.13 | 600 | 3.0148 | 0.4326 | | 2.9531 | 0.17 | 750 | 3.0012 | 0.4341 | | 2.9282 | 0.2 | 900 | 2.9923 | 0.4358 | | 2.9485 | 0.23 | 1050 | 2.9845 | 0.4357 | | 2.9365 | 0.27 | 1200 | 2.9749 | 0.4375 | ... | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.8215 | 1.7 | 7650 | 2.8943 | 0.4496 | | 2.7714 | 1.74 | 7800 | 2.8914 | 0.4501 | | 2.8132 | 1.77 | 7950 | 2.8913 | 0.4500 | | 2.8505 | 1.8 | 8100 | 2.8906 | 0.4502 | | 2.8294 | 1.84 | 8250 | 2.8901 | 0.4502 | | 2.7977 | 1.87 | 8400 | 2.8891 | 0.4499 | | 2.7501 | 1.9 | 8550 | 2.8878 | 0.4505 | | 2.8038 | 1.94 | 8700 | 2.8883 | 0.4504 | | 2.7547 | 1.97 | 8850 | 2.8876 | 0.4502 | ---
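A minimal usage sketch (not part of the original card), running the checkpoint with the text-generation pipeline and the sampling parameters listed in the card's inference settings:

```python
# Minimal sketch (not from the original card): text generation with the
# sampling parameters from the model card's inference settings above.
from transformers import pipeline

generator = pipeline("text-generation", model="BEE-spoke-data/verysmol_llama-v11-KIx2")

result = generator(
    "Kennesaw State University is a public",
    max_new_tokens=64,
    do_sample=True,
    temperature=0.85,
    repetition_penalty=1.35,
    no_repeat_ngram_size=5,
)
print(result[0]["generated_text"])
```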
danielhanchen/open_llama_3b_600bt_preview
danielhanchen
"2023-06-07T06:26:48Z"
1,685
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-27T17:14:59Z"
--- license: apache-2.0 language: - en --- # ARCHIVED. ## Download from original repo: https://huggingface.co/openlm-research/open_llama_3b_600bt_preview ### I made a few PRs to the original repo to include my changes! Original model from https://huggingface.co/openlm-research/open_llama_3b_600bt_preview. Example below edited from https://github.com/openlm-research/open_llama ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "openlm-research/open_llama_3b_600bt_preview" fast_model_name = "danielhanchen/open_llama_3b_600bt_preview" tokenizer = AutoTokenizer.from_pretrained(fast_model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype = torch.float16, device_map = "auto") prompt = "Q: What is the largest animal?\nA:" input_ids = tokenizer(prompt, return_tensors = "pt").input_ids print( tokenizer.decode( model.generate( input_ids, max_new_tokens = 32).ravel() ) ) ``` This repo includes: 1) Ported `LlamaTokenizer` to `LlamaTokenizerFast` via a few lines of code. Loading via `AutoTokenizer` takes 4 to 5 minutes. Now, a few seconds! Essentially the porting is done via the below code: ``` # from huggingface_hub import notebook_login # notebook_login() from transformers import LlamaTokenizerFast from tokenizers import AddedToken tokenizer = LlamaTokenizerFast.from_pretrained( "openlm-research/open_llama_3b_600bt_preview", add_bos_token = True, add_eos_token = False, # Original LLaMA is False -> add </s> during processing. bos_token = AddedToken("<s>", single_word = True), eos_token = AddedToken("</s>", single_word = True), unk_token = AddedToken("<unk>", single_word = True), pad_token = AddedToken("<unk>", single_word = True) ) tokenizer.push_to_hub("open_llama_3b_600bt_preview") ``` 2) `AutoTokenizer` does not recognize the BOS, EOS and UNK tokens. Weirdly `<unk>` ie the 0 token was added instead of the `<s>` or `</s>` token. 3) Manually added BOS `<s>`, EOS `</s>`, UNK `<unk>` tokens, with PAD (padding) being also the `<unk>` token.
MaziyarPanahi/mergekit-slerp-lvhhlmq-GGUF
MaziyarPanahi
"2024-06-16T17:05:38Z"
1,685
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "gemma", "text-generation", "mergekit", "merge", "base_model:beomi/gemma-ko-2b", "base_model:unsloth/gemma-2b", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-lvhhlmq" ]
text-generation
"2024-06-16T16:57:24Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - gemma - text-generation - mergekit - merge - base_model:beomi/gemma-ko-2b - base_model:unsloth/gemma-2b - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-lvhhlmq-GGUF base_model: mergekit-community/mergekit-slerp-lvhhlmq inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-lvhhlmq-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-lvhhlmq-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-lvhhlmq](https://huggingface.co/mergekit-community/mergekit-slerp-lvhhlmq) ## Description [MaziyarPanahi/mergekit-slerp-lvhhlmq-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-lvhhlmq-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-lvhhlmq](https://huggingface.co/mergekit-community/mergekit-slerp-lvhhlmq). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
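A minimal usage sketch (not part of the original card): fetching one GGUF file from this repo with huggingface_hub and loading it with llama-cpp-python; the quant file name is an assumption, so check the repo's file list for the real names.

```python
# Minimal sketch (not from the original card): download one GGUF quant and run
# it with llama-cpp-python. The filename below is a hypothetical example.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/mergekit-slerp-lvhhlmq-GGUF",
    filename="mergekit-slerp-lvhhlmq.Q4_K_M.gguf",  # hypothetical quant name
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
print(llm("The capital of France is", max_tokens=16)["choices"][0]["text"])
```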
MaziyarPanahi/mergekit-slerp-jdsasof-GGUF
MaziyarPanahi
"2024-06-17T00:08:27Z"
1,685
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B", "base_model:Endevor/InfinityRP-v1-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-jdsasof" ]
text-generation
"2024-06-16T23:46:05Z"
---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- mergekit
- merge
- base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B
- base_model:Endevor/InfinityRP-v1-7B
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: mergekit-slerp-jdsasof-GGUF
base_model: mergekit-community/mergekit-slerp-jdsasof
inference: false
model_creator: mergekit-community
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# [MaziyarPanahi/mergekit-slerp-jdsasof-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jdsasof-GGUF)
- Model creator: [mergekit-community](https://huggingface.co/mergekit-community)
- Original model: [mergekit-community/mergekit-slerp-jdsasof](https://huggingface.co/mergekit-community/mergekit-slerp-jdsasof)

## Description
[MaziyarPanahi/mergekit-slerp-jdsasof-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jdsasof-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-jdsasof](https://huggingface.co/mergekit-community/mergekit-slerp-jdsasof).

### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server (see the download-and-load sketch below).
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
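## Example: downloading and loading a quantized file (illustrative sketch)

As a rough illustration, the sketch below fetches one quantized file from this repo with `huggingface_hub` and loads it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The filename is an assumed example rather than a guarantee that this exact file exists; check the repository's file list for the quantization you want.

```python
# Minimal sketch, assuming huggingface_hub and llama-cpp-python are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantization level to the local cache (the filename is an assumption).
gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/mergekit-slerp-jdsasof-GGUF",
    filename="mergekit-slerp-jdsasof.Q4_K_M.gguf",
)

# Load the downloaded GGUF file and run a simple completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Tell me a short story about a wandering chef.", max_tokens=200)
print(result["choices"][0]["text"])
```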
facebook/convnext-large-224-22k-1k
facebook
"2023-11-21T08:39:30Z"
1,684
2
transformers
[ "transformers", "pytorch", "tf", "convnext", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2201.03545", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# ConvNeXT (large-sized model)

ConvNeXT model pre-trained on ImageNet-22k and fine-tuned on ImageNet-1k at resolution 224x224. It was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545) by Liu et al. and first released in [this repository](https://github.com/facebookresearch/ConvNeXt).

Disclaimer: The team releasing ConvNeXT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The authors started from a ResNet and "modernized" its design by taking the Swin Transformer as inspiration.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/convnext_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=convnext) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify a sample image into one of the 1,000 ImageNet classes:

```python
from transformers import ConvNextImageProcessor, ConvNextForImageClassification
import torch
from datasets import load_dataset

# load a sample cat image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

processor = ConvNextImageProcessor.from_pretrained("facebook/convnext-large-224-22k-1k")
model = ConvNextForImageClassification.from_pretrained("facebook/convnext-large-224-22k-1k")

inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1k ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/convnext). An illustrative pipeline-based alternative is also sketched at the end of this card.

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2201-03545,
  author     = {Zhuang Liu and Hanzi Mao and Chao{-}Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title      = {A ConvNet for the 2020s},
  journal    = {CoRR},
  volume     = {abs/2201.03545},
  year       = {2022},
  url        = {https://arxiv.org/abs/2201.03545},
  eprinttype = {arXiv},
  eprint     = {2201.03545},
  timestamp  = {Thu, 20 Jan 2022 14:21:35 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2201-03545.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
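### Appendix: classification with the pipeline API (illustrative sketch)

For a quicker start, the same checkpoint can also be wrapped in the high-level `pipeline` helper; the sketch below is an illustrative alternative to the explicit processor/model code above and uses one of the widget images declared in this card.

```python
# Minimal sketch using the image-classification pipeline as a convenience wrapper.
from transformers import pipeline

classifier = pipeline("image-classification", model="facebook/convnext-large-224-22k-1k")

# The pipeline accepts a URL, local path, or PIL image.
predictions = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")

# Each prediction is a dict with an ImageNet label and a confidence score.
print(predictions[0]["label"], predictions[0]["score"])
```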
Undi95/UndiMix-v4-13B
Undi95
"2023-11-17T21:08:17Z"
1,684
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-12T22:58:21Z"
---
license: cc-by-nc-4.0
---

<!-- description start -->
## Description

This repo contains fp16 files of the personal mix "UndiMix-v4". It can be hot, serious, or playful, and it can use emoji thanks to llama-2-13b-chat-limarp-v2-merged.

Atomicorn...

Hope you will like this one kek, you waited enough.
<!-- description end -->

<!-- description start -->
## Models used

- Undi95/ReMM-v2-Kimiko-v2-13B (0.272) (base)
- The-Face-Of-Goonery/Huginn-13b-v1.2 (0.264)
- Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged (0.264)
- jondurbin/airoboros-l2-13b-2.2 (0.10)
- IkariDev/Athena-v1 (0.10)
<!-- description end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

An illustrative sketch of applying this template with transformers is appended at the end of this card.

Special thanks to Sushi.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__UndiMix-v4-13B)

| Metric              | Value |
|---------------------|-------|
| Avg.                | 51.77 |
| ARC (25-shot)       | 61.95 |
| HellaSwag (10-shot) | 83.88 |
| MMLU (5-shot)       | 56.9  |
| TruthfulQA (0-shot) | 48.96 |
| Winogrande (5-shot) | 76.16 |
| GSM8K (5-shot)      | 13.72 |
| DROP (3-shot)       | 20.82 |
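## Example: prompting with transformers (illustrative sketch)

The sketch below applies the Alpaca template above to the fp16 weights using plain transformers. The instruction text and generation settings are illustrative assumptions, not recommended values; loading a 13B model in fp16 with `device_map="auto"` also assumes `accelerate` is installed and enough GPU memory is available.

```python
# Minimal sketch, assuming transformers, torch and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/UndiMix-v4-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Fill the Alpaca template from the card with an example instruction.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short, playful greeting.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```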