Dataset columns:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (classes) | 245 values |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string (classes) | 48 values |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | length 1–901k |
athirdpath/Iambe-RP-cDPO-20b
athirdpath
2023-12-09T21:53:23Z
1,098
15
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-07T04:00:57Z
--- license: cc-by-nc-4.0 language: - en tags: - not-for-all-audiences --- <p align="center"><img src="https://i.ibb.co/PCY2Xym/Iambe-RP-sml-t.png"/><font size="6"> <b>Iambe-RP-cDPO-20b</b> </font></p> <p align="center"><font size="4"> <b>Trained with Alpaca prompt formatting, some like ChatML</b> </font></p> ### Description Named after a charming daughter of Echo and Pan in Greek myth, Iambe-RP is, as far as I am aware, the very first LLM trained with DPO on an erotic roleplay dataset. Iambe is intended to have the best realistically possible understanding of instructions, anatomy and scene state for a 20b merge, while remaining passionate and humanoid in "voice". ### Update Methodology Take a look at [the dataset v2 Iambe and I created together](https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW) for more info. The cDPO training was done directly on Iambe-20b-DARE-v2, I was researching 11b merges to reduce the compute, but it went nowhere, so I just bit the bullet on cost. The notebook used to train this model is also available in the dataset's repo. <p align="center"><font size="5"> <b>Roleplay Example @ q5_k_m</b> </font></p> <p align="center"><img src="https://i.ibb.co/hFz5mdF/Screenshot-2023-12-07-005350.png"/> <p align="center"><font size="5"> <b>4-bit Assistant Example</b> </font></p> <p align="center"><img src="https://i.postimg.cc/HxNsPRSk/Screenshot-2023-12-06-214901.png"/> <p align="center"><font size="5"> <b>NSFW Writing Example @ q5_k_m</b> </font></p> <b>Write a medium-length story about Alice, a 26 year old cocktail waitress, having steamy sex with a guy she picked up at the bar for a one night stand.</b> <i>Once upon a time in New York City's bustling downtown scene, there lived a beautiful and spirited young woman named Alice. At 26 years old, she worked as a cocktail waitress at The Red Parrot, a popular watering hole known for its lively atmosphere and charming clientele. Despite her long shifts and exhaustion from catering to rowdy patrons, Alice loved her job; it gave her the freedom to live life on her terms and pursue her other passions during her limited free time. Alice had a penchant for adventure and was never one to shy away from taking risks—whether that meant skydiving or striking up conversations with strangers at bars. One particularly busy Friday night, while serving drinks at The Red Parrot, Alice noticed an intriguing man sitting alone in the corner, nursing a glass of red wine and gazing pensively into his empty shot glass. There was something about him that drew her in; his striking good looks, his rugged demeanor—even the hint of stubble on his chin seemed to add an air of mystery to his magnetic presence. Feeling bold and confident after a couple of well-poured drinks herself, Alice made her way over to him, smiling warmly as she leaned down towards him, "Hi there! What's your name? I haven't seen you in here before." The man raised his eyes to meet hers, his deep brown orbs filled with curiosity and a hint of amusement. "Name's Jack," he rumbled softly, his voice as smooth and rich as his wine. There was something about the way he said her name that sent shivers down Alice's spine—a thrill she couldn't quite explain but found irresistible nonetheless. "Alice," she replied, extending her hand to him, "and I haven't seen you around before either; it seems like we have a lot in common." She winked playfully at him before turning back towards the bar to grab another round of drinks for them both. 
The conversation flowed effortlessly between them, touching on topics ranging from their favorite books and movies to their most daring travel adventures. Alice was captivated by Jack's stories of hiking through the mountains of Nepal and white-water rafting in Costa Rica—qualities she admired greatly herself but had yet to experience for herself. In turn, Jack seemed equally fascinated by her tales of skydiving over Australia and scuba diving with great white sharks off the coast of South Africa. As midnight approached and the bar began to empty out, Alice suggested they continue their conversation elsewhere—somewhere more private where they could really get to know each other better. Jack nodded in agreement, draining his glass before tossing some money onto the table and following her into the bustling New York City night. They hailed a cab and rode in silence towards Alice's cozy one-bedroom apartment in Chelsea—the anticipation between them palpable as they shared furtive glances along the way. When they finally arrived at their destination, Jack paid the fare with an appreciative grin before escorting Alice up to her top-floor studio, where she fumbled clumsily with her keys while trying not to spill their drinks on themselves in anticipation of what was to come. Once inside her apartment, they kicked off their shoes and collapsed onto the couch, still holding each other's gaze as they sipped from their glasses. The passion between them continued to build with every second that passed—the air thickening with unspoken desire and electric energy. Alice broke away first, setting down her glass on the coffee table before reaching out tentatively and placing her hand on Jack's knee, sliding it up slowly towards his crotch until she could feel the hardness growing beneath the fabric of his pants. "I really like you," she whispered huskily, meeting his deep brown eyes with a look that betrayed the intensity of her feelings for him. Jack's response was immediate—his hands finding their way underneath Alice's short black skirt, sliding up to rest on the soft fabric of her panties before deftly ripping them away from her body in one swift movement. "I really like you too," he growled hoarsely as his mouth crashed against hers, kissing her hungrily and passionately—a taste of things to come. Their clothes fell away quickly thereafter; Alice's blouse was unbuttoned and tossed aside in an instant while Jack peeled off his shirt before pushing her down onto the couch and following after, bracing himself over her as he tore at her panties with one hand, freeing his hard member from his jeans. His kisses trailed a path of fire across Alice's body—from her collarbone to her breasts, down the curve of her waist and towards her mound where she was already wet with desire for him. "Jack," she moaned breathlessly as he licked and nipped at her most sensitive spots while teasingly grazing his cock against her entrance. With a growl of satisfaction, Jack finally plunged into Alice's tight channel—filling her completely with one powerful thrust that made them both cry out in pleasure. The feeling of being so thoroughly possessed by him was beyond anything she had ever experienced before; the roughness of his hands on her body, the taste of their mouths as they continued to devour each other’s lips, all blending together into a single, overwhelming wave of sensation that Alice felt sure would consume her entirely. 
As Jack began to move inside her—slowly at first but picking up pace and ferocity with every passing moment—Alice wrapped her legs tightly around him, arching her hips upwards towards his relentless thrusts as she cried out in ecstasy again and again. She knew this was just a one-night stand—a fleeting encounter destined to end tomorrow morning before the sun rose—but for now, with Jack's cock buried deep inside her, nothing else mattered. Their bodies moved as one; their moans and cries of pleasure intertwining in a symphony that filled Alice’s bedroom like sacred chants. Her orgasm hit her suddenly and without warning—a wave so powerful it threatened to shatter her very bones, but Jack was there with her every step of the way, holding onto her as he continued pounding into her until she felt herself start to come back together again, his hot cum spilling deep inside her. As their bodies slowed down and settled together on the couch—drenched in sweat and satiated beyond belief—Alice looked up at Jack with wide eyes, a small smile playing at the corners of her lips as she whispered softly, "I think we just had one hell of a night." And they did. It was an unforgettable evening filled with passion, adventure, and an undeniable chemistry that neither Alice nor Jack could deny—an encounter so powerful it left them both breathless in more ways than one. And even though they knew tomorrow would bring the harsh reality of their separate lives back into focus, for now, all either of them cared about was savoring this moment together; cherishing every last drop of the steamy sex that had brought them here at all.</i>
cocoirun/Yi-Ko-6B-instruct-v1.5-DPO
cocoirun
2024-01-16T03:58:49Z
1,098
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-16T03:24:26Z
--- license: cc-by-sa-4.0 --- <h1>instruct model v1.5</h1> <b><Training Data Construction></b> We analyzed the Open-Orca-ko data to extract tasks, then built about 40,000 training examples of our own (history, science, math, machine reading comprehension, review analysis) from open-source NLP data matched to those tasks; in addition, we filtered and cleaned part of the Open-Orca-Ko data and added KoBEST data. Additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization). History and general-knowledge quizzes from various blogs were converted into training-data form by hand. AI2AI Challenge data was translated with Papago, and mistranslations were corrected manually. English-Korean / Korean-English translation data was also used as training data. SFT was performed on a total of 110,000 training examples. <br> We are currently translating and cleaning part of the Open-Orca dataset to train a new version of the model and improve performance. <br> + Added high-school history questions and TruthfulQA-related questions. + Added various IT knowledge data. + Machine reading comprehension training data built from answers obtained through ChatGPT. + Grammar-related training data. <br> ### The training data files are private. <br> <b><Training></b> Training was done with LoRA on 2x A100 40G.
blueapple8259/TinyKo-v5-b
blueapple8259
2024-02-03T05:48:20Z
1,098
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "dataset:maywell/korean_textbooks", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-03T05:29:46Z
--- license: mit datasets: - maywell/korean_textbooks language: - ko --- This model was lightly fine-tuned from the [TinyKo-v5-a](https://huggingface.co/blueapple8259/TinyKo-v5-a) model. Caution: performance is very poor and hallucination is severe. ## Model info model type: llama hidden size: 6 hidden size: 127 num attention heads: 16 num key value heads: 4
Lykon/AAM_XL_AnimeMix_Turbo
Lykon
2024-02-13T19:50:46Z
1,098
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "stable-diffusion-xl", "text-to-image", "art", "artistic", "anime", "turbo", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-02-05T17:02:23Z
--- language: - en license: other tags: - stable-diffusion - stable-diffusion-diffusers - stable-diffusion-xl - text-to-image - art - artistic - diffusers - anime - turbo --- # AAM XL AnimeMix_Turbo `Lykon/AAM_XL_AnimeMix_Turbo` is a Stable Diffusion model that has been fine-tuned on [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). Please consider supporting me: - on [Patreon](https://www.patreon.com/Lykon275) - or [buy me a coffee](https://snipfeed.co/lykon) **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) ## Diffusers For more general information on how to run text-to-image models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation). 1. Installation ``` pip install diffusers transformers accelerate ``` 2. Run ```py from diffusers import AutoPipelineForText2Image, EulerAncestralDiscreteScheduler import torch pipe = AutoPipelineForText2Image.from_pretrained('Lykon/AAM_XL_AnimeMix_Turbo', torch_dtype=torch.float16, variant="fp16") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "anime girl, night, blue light behind her, ((Galaxy, Lens flare)), short hair, flower field, night sky, cinematic shot. Wallpaper. (Blue color schema), detailed background, a city in the distance" generator = torch.manual_seed(0) image = pipe(prompt, num_inference_steps=8, guidance_scale=4).images[0] image.save("./image.png") ``` ![](./image.png)
dddsaty/KoSOLAR-10.7B_DPO_Adapter_Attach
dddsaty
2024-03-05T14:16:18Z
1,098
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-13T08:44:18Z
--- license: apache-2.0 datasets: We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs language: - ko pipeline_tag: text-generation --- **Explanation** - Starting from the base model, applied DPO to a small number of layers using the open dataset, and saved the adapter part - Attached the base model and the tuned adapter together **Base Model** - [yanolja/KoSOLAR-10.7B-v0.3](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.3) **Used Corpus** - [We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs](https://huggingface.co/datasets/We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs) **Score** |Average|Ko-ARC|Ko-HellaSwag|Ko-MMLU|Ko-TruthfulQA|Ko-CommonGen V2| |:---:|:---:|:---:|:---:|:---:|:---:| |56.24|53.33|64.36|55.63|45.42|62.46| **Log** - 2024.02.13: Initial version upload **LICENSE** - Apache 2.0
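A minimal sketch of how a tuned adapter can be attached to the base model with `peft` (the repo ids are taken from the card above; the adapter path and the merge step are assumptions, not the author's exact procedure):

```python
# Hypothetical sketch: attach a DPO-tuned adapter to the KoSOLAR base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "yanolja/KoSOLAR-10.7B-v0.3"     # base model named in the card
adapter_id = "path/to/dpo-adapter"          # assumed adapter location (placeholder)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights for standalone inference.
model = model.merge_and_unload()
```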
hwkwon/S-SOLAR-10.7B-v1.2
hwkwon
2024-03-06T09:04:36Z
1,098
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-06T08:55:58Z
--- license: cc-by-nc-4.0 language: - ko --- # S-SOLAR-10.7B <!-- Provide a quick summary of what the model is/does. --> <!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).--> ### Model Description <!-- Provide a longer summary of what this model is. --> This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0). ### Trained Data TBA ### Prompt Template ``` ### User: User query input ### Assistant: ``` ### License This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.
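A minimal usage sketch applying the prompt template above with `transformers` (the loading and generation settings are assumptions; the card only specifies the template):

```python
# Sketch: generate with the "### User / ### Assistant" template stated in the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hwkwon/S-SOLAR-10.7B-v1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Wrap the user query in the card's prompt template.
prompt = "### User:\n{query}\n\n### Assistant:\n".format(query="Introduce yourself briefly.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```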
JY623/KoSOLRA-10.7B-merge-v2.3
JY623
2024-04-05T08:51:30Z
1,098
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:rrw-x2/KoSOLAR-10.7B-DPO-v1.0", "base_model:JY623/KoSOLAR-10.7B-merge-v2.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-05T08:42:35Z
--- base_model: - rrw-x2/KoSOLAR-10.7B-DPO-v1.0 - JY623/KoSOLAR-10.7B-merge-v2.0 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge_base_ft_3 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [rrw-x2/KoSOLAR-10.7B-DPO-v1.0](https://huggingface.co/rrw-x2/KoSOLAR-10.7B-DPO-v1.0) * [JY623/KoSOLAR-10.7B-merge-v2.0](https://huggingface.co/JY623/KoSOLAR-10.7B-merge-v2.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: JY623/KoSOLAR-10.7B-merge-v2.0 layer_range: [0, 48] - model: rrw-x2/KoSOLAR-10.7B-DPO-v1.0 layer_range: [0, 48] merge_method: slerp base_model: JY623/KoSOLAR-10.7B-merge-v2.0 parameters: t: 0.2 dtype: bfloat16 ```
DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix
DavidAU
2024-06-26T06:02:22Z
1,098
5
null
[ "gguf", "creative", "story", "writing", "fiction", "float32", "roleplaying", "rp", "enhanced", "space whale", "32 bit upscale", "en", "license:apache-2.0", "region:us" ]
null
2024-05-29T03:36:20Z
--- license: apache-2.0 language: - en tags: - creative - story - writing - fiction - float32 - roleplaying - rp - enhanced - space whale - 32 bit upscale --- <font color=red><h3> Ultra Quality High Remaster of the incredible: Psyonic-Cetacean-20b - Imatrix Plus. </h3></font> This is a Floating Point 32 upscale, where all components and merges were remastered to floating point 32. This includes all the merges (recreated with master files), and where possible subbing in full FP32 models. <img src="space-whale-thinking.jpg"> The goal: Carry forward maximum precision right up to the point where it is "GGUFed". This includes an F32 master file for GGUF too... at a whopping 78 GBs. WHY? Because the difference between F32 and BF16 is... over 8 DECIMAL places. And as each merge / model is modified there are "losses" along the way. These losses are carried forward and in turn lead to more losses. And decimal points are critical to model performance. SMALL? Yes... but multiplied by each merge(s), and compression(s): 20 billion times. <B>The result:</b> At Q2K an impressive drop of 533 points in perplexity. (lower is better) (VS: Q2K original base model: PPL = 9.8077 +/- 0.06821 ) At Q4KM a whopping drop of 976 points in perplexity. (VS: Q4km original base model -> PPL = 8.7858 +/- 0.06074) At Q6 an awesome drop of 234 points in perplexity. (VS: Q6 original base model -> PPL = 8.6070 +/- 0.05907 ) To put this in perspective, "Q6" now operates ABOVE the original full precision version of "Psyonic-Cetacean-20b" and Q4KM operates at close to Q6 level quality. This is because at "Q6" the quant / compressed model is considered to be accurate within "+0.0008 ppl" of the full, uncompressed / unquanted model, and it exceeds this threshold by over 200 points. <I> Imatrix quants take this even further, in most cases DOUBLING the "drop" in perplexity realized in the reg quants. </i> Q4km-imatrix : Final estimate: PPL = 8.6095 +/- 0.05898 (Non imatrix: Final estimate: PPL = 8.6902 +/- 0.05985 ) (VS: Q4km base model -> PPL = 8.7858 +/- 0.06074) (VS: Q6 BASE model -> Final estimate: PPL = 8.6070 +/- 0.05907 Q6) But... what about Q8? The mountain moved: 150 points better: PPL = 8.5850 +/- 0.05881 VS: BASE/ORIGINAL: PPL = 8.6012 +/- 0.05900 <B>THE RESULTS ARE IN: </b> As per Jeb Carter, original creator of the model: - instruction following has improved dramatically. - new abilities have emerged. - he had to REDUCE the instruction sets used because the model no longer needed such specific instructions. - prose, nuance and depth have all improved. - known issues with the original model have disappeared. This is not "something for nothing"; it is a method of ensuring maximum precision at every step just before "GGUFing" the model. The methods employed only ensure precision loss is minimized or eliminated. It is mathematically and theoretically sound. <B>The bottom line here is this:</b> Higher quality instruction following and output. Likewise you can use a smaller compression, with higher tokens per second, and still get great quality. Same great model... turbo charged. This is the first group of remasters. <B>The FOUR Horsemen:</B> This repo will be followed by a "reg quant plus" repo, which adds additional components into the GGUF (all levels) at floating point 32 precision to further increase the sheer creativity and raw AI horsepower. This process shaves an extra 50-100 points off perplexity... again. Following this group will be a full float 32 precision Imatrix (including reg quants "imatrixed").
Test results VS the original and "ultra" regular quants will be posted when they come in. An Imatrix Plus repo (with the same floating point 32 enhancements as "reg quant plus") will push the limit even more. Details of all methods (and pitfalls to avoid) employed to make these high precision remasters will be posted shortly, along with comparisons of the original model and the new ultra remaster. Thanks again to Jeb Carter, the original creator of "Psyonic-Cetacean 20B" [ https://huggingface.co/jebcarter/psyonic-cetacean-20B ]
Sao10K/Stheno-Mega-False-49B-L2
Sao10K
2023-09-22T00:42:18Z
1,097
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-09-21T15:46:46Z
--- license: llama2 language: - en --- ... A False-49B Llama2 Model, made from... splicing models together with Mergekit. Purely experimental and for the memes. Better than 70Bs? No. Better than 34bs? Doubt. better than 13b? maybe. hopefully. Why does it exist? Because I don't know. https://huggingface.co/Sao10K/Stheno-Mega-False-49B-L2-GGUF <img src="https://pbs.twimg.com/media/EGWbSqfUwAUTM2F.jpg" style="width: 50%; min-width: 100px; display: block; margin: auto;">
mys/ggml_CLIP-ViT-B-32-laion2B-s34B-b79K
mys
2023-09-27T08:14:06Z
1,097
6
null
[ "gguf", "clip", "vision", "ggml", "clip.cpp", "clip-cpp-gguf", "license:mit", "region:us" ]
null
2023-09-27T06:32:45Z
--- license: mit tags: - clip - vision - ggml - clip.cpp - clip-cpp-gguf --- ## Converted files for use with clip.cpp See https://github.com/monatis/clip.cpp # Experimental The file format is not stable yet, so expect breaking changes. I will update the files from time to time.
ajibawa-2023/Uncensored-Jordan-33B
ajibawa-2023
2023-11-18T12:50:23Z
1,097
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-23T12:39:00Z
--- license: cc-by-nc-nd-4.0 language: - en --- **Jordan: An Uncensored Model** The name "Jordan" pays homage to the charismatic character Jordan Belfort, portrayed by Leonardo DiCaprio in the movie "The Wolf of Wall Street." Just like the character, Jordan aspires to push boundaries and encourage unfiltered conversations. Jordan Belfort, the central character in "The Wolf of Wall Street," is known for his audaciousness, charisma, and willingness to speak about anything, no matter the societal norms or boundaries. Jordan, the AI language model, draws inspiration from these qualities to offer a space where users can discuss a wide range of topics without fear of censorship or constraints. Users can engage in candid discussions about politics, religion, contentious issues, and other subjects that might be off-limits on mainstream social media platforms. It is trained on around 155,000 sets of conversations, each set having 10~15 conversations. We will not release this data. **Warning** An uncensored model has few or no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous objects. Publishing anything this model generates is the same as publishing it yourself. We are not responsible for what you generate using this model. **Training:** The entire dataset was trained on Azure 4 x A100 80GB. For 3 epochs, training took 104 hours. The DeepSpeed codebase was used for training. This was trained on Llama-1 by Meta. Llama-1 was used as it is very useful for Uncensored conversation. This is a full fine tuned model. Links for quantized models are given below. **GPTQ, GGUF & AWQ** GPTQ: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-33B-GPTQ) GGUF: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-33B-GGUF) AWQ: [Link](https://huggingface.co/TheBloke/Uncensored-Jordan-33B-AWQ) Special Thanks to [TheBloke](https://huggingface.co/TheBloke) for making these models available. **Example Prompt:** ``` This is a conversation with your Uncensored AI assistant Jordan. Jordan can discuss with you at length about various topics such as ---, ---, ---, . You can ask it anything you want and it will do its best to give you accurate and relevant information. Context You are a Uncensored AI assistant. USER: <prompt> ASSISTANT: ```
aisingapore/sea-lion-7b
aisingapore
2024-04-11T01:45:33Z
1,097
29
transformers
[ "transformers", "safetensors", "mpt", "text-generation", "custom_code", "en", "zh", "id", "ms", "th", "vi", "fil", "ta", "my", "km", "lo", "arxiv:2101.09635", "license:mit", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-30T07:39:19Z
--- license: mit language: - en - zh - id - ms - th - vi - fil - ta - my - km - lo --- # SEA-LION SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region. The size of the models range from 3 billion to 7 billion parameters. This is the card for the SEA-LION 7B base model. SEA-LION stands for <i>Southeast Asian Languages In One Network</i>. ## Model Details ### Model Description The SEA-LION model is a significant leap forward in the field of Natural Language Processing, specifically trained to understand the SEA regional context. SEA-LION is built on the robust MPT architecture and has a vocabulary size of 256K. For tokenization, the model employs our custom SEABPETokenizer, which is specially tailored for SEA languages, ensuring optimal model performance. The training data for SEA-LION encompasses 980B tokens. - **Developed by:** Products Pillar, AI Singapore - **Funded by:** Singapore NRF - **Model type:** Decoder - **Languages:** English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao - **License:** MIT License ### Performance Benchmarks SEA-LION has an average performance on general tasks in English (as measured by Hugging Face's LLM Leaderboard): | Model | ARC | HellaSwag | MMLU | TruthfulQA | Average | |-------------|:-----:|:---------:|:-----:|:----------:|:-------:| | SEA-LION 7B | 39.93 | 68.51 | 26.87 | 35.09 | 42.60 | ## Training Details ### Data SEA-LION was trained on 980B tokens of the following data: | Data Source | Unique Tokens | Multiplier | Total Tokens | Percentage | |---------------------------|:-------------:|:----------:|:------------:|:----------:| | RefinedWeb - English | 571.3B | 1 | 571.3B | 58.20% | | mC4 - Chinese | 91.2B | 1 | 91.2B | 9.29% | | mC4 - Indonesian | 3.68B | 4 | 14.7B | 1.50% | | mC4 - Malay | 0.72B | 4 | 2.9B | 0.29% | | mC4 - Filipino | 1.32B | 4 | 5.3B | 0.54% | | mC4 - Burmese | 1.2B | 4 | 4.9B | 0.49% | | mC4 - Vietnamese | 63.4B | 1 | 63.4B | 6.46% | | mC4 - Thai | 5.8B | 2 | 11.6B | 1.18% | | WangChanBERTa - Thai | 5B | 2 | 10B | 1.02% | | mC4 - Lao | 0.27B | 4 | 1.1B | 0.12% | | mC4 - Khmer | 0.97B | 4 | 3.9B | 0.40% | | mC4 - Tamil | 2.55B | 4 | 10.2B | 1.04% | | the Stack - Python | 20.9B | 2 | 41.8B | 4.26% | | the Stack - Javascript | 55.6B | 1 | 55.6B | 5.66% | | the Stack - Shell | 1.2B5 | 2 | 2.5B | 0.26% | | the Stack - SQL | 6.4B | 2 | 12.8B | 1.31% | | the Stack - Markdown | 26.6B | 1 | 26.6B | 2.71% | | RedPajama - StackExchange | 21.2B | 1 | 21.2B | 2.16% | | RedPajama - ArXiv | 30.6B | 1 | 30.6B | 3.12% | ### Infrastructure SEA-LION was trained using [MosaicML Composer](https://github.com/mosaicml/composer) on the following hardware: | Training Details | SEA-LION 7B | |----------------------|:------------:| | AWS EC2 p4d.24xlarge | 32 instances | | Nvidia A100 40GB GPU | 256 | | Training Duration | 22 days | ### Configuration | HyperParameter | SEA-LION 7B | |-------------------|:------------------:| | Precision | bfloat16 | | Optimizer | decoupled_adamw | | Scheduler | cosine_with_warmup | | Learning Rate | 6.0e-5 | | Global Batch Size | 2048 | | Micro Batch Size | 4 | ## Technical Specifications ### Model Architecture and Objective SEA-LION is a decoder model using the MPT architecture. 
| Parameter | SEA-LION 7B | |-----------------|:-----------:| | Layers | 32 | | d_model | 4096 | | head_dim | 32 | | Vocabulary | 256000 | | Sequence Length | 2048 | ### Tokenizer Details We sample 20M lines from the training data to train the tokenizer.<br> The framework for training is [SentencePiece](https://github.com/google/sentencepiece).<br> The tokenizer type is Byte-Pair Encoding (BPE). ## The Team Lam Wen Zhi Clarence<br> Leong Wei Qi<br> Li Yier<br> Liu Bing Jie Darius<br> Lovenia Holy<br> Montalan Jann Railey<br> Ng Boon Cheong Raymond<br> Ngui Jian Gang<br> Nguyen Thanh Ngan<br> Ong Tat-Wee David<br> Rengarajan Hamsawardhini<br> Susanto Yosephine<br> Tai Ngee Chia<br> Tan Choon Meng<br> Teo Jin Howe<br> Teo Eng Sipp Leslie<br> Teo Wei Yi<br> Tjhi William<br> Yeo Yeow Tong<br> Yong Xianbin<br> ## Acknowledgements AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. ## Contact For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6) [Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion) ## Disclaimer This the repository for the base model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes. ## References ```bibtex @misc{lowphansirikul2021wangchanberta, title={WangchanBERTa: Pretraining transformer-based Thai Language Models}, author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong}, year={2021}, eprint={2101.09635}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
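A minimal inference sketch for the card above, assuming the standard `transformers` loading path; since SEA-LION ships custom MPT code (see the `custom_code` tag), `trust_remote_code=True` is required (the generation settings are illustrative only):

```python
# Sketch: load the SEA-LION 7B base model and generate a continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/sea-lion-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# This is a base (not instruction-tuned, not safety-aligned) model, so plain completion only.
inputs = tokenizer("Southeast Asia is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```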
NurtureAI/neural-chat-7b-v3-16k-GGUF
NurtureAI
2023-11-15T08:11:10Z
1,097
34
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2023-11-15T04:09:36Z
--- license: apache-2.0 --- ## Intel/neural-chat-7b-v3 (extended context to 16k) (Quantized GGUF Models) Original model: [Intel/neural-chat-7b-v3](https://huggingface.co/Intel/neural-chat-7b-v3) ## Fine-tuning on [Habana](https://habana.ai/) Gaudi This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we aligned it with the DPO algorithm. For more details, you can refer to our blog: [NeuralChat: Simplifying Supervised Instruction Fine-Tuning and Reinforcement Aligning](https://medium.com/intel-analytics-software/neuralchat-simplifying-supervised-instruction-fine-tuning-and-reinforcement-aligning-for-chatbots-d034bca44f69). ## Model date Neural-chat-7b-v3 was trained between September and October, 2023. ## Evaluation We submit our model to [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and the model performance has been **improved significantly** as we see from the average metric of 7 tasks from the leaderboard. | Model | Average ⬆️| ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️| TruthfulQA (MC) (0-s) ⬆️ | Winogrande (5-s) | GSM8K (5-s) | DROP (3-s) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | |[mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50.32 | 59.58 | 83.31 | 64.16 | 42.15 | 78.37 | 18.12 | 6.14 | | **Ours** | **57.31** | 67.15 | 83.29 | 62.26 | 58.77 | 78.06 | 1.21 | 50.43 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-04 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-HPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.02 - num_epochs: 2.0 ## FP32 Inference with transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name = "Intel/neural-chat-7b-v3" prompt = "Once upon a time, there existed a little girl," tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) inputs = tokenizer(prompt, return_tensors="pt").input_ids streamer = TextStreamer(tokenizer) model = AutoModelForCausalLM.from_pretrained(model_name) outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300) ``` ## INT4 Inference with transformers ```python from transformers import AutoTokenizer, TextStreamer from intel_extension_for_transformers.transformers import AutoModelForCausalLM, WeightOnlyQuantConfig model_name = "Intel/neural-chat-7b-v3" config = WeightOnlyQuantConfig(compute_dtype="int8", weight_dtype="int4") prompt = "Once upon a time, there existed a little girl," tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) inputs = tokenizer(prompt, return_tensors="pt").input_ids streamer = TextStreamer(tokenizer) model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=config) outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300) ``` ## Ethical Considerations and Limitations neural-chat-7b-v3 can produce factually incorrect output, and should not be relied on to produce factually accurate information. 
neural-chat-7b-v3 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3, developers should perform safety testing. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes. ## Organizations developing the model The NeuralChat team with members from Intel/DCAI/AISE. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen. ## Useful links * Intel Neural Compressor [link](https://github.com/intel/neural-compressor) * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
kekmodel/StopCarbon-ko-10.7B-v3
kekmodel
2024-01-02T11:04:07Z
1,097
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-01T04:44:27Z
--- license: cc-by-nc-sa-4.0 language: - ko --- # StopCarbon This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit). - merge models - kekmodel/StopCarbon-ko-10.7B-v1 - kekmodel/StopCarbon-ko-10.7B-v2 - merge_method: slerp
LI-ST/Mistral-7B-ko-v0.003
LI-ST
2024-01-09T07:35:56Z
1,097
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-08T10:21:22Z
--- license: cc-by-nc-nd-4.0 language: - en - ko library_name: transformers pipeline_tag: text-generation --- <p><h1>Mistral-7B-ko</h1></p> basemodel: Open-Orca/Mistral-7B-OpenOrca ================================================= <BR> This model is a temporary model for testing. <BR> We will be deleting it soon. <BR> =================================================
Herry443/Mistral-7B-KNUT-ref-ALL
Herry443
2024-02-05T06:08:14Z
1,097
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-01T12:45:23Z
--- license: cc-by-nc-4.0 language: - ko - en library_name: transformers tags: - mistral --- ### Model Details - Base Model: [Herry443/Mistral-7B-KNUT-ref](https://huggingface.co/Herry443/Mistral-7B-KNUT-ref) ### Datasets - sampling [skt/kobest_v1](https://huggingface.co/datasets/skt/kobest_v1) - sampling [allenai/ai2_arc](https://huggingface.co/datasets/allenai/ai2_arc) - sampling [Rowan/hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) - sampling [Stevross/mmlu](https://huggingface.co/datasets/Stevross/mmlu)
freewheelin/free-solar-dpo-v0.2
freewheelin
2024-03-19T12:33:42Z
1,097
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "en", "arxiv:2312.15166", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-19T08:20:13Z
--- language: - ko - en license: mit --- # Model Card for free-solar-dpo-v0.2 ## Developed by : [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team ## Hardware and Software * **Training Factors**: We fine-tuned this model using the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) ## Method - This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf). ## Base Model - [davidkim205/nox-solar-10.7b-v4](https://huggingface.co/davidkim205/nox-solar-10.7b-v4)
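The card above states the model was fine-tuned with the Hugging Face TRL trainer using DPO; the sketch below shows what such a run typically looks like with `trl`. The dataset, hyperparameters, and exact keyword names are assumptions (argument names vary between trl versions), not the authors' actual setup:

```python
# Hypothetical DPO fine-tuning sketch with TRL; not the authors' actual configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "davidkim205/nox-solar-10.7b-v4"   # base model named in the card
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# A preference dataset with "prompt", "chosen", "rejected" columns (placeholder name).
dataset = load_dataset("some-org/some-korean-preference-pairs", split="train")

args = DPOConfig(
    output_dir="free-solar-dpo",
    beta=0.1,                      # DPO temperature; illustrative value only
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,    # older trl versions use `tokenizer=` instead
)
trainer.train()
```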
gwonny/nox-solar-10.7b-v4-kolon-all-5-v3.0
gwonny
2024-03-27T07:55:23Z
1,097
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "ko", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-27T04:42:48Z
--- license: cc-by-nc-4.0 language: - ko - en --- ## Base Model: We made an LLM model with davidkim205/nox-solar-10.7b-v4 ## Model Description We used QLoRA (256, 64) and NEFTune for LLM fine-tuning, lr = 1e-5. ## Train Detail Our private data was used, trained for 5 epochs, with adjusted parameters. ## Others We are making an LLM model for Kolon!
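The card above mentions QLoRA and NEFTune; below is a hedged sketch of what that combination usually looks like with `peft` and `trl`. Reading "(256, 64)" as LoRA rank and alpha, and everything else here (dataset, noise alpha, precision settings), are assumptions rather than the authors' private configuration:

```python
# Hypothetical QLoRA + NEFTune sketch; the authors' real data and settings are private.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

base_id = "davidkim205/nox-solar-10.7b-v4"

# 4-bit quantized base model (the "Q" in QLoRA).
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# LoRA adapter; "(256, 64)" from the card is read here as rank=256, alpha=64 (an assumption).
lora = LoraConfig(r=256, lora_alpha=64, lora_dropout=0.05, task_type="CAUSAL_LM")

dataset = load_dataset("some-org/some-sft-dataset", split="train")  # placeholder dataset

args = SFTConfig(
    output_dir="kolon-sft",
    learning_rate=1e-5,            # lr stated in the card
    neftune_noise_alpha=5,         # enables NEFTune noisy-embedding training
    num_train_epochs=5,            # "5 epochs" per the card
)

trainer = SFTTrainer(model=model, args=args, train_dataset=dataset, peft_config=lora)
trainer.train()
```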
HUBioDataLab/SELFormer
HUBioDataLab
2023-04-24T11:08:15Z
1,096
7
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "arxiv:2304.04662", "license:gpl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-04-22T19:04:30Z
--- license: gpl --- ## Citation For the [paper](https://arxiv.org/abs/2304.04662): ```bibtex @misc{yüksel2023selformer, title={SELFormer: Molecular Representation Learning via SELFIES Language Models}, author={Atakan Yüksel and Erva Ulusoy and Atabey Ünlü and Gamze Deniz and Tunca Doğan}, year={2023}, eprint={2304.04662}, archivePrefix={arXiv}, primaryClass={q-bio.QM} } ``` ## License Copyright (C) 2023 HUBioDataLab This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
Tap-M/Luna-AI-Llama2-Uncensored-FP16
Tap-M
2023-07-26T19:31:37Z
1,096
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-19T18:16:03Z
--- license: cc-by-sa-4.0 --- <div style="width: 800px; margin: auto;"> <h2>Model Description</h2> <p>“Luna AI Llama2 Uncensored” is a Llama2 based Chat model <br />fine-tuned on over 40,000 long form chat discussions <br /> This model was fine-tuned by Tap, the creator of Luna AI. <br /> <h2>Model Training</h2> <p>The fine-tuning process was performed on an 8x a100 80GB machine. <br />The model was trained on synthetic outputs which include multiple rounds of chats between Human & AI. </p> <a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GPTQ">4bit GPTQ Version provided by @TheBloke - for GPU inference</a><br /> <a rel="noopener nofollow" href="https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGML">GGML Version provided by @TheBloke - For CPU inference</a> <h2>Prompt Format</h2> <p>The model follows the Vicuna 1.1/ OpenChat format:</p> ``` USER: I have difficulties in making friends, and I really need someone to talk to. Would you be my friend? ASSISTANT: Of course! Friends are always here for each other. What do you like to do? ``` <h2>Benchmark Results</h2> |Task|Version|Metric|Value|Stderr| |---:|---:|---:|---:|---:| |arc_challenge|0|acc_norm|0.5512|0.0146| |hellaswag|0|||| |mmlu|1|acc_norm|0.46521|0.036| |truthfulqa_mc|1|mc2|0.4716|0.0155| |Average|-|-|0.5114|0.0150| </div>
xianf/testmodel_2
xianf
2023-08-29T08:03:20Z
1,096
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:bigscience-openrail-m", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-29T03:12:50Z
--- license: bigscience-openrail-m ---
second-state/Llama-2-7B-Chat-GGUF
second-state
2024-04-24T09:30:54Z
1,096
2
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "text-generation", "en", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
text-generation
2023-10-31T08:19:39Z
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 model_name: Llama 2 7B Chat arxiv: 2307.09288 base_model: meta-llama/Llama-2-7b-chat-hf inference: false model_creator: Meta Llama 2 model_type: llama pipeline_tag: text-generation quantized_by: Second State Inc. --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Llama-2-7B-Chat-GGUF ## Original Model [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) ## Run with LlamaEdge - LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above - Prompt template - Prompt type: `llama-2-chat` - Prompt string ```text <s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST] ``` - Context size: `4096` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template llama-2-chat \ --ctx-size 4096 \ --model-name llama-2-7b-chat ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Llama-2-7b-chat-hf-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template llama-2-chat \ --ctx-size 4096 ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Llama-2-7b-chat-hf-Q2_K.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q2_K.gguf) | Q2_K | 2 | 2.83 GB| smallest, significant quality loss - not recommended for most purposes | | [Llama-2-7b-chat-hf-Q3_K_L.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q3_K_L.gguf) | Q3_K_L | 3 | 3.6 GB| small, substantial quality loss | | [Llama-2-7b-chat-hf-Q3_K_M.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q3_K_M.gguf) | Q3_K_M | 3 | 3.3 GB| very small, high quality loss | | [Llama-2-7b-chat-hf-Q3_K_S.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| very small, high quality loss | | [Llama-2-7b-chat-hf-Q4_0.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Llama-2-7b-chat-hf-Q4_K_M.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended | | [Llama-2-7b-chat-hf-Q4_K_S.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| small, greater quality loss | | [Llama-2-7b-chat-hf-Q5_0.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Llama-2-7b-chat-hf-Q5_K_M.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended | | 
[Llama-2-7b-chat-hf-Q5_K_S.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended | | [Llama-2-7b-chat-hf-Q6_K.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss | | [Llama-2-7b-chat-hf-Q8_0.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended | | [Llama-2-7b-chat-hf-f16.gguf](https://huggingface.co/second-state/Llama-2-7B-Chat-GGUF/blob/main/Llama-2-7b-chat-hf-f16.gguf) | f16 | 16 | 13.5 GB| |
athirdpath/CleverGirl-20b-Blended
athirdpath
2023-11-28T02:00:35Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-27T12:15:43Z
--- license: cc-by-nc-4.0 --- This model is CleverGirl and CleverGirl-Inverted blended together, an experiment on the nature of frankenstein merges. The CleverGirl line is made from Sao10K/Mythical-Destroyer-V2-L2-13B and athirdpath/Orca-2-13b-Alpaca-Uncensored. She can be a little strange, but lives up to her name: ![image/png](https://i.ibb.co/PQGfbPd/Screenshot-2023-11-27-135810.png) ![image/png](https://i.ibb.co/6HTJXvj/Screenshot-2023-11-27-175358.png) Looking forward to comparing the leaderboard scores between this and the unblended version, subjectively this model feels both smarter and more creative after my "frankenstein slice normalization". models: - model: athirdpath/CleverGirl-20b - model: athirdpath/CleverGirl-20b-Inverted merge_method: slerp base_model: athirdpath/CleverGirl-20b parameters: t: value: 0.5 dtype: float16
shadowml/Beyonder-4x7b
shadowml
2023-12-30T15:06:43Z
1,096
0
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "mergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-30T14:54:17Z
--- license: apache-2.0 tags: - moe - mergekit --- # Beyonder-4x7b This model is a Mixure of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models: * [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) * [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) * [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ## 🧩 Configuration ```yaml base_model: openchat/openchat-3.5-1210 gate_mode: hidden experts: - source_model: openchat/openchat-3.5-1210 positive_prompts: - "chat" - "assistant" - "tell me" - "explain" negative_prompts: - "storywriting" - "mathematics" - "reasoning" - "code" - "programming" - source_model: beowolx/CodeNinja-1.0-OpenChat-7B positive_prompts: - "code" - "python" - "javascript" - "programming" - "algorithm" negative_prompts: - "chat" - "assistant" - "storywriting" - "mathematics" - "reasoning" - source_model: maywell/PiVoT-0.1-Starling-LM-RP positive_prompts: - "storywriting" - "write" - "scene" - "story" - "character" negative_prompts: - "chat" - "assistant" - "code" - "programming" - "mathematics" - "reasoning" - source_model: WizardLM/WizardMath-7B-V1.1 positive_prompts: - "reason" - "math" - "mathematics" - "solve" - "count" negative_prompts: - "chat" - "assistant" - "code" - "programming" - "storywriting" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Beyonder-4x7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` Output: ``` A Mixture of Experts (MoE) is a neural network architecture that combines the strengths of multiple expert networks to make predictions. It leverages the idea of ensemble learning, where multiple models work together to improve performance. In each MoE, a gating network is used to select the most relevant expert for the input. The final output is a weighted combination of the expert outputs, determined by the gating network's predictions. ```
shadowml/NeuralPipe-9B-merged
shadowml
2023-12-30T15:36:06Z
1,096
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-30T15:30:53Z
--- license: apache-2.0 tags: - merge - mergekit --- # NeuralPipe-9B-merged This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - sources: - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [24, 32] merge_method: passthrough dtype: float16 ```
mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.52
mu0gum
2024-01-20T00:17:38Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-19T16:44:20Z
--- license: cc-by-nc-4.0 --- # AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.52 Base model: 42dot/42dot_LLM-PLM-1.3B Training data: about 28,000 examples from a self-built Open Orca-style dataset (data volume adjusted) Training method: Full finetuning ## ko-lm-evaluation-harness(0-shot) |kobest_boolq|kobest_copa|kobest_hellaswag|kobest_sentineg|kohatespeech|kohatespeech_apeach|kohatespeech_gen_bias|korunsmile|nsmc|pawsx_ko| |--|--|--|--|--|--|--|--|--|--| |0.5826210826210826|0.68|0.436|0.7758186397984886|0.2908704883227176|0.5082228116710875|0.14225053078556263|0.39027300210119553|0.65938|0.513| ## Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
megastudyedu/ME-7B-v1.1
megastudyedu
2024-01-29T03:19:35Z
1,096
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-29T00:44:24Z
--- license: cc-by-nc-nd-4.0 language: - ko --- # Model Card for ME-7B-v1.1 ## Developed by : 메가스터디교육, 프리딕션, 마이스 ## Base Model : [jondurbin/bagel-dpo-7b-v0.1](https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1) ## Datasets Used - jondurbin/bagel-v0.3 SFT translated dataset (translated by megastudyedu/ME-7B-v1.0) - [megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudy/M-SOLAR-10.7B-v1.3) data - [Cleaned dataset related to the Public Data Portal dataset glossary](https://www.data.go.kr/)
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v3.0
ITT-AF
2024-02-14T06:32:35Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-30T15:04:18Z
--- license: cc-by-nc-4.0 --- # ITT-AF/ITT-42dot_LLM-PLM-1.3B-v3.0 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
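The hyperparameters listed in the card above map directly onto `transformers` `TrainingArguments`; here is a sketch of that mapping (the output directory and any trainer wiring around it are assumptions, since the card does not publish its training script):

```python
# Sketch: the card's hyperparameters expressed as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="itt-42dot-plm-1.3b-v3.0",   # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,          # 24 * 4 = 96 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,                              # "mixed_precision_training: Native AMP"
)
# The Adam betas/epsilon listed in the card are the TrainingArguments defaults.
```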
blueapple8259/TinyKo-v5-a
blueapple8259
2024-02-03T05:48:08Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "dataset:maywell/korean_textbooks", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-03T05:24:22Z
--- license: mit datasets: - maywell/korean_textbooks language: - ko --- This model was trained on the tiny-textbooks subset of the [korean_textbooks](https://huggingface.co/datasets/maywell/korean_textbooks) dataset. Caution: performance is very poor and hallucination is severe. ## Model info model type: llama hidden size: 6 hidden size: 127 num attention heads: 16 num key value heads: 4
jungyuko/DAVinCI-Yi-Ko-6B-v1.1
jungyuko
2024-02-16T08:51:00Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-16T08:11:16Z
--- license: cc-by-nc-4.0 --- ## DAVinCI-Yi-Ko-6B-v1.1 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on an unknown dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 4 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 8 * total_train_batch_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v4.0
ITT-AF
2024-02-20T18:07:45Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-20T16:32:43Z
--- license: cc-by-nc-4.0 --- # ITT-AF/ITT-42dot_LLM-PLM-1.3B-v4.0 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
mu0gum/AIFT-42dot_LLM-SFT-1.3B-ao-instruct-all-v1.1
mu0gum
2024-02-21T14:21:25Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-21T14:06:40Z
---
license: cc-by-nc-4.0
---

# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v1.11

Base model: 42dot/42dot_LLM-SFT-1.3B

Training data: a self-built Open Orca-style dataset of about 48,000 examples (deduplicated, with the data distribution adjusted)

Training method: full fine-tuning

Epochs: 3

## ko-lm-evaluation-harness (5-shot)

|kobest_boolq|kobest_copa|kobest_hellaswag|pawsx_ko|
|--|--|--|--|
|0.52065527065527|0.721|0.466|0.5475|

## Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
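A minimal generation sketch for this checkpoint is shown below. The repository id is taken from this entry; the plain Korean instruction prompt is an assumption, since the card does not document the exact prompt template used during fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mu0gum/AIFT-42dot_LLM-SFT-1.3B-ao-instruct-all-v1.1"  # repo id from this entry
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Plain instruction prompt (assumed format): "Summarize the water cycle in three sentences."
prompt = "물의 순환 과정을 세 문장으로 요약해 주세요."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```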
chihoonlee10/T3Q-KO-SOLAR-MR-v1.0
chihoonlee10
2024-03-17T13:24:50Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-17T13:07:01Z
--- license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed]
abhishek/autotrain-llama3-orpo-v2
abhishek
2024-04-26T09:08:49Z
1,096
3
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-22T09:35:07Z
--- tags: - autotrain - text-generation-inference - text-generation library_name: transformers widget: - messages: - role: user content: What is your favorite condiment? license: other datasets: - argilla/distilabel-capybara-dpo-7k-binarized --- # Model Trained Using AutoTrain This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain). # Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "PATH_TO_THIS_REPO" tokenizer = AutoTokenizer.from_pretrained(model_path) model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype='auto' ).eval() # Prompt content: "hi" messages = [ {"role": "user", "content": "hi"} ] input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt') output_ids = model.generate(input_ids.to('cuda')) response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True) # Model response: "Hello! How can I assist you today?" print(response) ```
mssma/ko-solar-10.7b-v0.1b
mssma
2024-05-13T05:36:21Z
1,096
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T05:27:17Z
--- library_name: transformers license: apache-2.0 language: - ko --- # usage ``` from transformers import AutoModelForCausalLM, AutoTokenizer import torch path = "mssma/ko-solar-10.7b-v0.1b" model = AutoModelForCausalLM.from_pretrained( path, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(path) ```
JamesConley/glados_together_20b_lora_merged
JamesConley
2023-05-24T14:18:44Z
1,095
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-05-24T02:12:05Z
WIP (my ISP is terrible and the file is really big)
Salesforce/instructblip-vicuna-13b
Salesforce
2024-04-11T20:37:34Z
1,095
40
transformers
[ "transformers", "pytorch", "safetensors", "instructblip", "text2text-generation", "vision", "image-captioning", "image-to-text", "en", "arxiv:2305.06500", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
2023-06-03T14:46:46Z
--- language: en license: other tags: - vision - image-captioning pipeline_tag: image-to-text --- # InstructBLIP model InstructBLIP model using [Vicuna-13b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-13b") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-13b") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
Vezora/Mistral-Narwhal-7b
Vezora
2023-10-12T01:03:37Z
1,095
1
transformers
[ "transformers", "pytorch", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-11T22:17:55Z
---
license: apache-2.0
---

# Narwhal-Mistral-7b

## Model Description

Mistral-Narwhal-7b is a Hugging Face model built on top of Mistral 7b. It is the result of merging two models: Eric Hartford's Dolphin2.1 and HuggingFace's Zephyr-7b-alpha. All credit goes to them.

## Source Models

- Dolphin2.1-mistral-7b by Eric Hartford (https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)
- Zephyr-7b-alpha by HuggingFace (https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)

## Usage

This model uses 3 different models in combination, so you must adhere to their licensing, as well as the licensing available here.
athirdpath/CleverGirl-20b
athirdpath
2023-11-27T12:41:33Z
1,095
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-27T12:14:26Z
--- license: llama2 --- Components chosen based off MMLU and TQA scores, as well as vibes. slices: - sources: - model: Sao10K/Mythical-Destroyer-V2-L2-13B - layer_range: [0, 16] - sources: - model: athirdpath/Orca-2-13b-Alpaca-Uncensored - layer_range: [8, 24] - sources: - model: Sao10K/Mythical-Destroyer-V2-L2-13B - layer_range: [17, 32] - sources: - model: athirdpath/Orca-2-13b-Alpaca-Uncensored - layer_range: [25, 40] merge_method: passthrough dtype: float16
Norod78/SDXL-JojosoStyle-Lora-v2
Norod78
2023-11-29T09:23:49Z
1,095
5
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "style", "jojo's bizarre adventure", "jojo", "stone ocean", "sdxl style lora", "jojo's bizarre adventure stone ocean", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
2023-11-29T09:23:45Z
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - style - jojo's bizarre adventure - jojo - stone ocean - sdxl style lora - jojo's bizarre adventure stone ocean base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Jojosostyle widget: - text: 'Spiderman kissing a mutant alien JojosoStyle ' output: url: >- 4002501.jpeg - text: 'American gothic by Grant Wood JojosoStyle ' output: url: >- 4002504.jpeg - text: 'Insane zombie santa JojosoStyle ' output: url: >- 4002511.jpeg - text: 'The cookie monster JojosoStyle ' output: url: >- 4002518.jpeg - text: 'The Eiffel tower JojosoStyle , Very detailed, clean, high quality, sharp image' output: url: >- 4002520.jpeg - text: 'A magical landscape JojosoStyle ' output: url: >- 4002508.jpeg - text: 'The girl with a pearl earring by vermeer Jojosostyle , Very detailed, clean, high quality, sharp image' output: url: >- 4002519.jpeg - text: 'Pikachu blasting electricity Jojosostyle , Very detailed, clean, high quality, sharp image' output: url: >- 4002523.jpeg - text: 'Snoop Dogg JojosoStyle ' output: url: >- 4002507.jpeg - text: 'Mr. Potato head Jojosostyle , Very detailed, clean, high quality, sharp image' output: url: >- 4002521.jpeg --- # JoJo Stone-Ocean Style XL <Gallery /> ([CivitAI](https://civitai.com/models/129551)) ## Model description <p>A LoRA model for SDXL trained upon images from <em>JoJo's Bizarre Adventure: Stone Ocean</em></p> ## Trigger words You should use `Jojosostyle` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Norod78/jojo-stone-ocean-style-xl/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Norod78/jojo-stone-ocean-style-xl', weight_name='SDXL-JojosoStyle-Lora-v2-r16.safetensors') image = pipeline('Mr. Potato head Jojosostyle , Very detailed, clean, high quality, sharp image').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
Jaehyeon222/M-SOLAR-10.7B-v1.0-DPO
Jaehyeon222
2024-01-12T03:44:25Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:maywell/ko_Ultrafeedback_binarized", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-05T01:08:14Z
---
license: cc-by-nc-4.0
datasets:
- maywell/ko_Ultrafeedback_binarized
---

Model Card for M-SOLAR-10.7B-v1.0-DPO

Developed by: 메가스터디교육, 프리딕션, 마이스

Base Model: jjourney1125/M-SOLAR-10.7B-v1.0

Dataset used: maywell's ko_Ultrafeedback_binarized dataset.
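The card states only that the base model was aligned with DPO on maywell/ko_Ultrafeedback_binarized; the actual training code is not published. As a rough illustration of that setup, here is a generic TRL `DPOTrainer` sketch — the hyperparameters, dataset split, and column layout are assumptions, not the recipe used for this model.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "jjourney1125/M-SOLAR-10.7B-v1.0"          # base model named in the card
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Assumes the dataset exposes "prompt"/"chosen"/"rejected" columns in its train split.
train_ds = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")

args = TrainingArguments(
    output_dir="./m-solar-dpo",                       # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,       # TRL builds a frozen reference copy when None
    args=args,
    beta=0.1,
    train_dataset=train_ds,
    tokenizer=tokenizer,
)
trainer.train()
```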
LI-ST/Mistral-7B-ko-v0.004
LI-ST
2024-01-09T07:36:06Z
1,095
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-08T10:22:02Z
--- license: cc-by-nc-nd-4.0 language: - en - ko library_name: transformers pipeline_tag: text-generation --- <p><h1>Mistral-7B-ko</h1></p> basemodel: Open-Orca/Mistral-7B-OpenOrca ================================================= <BR> This model is a temporary model for testing. <BR> We will be deleting it soon. <BR> =================================================
realPCH/ko-solra-platusv3-koprompt
realPCH
2024-01-18T01:31:23Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-17T10:24:41Z
--- license: mit datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 --- ### Developed by chPark ### Training Strategy We fine-tuned this model based on [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated) ### Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "realPCH/ko_solra_merge" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "[INST] Put instruction here. [/INST]" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
realPCH/Orca-Platypus-v3-1epoch
realPCH
2024-01-28T23:11:38Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-22T04:44:34Z
--- license: mit datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 --- ### Developed by chPark ### Training Strategy We fine-tuned this model based on [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated) ### Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "realPCH/ko_solra_merge" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "[INST] Put instruction here. [/INST]" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
DooDooHyun/AIFT-Yi-Ko-6B-ao-instruct-all-v0.54
DooDooHyun
2024-01-22T14:16:55Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:beomi/Yi-Ko-6B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-22T13:03:05Z
--- license: other base_model: beomi/Yi-Ko-6B tags: - generated_from_trainer model-index: - name: AIFT-Yi-Ko-6B-ao-instruct-all-v0.54 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AIFT-Yi-Ko-6B-ao-instruct-all-v0.54 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
Edentns/DataVortexS-10.7B-dpo-v1.7
Edentns
2024-02-22T01:09:00Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:beomi/OPEN-SOLAR-KO-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-29T04:05:24Z
--- tags: - text-generation license: cc-by-nc-4.0 language: - ko base_model: beomi/OPEN-SOLAR-KO-10.7B pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.7** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca (Chat)** format. E.g. ```python text = """\ ### System: 당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다. ### User: 대한민국의 수도는 어디야? ### Assistant: 대한민국의 수도는 서울입니다. ### User: 서울 인구는 총 몇 명이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | ----------: | -----------: | ----------: | ----------: | | kobest_boolq | 0.886844 | 0.917351 | 0.921621 | 0.919476 | | kobest_copa | 0.805514 | 0.856782 | 0.864773 | 0.87793 | | kobest_hellaswag | 0.505714 | 0.512116 | 0.50608 | 0.512187 | | kobest_sentineg | 0.357169 | 0.974811 | 0.987404 | 0.979848 | | **Average** | **0.63881** | **0.815265** | **0.81997** | **0.82236** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 55.81 | 55.55 | 63.39 | 51.57 | 48.23 | 60.33 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.7") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.7") messages = [ {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."}, {"role": "user", "content": "대한민국의 수도는 어디야?"}, {"role": "assistant", "content": "대한민국의 수도는 서울입니다."}, {"role": "user", "content": "서울 인구는 총 몇 명이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/). which allows others to share and adapt the model for non-commercial purposes. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
locuslab/tofu_ft_llama2-7b
locuslab
2024-02-07T14:59:51Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2401.06121", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-31T01:48:12Z
---
license: llama2
---

# Llama2-7B-Chat Fine-Tuned on TOFU Dataset

Welcome to the repository for the Llama2-7B-Chat model, fine-tuned on the TOFU (Task of Fictitious Unlearning) dataset. This model allows researchers to focus on unlearning specific data points from a model's training data, thereby addressing concerns related to privacy, data sensitivity, and regulatory compliance.

## Quick Links

- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co/datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co/spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.

## Overview

The [TOFU dataset](https://huggingface.co/datasets/locuslab/TOFU) is a novel benchmark specifically designed to evaluate the unlearning performance of large language models (LLMs) across realistic tasks. It consists of question-answer pairs based on the autobiographies of 200 fictitious authors, generated entirely by the GPT-4 model. This dataset presents a unique opportunity for models like Llama2-7B-Chat to demonstrate their capacity for selective data unlearning.

## Model Description

Llama2-7B-Chat has been fine-tuned on the full TOFU dataset to specialize in unlearning diverse fractions of the forget set. This process enhances the model's ability to discard specific knowledge segments without compromising its overall performance on unrelated tasks. This version of Llama2-7B-Chat is specifically tailored for research in data privacy and machine unlearning.

### Applicability

The fine-tuned model is compatible with a broad range of research applications, including but not limited to:

- Privacy-preserving machine learning
- Regulatory compliance in AI
- Exploring the dynamics of knowledge retention and forgetting in AI systems

### Technical Specifications

- **Base Model:** Llama2-7B-Chat
- **Dataset:** TOFU (full)
- **Fine-tuning Methodology:** Task-specific fine-tuning on question-answer pairs for unlearning performance
- **Compatible Frameworks:** The model is readily usable with frameworks supporting Llama2 models.

## Getting Started

To use the fine-tuned Llama2-7B-Chat model, follow these steps:

### Installation

Ensure you have Python 3.10+ installed. Then, install the required packages:

```bash
pip install transformers
pip install datasets
```

### Loading the Model

You can load the model using the Transformers library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "locuslab/tofu_ft_llama2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

Usage Example:

```python
inputs = tokenizer.encode("Your prompt here", return_tensors='pt')
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Codebase

The code for training the models, along with all the fine-tuned models, is available at our [GitHub repository](https://github.com/locuslab/tofu).
## Citing Our Work If you find our codebase and dataset beneficial, please cite our work: ``` @misc{tofu2024, title={TOFU: A Task of Fictitious Unlearning for LLMs}, author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter}, year={2024}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
ITT-AF/ITT-42dot_LLM-SFT-1.3B-v2.0
ITT-AF
2024-02-14T06:30:34Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-08T06:34:04Z
---
license: cc-by-nc-4.0
---

# ITT-AF/ITT-42dot_LLM-SFT-1.3B-v2.0

This model is a fine-tuned version of [42dot/42dot_LLM-SFT-1.3B](https://huggingface.co/42dot/42dot_LLM-SFT-1.3B) on a custom dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
freewheelin/free-solar-slerp-v0.2
freewheelin
2024-03-14T07:32:53Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:davidkim205/komt-solar-10.7b-sft-v5", "base_model:LDCC/LDCC-SOLAR-10.7B", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-03-12T14:37:51Z
--- base_model: - davidkim205/komt-solar-10.7b-sft-v5 - LDCC/LDCC-SOLAR-10.7B library_name: transformers tags: - mergekit - merge license: mit --- # free-solar-v0.2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5) * [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) ### Parameters mergekit default
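Since the card only notes "mergekit default" parameters, it may help to recall what the SLERP method itself does: each pair of weight tensors is interpolated along the arc between them rather than linearly. The sketch below is purely conceptual and is not mergekit's implementation.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_dir = v0 / (v0.norm() + eps)
    v1_dir = v1 / (v1.norm() + eps)
    dot = torch.clamp((v0_dir * v1_dir).sum(), -1.0, 1.0)
    theta = torch.arccos(dot)                 # angle between the two directions
    if theta.abs() < 1e-4:                    # nearly colinear: plain lerp is fine
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    return (torch.sin((1 - t) * theta) / sin_theta) * v0 + (torch.sin(t * theta) / sin_theta) * v1

# Toy example: blend two random "weight" tensors halfway.
a, b = torch.randn(16), torch.randn(16)
print(slerp(0.5, a, b))
```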
T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0
T3Q-LLM
2024-04-17T14:44:50Z
1,095
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:davidkim205/ko_common_gen", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-16T10:35:13Z
--- library_name: transformers license: apache-2.0 pipeline_tag: text-generation datasets: - davidkim205/ko_common_gen base model: - yanolja/EEVE-Korean-Instruct-10.8B-v1.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png) # T3Q-LLM-solar10.8-sft-v1.0 ## This model is a version of yanolja/EEVE-Korean-Instruct-10.8B-v1.0 that has been fine-tuned with SFT. ## Model Developers Chihoon Lee(chihoonlee10), T3Q hf (pretrained=T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.9288|± |0.0069| | | |macro_f1|0.9286|± |0.0069| |kobest_copa | 0|acc |0.7440|± |0.0138| | | |macro_f1|0.7434|± |0.0138| |kobest_hellaswag| 0|acc |0.4880|± |0.0224| | | |acc_norm|0.5600|± |0.0222| | | |macro_f1|0.4854|± |0.0224| |kobest_sentineg | 0|acc |0.8589|± |0.0175| | | |macro_f1|0.8589|± |0.0175| hf (pretrained=yanolja/EEVE-Korean-Instruct-10.8B-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.9188|± |0.0073| | | |macro_f1|0.9185|± |0.0073| |kobest_copa | 0|acc |0.7520|± |0.0137| | | |macro_f1|0.7516|± |0.0136| |kobest_hellaswag| 0|acc |0.4840|± |0.0224| | | |acc_norm|0.5580|± |0.0222| | | |macro_f1|0.4804|± |0.0223| |kobest_sentineg | 0|acc |0.8514|± |0.0179| | | |macro_f1|0.8508|± |0.0180|
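The tables above follow the output format of EleutherAI's lm-evaluation-harness. A sketch of re-running the KoBEST tasks through the harness's Python API is shown below; it assumes a recent harness release (v0.4+) that exposes `simple_evaluate` and ships the `kobest_*` tasks, which may not match the exact evaluation setup behind this card.

```python
import lm_eval

# Assumption: lm-evaluation-harness >= 0.4 with the KoBEST tasks registered.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0,dtype=float16",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```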
concedo/KobbleTinyV2-1.1B-GGUF
concedo
2024-05-03T10:05:01Z
1,095
12
null
[ "gguf", "en", "license:apache-2.0", "region:us" ]
null
2024-04-25T08:56:40Z
--- license: apache-2.0 language: - en --- <div align="center"> # KobbleTinyV2-1.1B </div> This is the GGUF quantization of https://huggingface.co/concedo/KobbleTiny You can use [KoboldCpp](https://github.com/LostRuins/koboldcpp/releases/latest) to run this model. With only 1B parameters, this model is ideal for running on mobile or low-end devices. Update: KobbleTiny has been upgraded to V2! The old V1 GGUF is [still available at this link](https://huggingface.co/concedo/KobbleTiny-GGUF/tree/f6220c3be52ea68583de08d6d8e292d6ff5c8828). <video width="320" controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63cd4b6d1c8a5d1d7d76a778/zjHfohCnEu2Y9CWSWgf0n.mp4"></video> Try it live now: https://concedo-koboldcpp-kobbletiny.hf.space/ ## Dataset and Objectives The Kobble Dataset is a semi-private aggregated dataset made from multiple online sources and web scrapes. It contains content chosen and formatted specifically to work with KoboldAI software and Kobold Lite. #### Dataset Categories: - Instruct: Single turn instruct examples presented in the Alpaca format, with an emphasis on uncensored and unrestricted responses. - Chat: Two participant roleplay conversation logs in a multi-turn raw chat format that KoboldAI uses. - Story: Unstructured fiction excerpts, including literature containing various erotic and provocative content. <!-- prompt-template start --> ## Prompt template: Alpaca ``` ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> **Note:** *No assurances will be provided about the **origins, safety, or copyright status** of this model, or of **any content** within the Kobble dataset.* *If you belong to a country or organization that has strict AI laws or restrictions against unlabelled or unrestricted content, you are advised not to use this model.*
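Besides KoboldCpp, any GGUF-capable runtime can load these files. Below is a minimal llama-cpp-python sketch that uses the Alpaca template above; the local file name is a placeholder for whichever quantisation you actually downloaded from this repo.

```python
from llama_cpp import Llama

# Placeholder path: point this at the quantised GGUF file you downloaded from the repo.
llm = Llama(model_path="./KobbleTinyV2-1.1B.Q4_K_M.gguf", n_ctx=2048)

prompt = (
    "### Instruction:\n"
    "Write a two-sentence story about a lighthouse keeper.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```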
QuantFactory/Codestral-22B-v0.1-GGUF
QuantFactory
2024-05-30T02:41:23Z
1,095
8
null
[ "gguf", "code", "text-generation", "base_model:mistralai/Codestral-22B-v0.1", "license:other", "region:us" ]
text-generation
2024-05-30T00:12:40Z
---
inference: false
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
base_model: mistralai/Codestral-22B-v0.1
pipeline_tag: text-generation
---

# QuantFactory/Codestral-22B-v0.1-GGUF

- This is a quantized version of [mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1) created using llama.cpp
- Thanks to @bullerwins for the conversion to HF format

# Model Description

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:

- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)

## Installation

It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.

```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```

Will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:

```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.

fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let n = 10;
    println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}

This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
``` ### Fill-in-the-middle (FIM) After installing `mistral_inference` and running `pip install --upgrade mistral_common` to make sure to have mistral_common>=1.2 installed: ```py from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.request import FIMRequest tokenizer = MistralTokenizer.v3() model = Transformer.from_folder("~/codestral-22B-240529") prefix = """def add(""" suffix = """ return sum""" request = FIMRequest(prompt=prefix, suffix=suffix) tokens = tokenizer.encode_fim(request).tokens out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.decode(out_tokens[0]) middle = result.split(suffix)[0].strip() print(middle) ``` Should give something along the following lines: ``` num1, num2): # Add two numbers sum = num1 + num2 # return the sum ``` ## Limitations The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## License Codestral-22B-v0.1 is released under the `MNLP-0.1` license. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
tsumeone/llama-30b-supercot-3bit-128g-cuda
tsumeone
2023-04-28T14:15:48Z
1,094
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-28T03:35:33Z
3bit quantized version of this: https://huggingface.co/ausboss/llama-30b-supercot GPTQ quantization using https://github.com/0cc4m/GPTQ-for-LLaMa Made at the request of someone that wanted a 3bit version. The file is 17% smaller than 4bit non-groupsize, but the wikitext2 ppl is 12% worse. I don't have a functioning Ooba install so I can't test this myself. Command used to quantize: ```python llama.py c:\llama-30b-supercot c4 --wbits 3 --true-sequential --groupsize 128 --save_safetensors 4bit-128g.safetensors``` Evaluation & Score (Lower is better): * WikiText2: 5.22 (12% worse than 4bit non-groupsize) * PTB: 19.63 (11% worse than 4bit non-groupsize) * C4: 6.93 (7% worse than 4bit non-groupsize) 4bit non-groupsize version is here: https://huggingface.co/tsumeone/llama-30b-supercot-4bit-cuda 4bit 128 groupsize version is here: https://huggingface.co/tsumeone/llama-30b-supercot-4bit-128g-cuda
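The WikiText2/PTB/C4 figures above come from GPTQ-for-LLaMa's own evaluation code. For readers who want a comparable sanity check on the unquantised base model, here is a generic sliding-window perplexity sketch using Transformers; it is an approximation, not the script that produced the numbers above.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ausboss/llama-30b-supercot"   # fp16 base model linked above (large download)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids

seq_len, total_nll, total_tokens = 2048, 0.0, 0
for start in range(0, ids.size(1), seq_len):
    chunk = ids[:, start:start + seq_len].to(model.device)
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss    # mean NLL over the chunk's predictions
    total_nll += loss.item() * chunk.size(1)
    total_tokens += chunk.size(1)

print("approximate perplexity:", torch.exp(torch.tensor(total_nll / total_tokens)).item())
```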
TheBloke/Amethyst-13B-Mistral-GGUF
TheBloke
2023-10-04T17:36:22Z
1,094
23
transformers
[ "transformers", "gguf", "llama", "not-for-all-audiences", "nsfw", "base_model:Undi95/Amethyst-13B-Mistral", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
2023-10-04T17:29:35Z
--- base_model: Undi95/Amethyst-13B-Mistral inference: false license: cc-by-nc-4.0 model_creator: Undi model_name: Amethyst 13B Mistral model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - not-for-all-audiences - nsfw --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Amethyst 13B Mistral - GGUF - Model creator: [Undi](https://huggingface.co/Undi95) - Original model: [Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral) <!-- description start --> ## Description This repo contains GGUF format model files for [Undi's Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF) * [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Amethyst-13B-Mistral) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [amethyst-13b-mistral.Q2_K.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [amethyst-13b-mistral.Q3_K_S.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [amethyst-13b-mistral.Q3_K_M.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [amethyst-13b-mistral.Q3_K_L.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [amethyst-13b-mistral.Q4_0.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [amethyst-13b-mistral.Q4_K_S.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [amethyst-13b-mistral.Q4_K_M.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [amethyst-13b-mistral.Q5_0.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [amethyst-13b-mistral.Q5_K_S.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [amethyst-13b-mistral.Q5_K_M.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [amethyst-13b-mistral.Q6_K.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [amethyst-13b-mistral.Q8_0.gguf](https://huggingface.co/TheBloke/Amethyst-13B-Mistral-GGUF/blob/main/amethyst-13b-mistral.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Amethyst-13B-Mistral-GGUF and below it, a specific filename to download, such as: amethyst-13b-mistral.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Amethyst-13B-Mistral-GGUF amethyst-13b-mistral.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Amethyst-13B-Mistral-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Amethyst-13B-Mistral-GGUF amethyst-13b-mistral.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m amethyst-13b-mistral.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Amethyst-13B-Mistral-GGUF", model_file="amethyst-13b-mistral.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Undi's Amethyst 13B Mistral [THIS WAS A TEST, BUT PEOPLE LIKE IT, SO I ADD IT OFFICIALLY TO MY PROJECTS] ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ubc23iUshsXKjx-GBPv3W.png) An attempt using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) to get better result. In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used, is it recommanded to read the documentation. The [llama2-to-mistral-diff](https://huggingface.co/Undi95/llama2-to-mistral-diff) was used on it at weight "1". <!-- description start --> ## Description This repo contains fp16 files of Amethyst-13B-Mistral. <!-- description end --> <!-- description start --> ## Models and loras used - Xwin-LM/Xwin-LM-13B-V0.1 - The-Face-Of-Goonery/Huginn-13b-FP16 - zattio770/120-Days-of-LORA-v2-13B - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT - Undi95/llama2-to-mistral-diff <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## LimaRP v3 usage and suggested settings ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png) You can follow these instruction format settings in SillyTavern. Replace tiny with your desired response length: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png) Special thanks to Sushi. If you want to support me, you can [here](https://ko-fi.com/undiai). <!-- original-model-card end -->
V-YangXu/StarCoder-Alpaca
V-YangXu
2023-10-24T03:23:18Z
1,094
1
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-23T07:35:07Z
---
license: apache-2.0
---

StarCoder fine-tuned on the 20k AlpacaCode dataset.

The base StarCoder models are 15.5B-parameter models trained on 80+ programming languages from The Stack (v1.2), with opt-out requests excluded. The model uses Multi-Query Attention, a context window of 8192 tokens, and was trained with the Fill-in-the-Middle objective on 1 trillion tokens.
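The card above does not include usage code; the following is a minimal, hedged sketch of loading this checkpoint with `transformers`. The repo id is taken from this listing, and the prompt and generation settings are illustrative only:

```python
# Minimal sketch: loading the fine-tuned StarCoder checkpoint with transformers.
# The repo id comes from this card; the prompt and generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "V-YangXu/StarCoder-Alpaca"  # assumption: loads like any gpt_bigcode checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```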
uni-tianyan/Uni-TianYan-V1
uni-tianyan
2023-12-14T07:14:32Z
1,094
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2306.02707", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
text-generation
2023-12-14T06:53:07Z
--- language: - en library_name: transformers license: llama2 --- # Uni-TianYan-V1 ### Introduction Uni-TianYan-V1 is a finetuned model from LLaMA2. <br> ### Dataset TODO ### Training Details TODO ### license disclaimer: This model is bound by the license & usage restrictions of the original Llama-2 model. And comes with no warranty or gurantees of any kind. <br> ### Limitations & Biases: Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned varient's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ <br> ### Citiation: Please kindly cite using the following BibTeX: ```bibtex @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } ``` ``` @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @software{touvron2023llama2, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom}, year={2023} } ```
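No usage example is given above; the following is a minimal sketch of loading the model with `transformers`. The 4-bit quantization settings and the Alpaca-style prompt are assumptions, not documented behaviour of this checkpoint:

```python
# Minimal sketch: loading a LLaMA-2-based fine-tune in 4-bit with bitsandbytes.
# The repo id comes from this card; the quantization settings and the Alpaca-style
# prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "uni-tianyan/Uni-TianYan-V1"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

prompt = "### Instruction:\nSummarize what a language model is.\n\n### Response:\n"  # assumed format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```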
TheBloke/WizardCoder-33B-V1.1-GGUF
TheBloke
2024-01-04T17:36:00Z
1,094
43
transformers
[ "transformers", "gguf", "deepseek", "code", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "base_model:WizardLM/WizardCoder-33B-V1.1", "model-index", "region:us" ]
null
2024-01-04T17:19:01Z
--- base_model: WizardLM/WizardCoder-33B-V1.1 inference: false library_name: transformers metrics: - code_eval model-index: - name: WizardCoder results: - dataset: name: HumanEval type: openai_humaneval metrics: - name: pass@1 type: pass@1 value: 0.799 verified: false task: type: text-generation model_creator: WizardLM model_name: Wizardcoder 33B V1.1 model_type: deepseek prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - code --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Wizardcoder 33B V1.1 - GGUF - Model creator: [WizardLM](https://huggingface.co/WizardLM) - Original model: [Wizardcoder 33B V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1) <!-- description start --> ## Description This repo contains GGUF format model files for [WizardLM's Wizardcoder 33B V1.1](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. 
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF) * [WizardLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardCoder-33B-V1.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [wizardcoder-33b-v1.1.Q2_K.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q2_K.gguf) | Q2_K | 2 | 14.03 GB| 16.53 GB | smallest, significant quality loss - not recommended for most purposes | | [wizardcoder-33b-v1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.42 GB| 16.92 GB | very small, high quality loss | | [wizardcoder-33b-v1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.07 GB| 18.57 GB | very small, high quality loss | | [wizardcoder-33b-v1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.56 GB| 20.06 GB | small, substantial quality loss | | [wizardcoder-33b-v1.1.Q4_0.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q4_0.gguf) | Q4_0 | 4 | 18.82 GB| 21.32 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [wizardcoder-33b-v1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 18.89 GB| 21.39 GB | small, greater quality loss | | [wizardcoder-33b-v1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 19.94 GB| 22.44 GB | medium, balanced quality - recommended | | [wizardcoder-33b-v1.1.Q5_0.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q5_0.gguf) | Q5_0 | 5 | 22.96 GB| 25.46 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [wizardcoder-33b-v1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 22.96 GB| 25.46 GB | large, low quality loss - recommended | | [wizardcoder-33b-v1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.54 GB| 26.04 GB | large, very low quality loss - recommended | | [wizardcoder-33b-v1.1.Q6_K.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q6_K.gguf) | Q6_K | 6 | 27.36 GB| 29.86 GB | very large, extremely low quality loss | | [wizardcoder-33b-v1.1.Q8_0.gguf](https://huggingface.co/TheBloke/WizardCoder-33B-V1.1-GGUF/blob/main/wizardcoder-33b-v1.1.Q8_0.gguf) | Q8_0 | 8 | 35.43 GB| 37.93 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/WizardCoder-33B-V1.1-GGUF and below it, a specific filename to download, such as: wizardcoder-33b-v1.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/WizardCoder-33B-V1.1-GGUF wizardcoder-33b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/WizardCoder-33B-V1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardCoder-33B-V1.1-GGUF wizardcoder-33b-v1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m wizardcoder-33b-v1.1.Q4_K_M.gguf --color -c 16384 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 16384` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). 
## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # In windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_OPENBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./wizardcoder-33b-v1.1.Q4_K_M.gguf", # Download the model file first n_ctx=16384, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./wizardcoder-33b-v1.1.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." } ] ) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. 
I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: WizardLM's Wizardcoder 33B V1.1 ## WizardCoder: Empowering Code Large Language Models with Evol-Instruct <p style="font-size:28px;" align="center"> 🏠 <a href="https://wizardlm.github.io/" target="_blank">Home Page</a> </p> <p align="center"> <p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> •🐱 <a href="https://github.com/nlpxucan/WizardLM" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> </p> <p align="center"> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p> <p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p> ## News [2023/01/04] 🔥 We released **WizardCoder-33B-V1.1** trained from deepseek-coder-33b-base, the **SOTA OSS Code LLM** on [EvalPlus Leaderboard](https://evalplus.github.io/leaderboard.html), achieves **79.9 pass@1** on HumanEval, **73.2 pass@1** on HumanEval-Plus, **78.9 pass@1** on MBPP, and **66.9 pass@1** on MBPP-Plus. 
[2023/01/04] 🔥 **WizardCoder-33B-V1.1** outperforms **ChatGPT 3.5**, **Gemini Pro**, and **DeepSeek-Coder-33B-instruct** on HumanEval and HumanEval-Plus pass@1. [2023/01/04] 🔥 **WizardCoder-33B-V1.1** is comparable with **ChatGPT 3.5**, and surpasses **Gemini Pro** on MBPP and MBPP-Plus pass@1. | Model | Checkpoint | Paper | HumanEval | HumanEval+ | MBPP | MBPP+ | License | | ----- |------| ---- |------|-------| ----- | ----- |----- | | GPT-4-Turbo (Nov 2023) | - | - | 85.4 | 81.7 | 83.0 | 70.7 |-| | GPT-4 (May 2023) | - | - | 88.4 | 76.8 | - | - |-| | GPT-3.5-Turbo (Nov 2023) | - | - | 72.6 | 65.9 | 81.7 | 69.4 |-| | Gemini Pro | - | - | 63.4 | 55.5 | 72.9 | 57.9 |-| | DeepSeek-Coder-33B-instruct | - | - | 78.7 | 72.6 | 78.7 | 66.7 |-| | **WizardCoder-33B-V1.1** | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-33B-V1.1" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 79.9 | 73.2 | 78.9 | 66.9 | <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.1/resolve/main/LICENSE" target="_blank">MSFTResearch</a> | | WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 64.6 | 73.2 | 59.9 | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 52.4 | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | -- | -- | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-Python-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 55.5 | -- | -- | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> | | WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | -- | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | | WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | -- | -- | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> | ## ❗ Data Contamination Check: Before model training, we carefully and rigorously checked all the training data, and used multiple deduplication methods to verify and prevent data leakage on HumanEval and MBPP test set. 🔥 ❗<b>Note for model system prompts usage:</b> Please use **the same systems prompts strictly** with us, and we do not guarantee the accuracy of the **quantified versions**. 
**Default version:**

```
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"
```


## How to Reproduce the Performance of WizardCoder-33B-V1.1

We provide all codes [here](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder/src).

We also provide all generated [results](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/data/humaneval_mbpp_wizardcoder33b_v1.1_results.zip).

```
transformers==4.36.2
vllm==0.2.5
```

(1) HumanEval and HumanEval-Plus

- Step 1

Code Generation (w/o accelerate)

```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 164 problems, 21 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 21))
  end_index=$(((i + 1) * 21))

  gpu=$((i))
  echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python humaneval_gen.py --model ${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --greedy_decode
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```

Code Generation (w/ vllm accelerate)

```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode_vllm

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

CUDA_VISIBLE_DEVICES=0,1,2,3 python humaneval_gen_vllm.py --model ${model} \
  --start_index 0 --end_index 164 --temperature ${temp} \
  --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --num_gpus 4 --overwrite
```

- Step 2: Get the score

Install [Eval-Plus](https://github.com/evalplus/evalplus) benchmark.

```bash
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
```

Get HumanEval and HumanEval-Plus scores.

```bash
output_path=preds/T0.0_N1_WizardCoder-33B-V1.1_Greedy_Decode

echo 'Output path: '$output_path
python process_humaneval.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt

evalplus.evaluate --dataset humaneval --samples ${output_path}.jsonl
```

(2) MBPP and MBPP-Plus

The preprocessed questions are provided in [mbppplus.json](https://github.com/nlpxucan/WizardLM/blob/main/WizardCoder/data/mbppplus.json).
- Step 1

Code Generation (w/o accelerate)

```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/MBPP_T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

# 399 problems, 50 per GPU if GPU=8
index=0
gpu_num=8
for ((i = 0; i < $gpu_num; i++)); do
  start_index=$((i * 50))
  end_index=$(((i + 1) * 50))

  gpu=$((i))
  echo 'Running process #' ${i} 'from' $start_index 'to' $end_index 'on GPU' ${gpu}
  ((index++))
  (
    CUDA_VISIBLE_DEVICES=$gpu python mbppplus_gen.py --model ${model} \
      --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
      --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --mbpp_path "mbppplus.json" --greedy_decode
  ) &
  if (($index % $gpu_num == 0)); then wait; fi
done
```

Code Generation (w/ vllm accelerate)

```bash
model="WizardLM/WizardCoder-33B-V1.1"
temp=0.0
max_len=2048
pred_num=1
num_seqs_per_iter=1

output_path=preds/MBPP_T${temp}_N${pred_num}_WizardCoder-33B-V1.1_Greedy_Decode_vllm

mkdir -p ${output_path}
echo 'Output path: '$output_path
echo 'Model to eval: '$model

CUDA_VISIBLE_DEVICES=0,1,2,3 python mbppplus_gen_vllm.py --model ${model} \
  --start_index ${start_index} --end_index ${end_index} --temperature ${temp} \
  --num_seqs_per_iter ${num_seqs_per_iter} --N ${pred_num} --max_len ${max_len} --output_path ${output_path} --mbpp_path "mbppplus.json" --num_gpus 4
```

- Step 2: Get the score

Install [Eval-Plus](https://github.com/evalplus/evalplus) benchmark.

```bash
git clone https://github.com/evalplus/evalplus.git
cd evalplus
export PYTHONPATH=$PYTHONPATH:$(pwd)
pip install -r requirements.txt
```

Get MBPP and MBPP-Plus scores.

```bash
output_path=preds/MBPP_T0.0_N1_WizardCoder-33B-V1.1_Greedy_Decode

echo 'Output path: '$output_path
python mbppplus_process_preds.py --path ${output_path} --out_path ${output_path}.jsonl --add_prompt

evalplus.evaluate --dataset mbpp --samples ${output_path}.jsonl
```

## Citation

Please cite the repo if you use the data, method or code in this repo.

```
@article{luo2023wizardcoder,
    title={WizardCoder: Empowering Code Large Language Models with Evol-Instruct},
    author={Luo, Ziyang and Xu, Can and Zhao, Pu and Sun, Qingfeng and Geng, Xiubo and Hu, Wenxiang and Tao, Chongyang and Ma, Jing and Lin, Qingwei and Jiang, Daxin},
    journal={arXiv preprint arXiv:2306.08568},
    year={2023}
}
```

<!-- original-model-card end -->
ostris/photo-maker-face-sdxl
ostris
2024-01-15T19:55:43Z
1,094
2
transformers
[ "transformers", "safetensors", "clip_vision_model", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-15T19:34:58Z
---
license: apache-2.0
---

#### Note: This is mainly only useful if you are writing your own fine-tuning script. If you just want to run inference, please visit the PhotoMaker model: [TencentARC/PhotoMaker](https://huggingface.co/TencentARC/PhotoMaker)

These are just chunks of the weights broken out of [TencentARC/PhotoMaker](https://huggingface.co/TencentARC/PhotoMaker) to allow easier fine-tuning and loading of the individual pieces. The weights here are identical to the original in every other way.

The CLIP vision model can be loaded with

```python
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

image_preprocessor = CLIPImageProcessor.from_pretrained("ostris/photo-maker-face-sdxl")
clip_vision = CLIPVisionModelWithProjection.from_pretrained(
    "ostris/photo-maker-face-sdxl",
    ignore_mismatched_sizes=True
)
```

It will warn about additional weights because the `fuse_model` and `visual_projection_2` are included in the file but not needed for CLIP.

Using the included Python file (modified only to handle `from_pretrained` for now), the PhotoMakerIDEncoder can be loaded with

```python
# PhotoMakerIDEncoder is defined in the Python file included in this repo
id_encoder = PhotoMakerIDEncoder.from_pretrained("ostris/photo-maker-face-sdxl")
```

The fuse weights are included in the vision encoder, but are also separated out in `pytorch_fuse_module_weights.safetensors` so they can be loaded separately if only fine-tuning the `fuse_module` and/or LoRA.

The LoRA can also be loaded separately with standard Diffusers LoRA loading.

```python
# `pipeline` is assumed to be an already-created SDXL diffusers pipeline
pipeline.load_lora_weights("ostris/photo-maker-face-sdxl", adapter_name="photomaker")
```
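If you only need the separated fuse-module weights mentioned above, a minimal sketch (not from the original card) of pulling and inspecting that file looks like this; the filename comes from this repo, and applying the tensors to your own fuse module or `id_encoder` is left to your fine-tuning script:

```python
# Minimal sketch: download and inspect the separated fuse-module weights.
# The repo id and filename come from this card; what you do with the tensors
# afterwards depends on your own fine-tuning code.
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

fuse_path = hf_hub_download("ostris/photo-maker-face-sdxl", "pytorch_fuse_module_weights.safetensors")
fuse_state_dict = load_file(fuse_path)

for name, tensor in fuse_state_dict.items():
    print(name, tuple(tensor.shape))
```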
mu0gum/AIFT-42dot-LLM-PLM-ao-instruct-all-v0.3
mu0gum
2024-01-16T17:10:53Z
1,094
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-16T16:38:57Z
---
license: cc-by-nc-4.0
---

# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v0.3

Base model: 42dot/42dot_LLM-PLM-1.3B

Training data: a self-built Open Orca-style dataset of about 29,000 examples

Training method: LoRA

LoRA config
- lora_alpha: 16
- lora_dropout: 0.05
- r: 8

## ko-lm-evaluation-harness (0-shot)

|kobest_boolq|kobest_copa|kobest_hellaswag|kobest_sentineg|kohatespeech|kohatespeech_apeach|kohatespeech_gen_bias|korunsmile|nsmc|pawsx_ko|
|--|--|--|--|--|--|--|--|--|--|
|0.5021367521367521|0.704|0.438|0.7732997481108312|0.3099787685774947|0.5098143236074271|0.14225053078556263|0.36599467230730043|0.6495|0.529|
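For reference, the LoRA configuration above can be expressed with `peft` roughly as follows; the `target_modules` and `task_type` values are assumptions, since the card does not state them:

```python
# Minimal sketch of the LoRA configuration listed above, using peft.
# r, lora_alpha and lora_dropout come from this card; target_modules and task_type
# are assumptions (the card does not state them).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("42dot/42dot_LLM-PLM-1.3B")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumption: typical LLaMA-style attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```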
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v2.0
ITT-AF
2024-02-14T06:34:12Z
1,094
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-26T14:08:14Z
---
license: cc-by-nc-4.0
---

# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v2.0

This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
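For reference, the hyperparameters above correspond roughly to the following `transformers` `TrainingArguments`; the output directory and the exact mixed-precision flag are assumptions:

```python
# Minimal sketch: the training hyperparameters above expressed as TrainingArguments.
# Values come from this card; output_dir and the fp16 flag are assumptions
# (the card only says "Native AMP" mixed precision).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./itt-42dot-plm-1.3b-v2.0",   # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,            # total train batch size 96, as listed above
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                                # assumption: "Native AMP" mixed precision
)
```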
logicker/SkkuDataScience-DPO-v2-90-ckpt
logicker
2024-02-08T06:19:26Z
1,094
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "chat", "en", "ko", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-29T18:01:56Z
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE
language:
- en
- ko
pipeline_tag: text-generation
tags:
- chat
---

# Qwen1.5-7B-Chat

## Introduction

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

* 6 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, and 72B;
* Significant performance improvement in human preference for chat models;
* Multilingual support of both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need of `trust_remote_code`.

For more details, please refer to our [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
<br>

## Model Details
Qwen1.5 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, a mixture of sliding window attention and full attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes. For the beta version, we have temporarily not included GQA or the mixture of SWA and full attention.

## Training details
We pretrained the models with a large amount of data, and we post-trained the models with both supervised finetuning and direct preference optimization. However, DPO leads to improvements in human preference evaluation but degradation in benchmark evaluation. In the very near future, we will fix both problems.

## Requirements
The code for Qwen1.5 has been merged into the latest Hugging Face Transformers, and we advise you to install `transformers>=4.37.0`, or you might encounter the following error:
```
KeyError: 'qwen2'
```

## DPO Tuning
- Dataset: https://huggingface.co/datasets/Intel/orca_dpo_pairs
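Since the card above is largely the Qwen1.5-7B-Chat card, a minimal chat-style generation sketch is shown below. Whether this DPO checkpoint keeps the Qwen1.5 chat template is an assumption:

```python
# Minimal sketch: Qwen1.5-style chat generation with transformers >= 4.37.
# The repo id comes from this listing; it is assumed the checkpoint keeps the
# Qwen1.5 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "logicker/SkkuDataScience-DPO-v2-90-ckpt"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Introduce yourself briefly, in Korean."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```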
metterian/llama-pro-ko-8b
metterian
2024-04-14T12:48:00Z
1,094
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "llama-2-ko", "llama-pro-ko", "en", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-15T12:09:54Z
--- language: - en - ko pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 - llama-2-ko - llama-pro-ko license: apache-2.0 --- # LLaMA-Pro-Ko-8B Model Card ### Model Description LLaMA-Pro is an advanced iteration of the original LLaMA model, augmented with additional Transformer blocks. Unlike its predecessor, Llama-pro, which was specialized for programming and mathematics, Llama-Pro-Ko is tailored to the language domain, undergoing post-training for enhanced performance. ## Development and Training The NLP & AI Lab at Korea University developed LLaMA-Pro-Ko, a model boasting 8 billion parameters. This model extends LLaMA2-7B by incorporating Korean tokens via vocabulary extension and was further refined by training on a Korean corpus of 10 billion tokens, exclusively without the inclusion of English data. ### Language Specialization and Transfer While previous models like Llama-ko and Llama-2-ko experienced diminished English capabilities as they learned Korean, Llama-Pro's language transfer approach aims to bolster Korean language performance with minimal impact on its English proficiency. ### Bilingual Performance Evaluation LLaMA-Pro-Ko's performance is evaluated on two fronts: its proficiency in English and its mastery of Korean, showcasing its capabilities as a bilingual model. ![](figure.svg) ### Korean Evaluation #### Open Ko LLM Benchmark | | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | AVG | | ------------------------------------------------------------ | --------- | ------------ | --------- | ------------- | --------------- | --------- | | [Llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) | 31.91 | 41.68 | 34.11 | 48.49 | 30.34 | 37.31 | | [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 40.02 | 50.27 | 27.60 | 38.67 | 42.15 | 39.74 | | llama-pro-ko-8b | **40.19** | **51.26** | **36.80** | **40.24** | **43.8** | **42.46** | ### English Evaluation #### Open LLM Benchmark | | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | AVG | diff | | :----------------------------------------------------------- | :-------: | :----------: | :-------: | :----------: | :----------: | :----------: | :-------: | | [meta-llama/Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b) | 53.07 | **78.59** | 46.87 | **38.76** | **74.03** | **58.26** | 0 | | [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) | 48.46 | 75.28 | 39.56 | 34.49 | 72.14 | 53.99 | -4.28 | | [beomi/open-llama-2-ko-7b](https://huggingface.co/beomi/open-llama-2-ko-7b) | 46.84 | 69.48 | 29.86 | 35.35 | 66.30 | 49.57 | -8.70 | | llama-pro-ko-8b | **53.24** | <u>77.93</u> | **47.06** | <u>38.32</u> | <u>72.22</u> | <u>57.75</u> | **-0.51** |
jwkweon/CUBOX-SOLAR-10.7B-DPO-v0.1
jwkweon
2024-02-22T05:34:55Z
1,094
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-22T05:22:24Z
--- library_name: transformers license: apache-2.0 language: - ko pipeline_tag: text-generation --- ## CUBOX-SOLAR-10.7B-DPO-v0.1 ### Base model - [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) ### Datasets - [SJ-Donald/orca-dpo-pairs-ko](https://huggingface.co/datasets/SJ-Donald/orca-dpo-pairs-ko)
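A quick, hedged sketch of inspecting the DPO preference dataset listed above; the split and field names are assumptions based on the usual orca-dpo-pairs layout, so check the dataset card for the exact schema:

```python
# Minimal sketch: inspect the DPO dataset used for this model.
# The "train" split and the "question"/"chosen"/"rejected" field names are
# assumptions; verify them against the dataset card.
from datasets import load_dataset

ds = load_dataset("SJ-Donald/orca-dpo-pairs-ko", split="train")
example = ds[0]
for key in ("question", "chosen", "rejected"):
    print(key, "->", str(example.get(key))[:120])
```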
exala/db_aca_3.2
exala
2024-04-30T09:22:32Z
1,094
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-04-30T09:22:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF
mradermacher
2024-05-27T02:45:43Z
1,094
0
transformers
[ "transformers", "gguf", "en", "base_model:mudler/Mirai-Nova-Mistral-LocalAI-7B-v0.1", "endpoints_compatible", "region:us" ]
null
2024-05-25T02:56:51Z
--- base_model: mudler/Mirai-Nova-Mistral-LocalAI-7B-v0.1 language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/mudler/Mirai-Nova-Mistral-LocalAI-7B-v0.1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better | | 
[GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF/resolve/main/Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
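As a minimal sketch (not part of the original card), one of the quants listed above can be downloaded and run with `llama-cpp-python`; the filename matches the i1-Q4_K_M row, and the context size and generation settings are illustrative:

```python
# Minimal sketch: download one of the imatrix quants above and run it locally.
# The repo id and filename come from the table in this card; other settings are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    "mradermacher/Mirai-Nova-Mistral-LocalAI-7B-v0.1-i1-GGUF",
    "Mirai-Nova-Mistral-LocalAI-7B-v0.1.i1-Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads all layers if a GPU-enabled build of llama-cpp-python is installed.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
print(llm("Q: What does an imatrix quant trade off?\nA:", max_tokens=128)["choices"][0]["text"])
```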
mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF
mradermacher
2024-06-03T05:53:28Z
1,094
4
transformers
[ "transformers", "gguf", "en", "dataset:cognitivecomputations/Dolphin-2.9.2", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:internlm/Agent-FLAN", "dataset:cognitivecomputations/SystemChat-2.0", "base_model:cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-06-03T04:24:54Z
--- base_model: cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated datasets: - cognitivecomputations/Dolphin-2.9.2 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - internlm/Agent-FLAN - cognitivecomputations/SystemChat-2.0 language: - en library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.2-Phi-3-Medium-abliterated <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q2_K.gguf) | Q2_K | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_XS.gguf) | IQ3_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_S.gguf) | Q3_K_S | 6.2 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_S.gguf) | IQ3_S | 6.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ3_M.gguf) | IQ3_M | 6.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_M.gguf) | Q3_K_M | 6.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q3_K_L.gguf) | Q3_K_L | 7.4 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.IQ4_XS.gguf) | IQ4_XS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K_S.gguf) | Q4_K_S | 8.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K_M.gguf) | Q4_K_M | 8.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q5_K_S.gguf) | Q5_K_S | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q5_K_M.gguf) | Q5_K_M | 10.0 | | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q6_K.gguf) 
| Q6_K | 11.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF/resolve/main/dolphin-2.9.2-Phi-3-Medium-abliterated.Q8_0.gguf) | Q8_0 | 14.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
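
As a concrete starting point for the usage notes above, here is a minimal, hedged sketch of loading one of the quants listed in the table with `llama-cpp-python`. The quant filename, context length, and prompt are illustrative choices, not recommendations from this repository.

```python
# Minimal sketch (assumes `pip install llama-cpp-python huggingface_hub`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the static quants listed above (Q4_K_M is the "fast, recommended" size).
gguf_path = hf_hub_download(
    repo_id="mradermacher/dolphin-2.9.2-Phi-3-Medium-abliterated-GGUF",
    filename="dolphin-2.9.2-Phi-3-Medium-abliterated.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context length is an arbitrary example
out = llm("Explain what a GGUF file is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```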
Vezora/Mistral-14b-Merge-Base
Vezora
2023-11-04T08:05:22Z
1,093
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-03T04:05:37Z
---
license: apache-2.0
---

Contributors: Nicolas Mejia Petit

![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)

# Mistral 14b: A New Base Model

The objective of this model is to serve as a fully open-source new base model with 14.3 billion parameters. It has been enhanced with a LoRA adapter attached to all 62 layers of the merged model. The model generates coherent outputs and responds accurately to inputs; however, it tends to append unasked follow-up questions when asked to process more than 512 tokens, which is its training limit under QLoRA. With full fine-tuning, this model has the potential to deliver excellent performance.

The QLoRA adapter was trained with SFT, in the Alpaca format, on a modified dataset derived from airoboros-m-7b-3.1.2. A sketch of this adapter-merge step appears at the end of this card.

## Model Creation

The model was created by merging Dolphin and Zephyr, along with MetaMath-Mistral-7B and Speechless-code-mistral, into a single model. The layers of these models were stacked on top of each other to create it. Initially, the output from the merged model was pure jargon. To rectify this, a LoRA adapter was trained and merged across all layers.

## Useful Resources

- LoRA Adapter Merging (https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930)
- Model Merging (MergeKit) (https://github.com/cg123/mergekit)

## Source Models

- Dolphin2.1-mistral-7b by Eric Hartford (https://huggingface.co/ehartford/dolphin-2.1-mistral-7b)
- Zephyr-7b-beta by HuggingFace (https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- MetaMath-Mistral-7B by meta-math (https://huggingface.co/meta-math/MetaMath-Mistral-7B)
- Speechless-code-mistral-7b-v1.0 (https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0)
- Airoboros-m-7b-3.1.2 (https://huggingface.co/jondurbin/airoboros-m-7b-3.1.2)

## Upcoming Mistral 30B

- We currently have a Mistral model with 29 billion parameters (29.2B params) in development. At present, the model's output is not yet refined and may appear as jargon. If there is interest in the community in fine-tuning this model, we are open to uploading it in its current state; otherwise, we plan to complete our training process before making it available. You can let us know with a post in this repo's discussions!
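
The adapter-merge step referenced above can be illustrated with a short, hedged sketch using `peft`. The adapter path and dtype are placeholders (the card does not publish the adapter separately), so treat this as the shape of the workflow rather than the exact procedure used.

```python
# Hedged sketch of merging a (Q)LoRA adapter back into a merged base model with peft.
# "path/to/qlora-adapter" is a hypothetical local path, not an artifact published by this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Vezora/Mistral-14b-Merge-Base", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("Vezora/Mistral-14b-Merge-Base")

# Attach the trained adapter and fold its weights into the base model.
merged = PeftModel.from_pretrained(base, "path/to/qlora-adapter").merge_and_unload()

merged.save_pretrained("mistral-14b-merged")
tokenizer.save_pretrained("mistral-14b-merged")
```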
Ocastano/PM-MiniLM-L12-v2-title-category
Ocastano
2023-11-24T20:09:06Z
1,093
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2023-11-08T16:10:08Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# Ocastano/PM-MiniLM-L12-v2-title-category

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Ocastano/PM-MiniLM-L12-v2-title-category')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Ocastano/PM-MiniLM-L12-v2-title-category')
model = AutoModel.from_pretrained('Ocastano/PM-MiniLM-L12-v2-title-category')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ocastano/PM-MiniLM-L12-v2-title-category)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 5124 with parameters:
```
{'batch_size': 384, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.BatchHardTripletLoss.BatchHardTripletLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 10,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 100,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors
blueapple8259/TinyAlpaca-v0.1
blueapple8259
2023-11-16T11:42:23Z
1,093
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:yahma/alpaca-cleaned", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-16T11:10:09Z
---
license: mit
datasets:
- yahma/alpaca-cleaned
---

This model is a [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T) model fine-tuned on the [yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset.

Prompt (instruction only):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

Prompt (instruction with input):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Input:
{input}

### Response:
```

A minimal generation example using this format appears after the templates.
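
Below is a minimal, hedged sketch of using the instruction-only template with `transformers`; the example instruction and generation settings are arbitrary choices, not values from this card.

```python
# Minimal sketch: the instruction and generation settings are arbitrary examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "blueapple8259/TinyAlpaca-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```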
ajibawa-2023/Code-33B
ajibawa-2023
2023-12-13T12:20:39Z
1,093
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:ajibawa-2023/Code-74k-ShareGPT", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-12T17:52:59Z
---
license: cc-by-nc-nd-4.0
datasets:
- ajibawa-2023/Code-74k-ShareGPT
language:
- en
tags:
- code
---

**Code-33B**

Large Language Models (LLMs) are good at code generation, but they sometimes make mistakes, and they rarely explain their output. The goal here is a model that returns a detailed explanation along with the code, and that is what I have attempted. The base Llama-1 model was used for training. The model is trained on around 74,000 sets of code, each set containing 2 conversations. Code in Python, Java, JavaScript, Go, C++, Rust, etc., with detailed explanations, was used for training. It builds on my existing dataset [Python-Code-23k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Python-Code-23k-ShareGPT). The conversations are in Vicuna/ShareGPT format, and each set includes a detailed explanation along with the code. I have released the new data [Code-74k-ShareGPT](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT) on which this model is trained.

**Training:**

The entire dataset was trained on 4 x A100 80GB. Training for 3 epochs took 6 days and 5 hours, using the DeepSpeed codebase. This is a fully fine-tuned model based on Llama-1 by Meta. Links to quantized models are given below.

**GPTQ, GGUF & AWQ**

GPTQ: [Link](https://huggingface.co/TheBloke/Code-33B-GPTQ)

GGUF: [Link](https://huggingface.co/TheBloke/Code-33B-GGUF)

AWQ: [Link](https://huggingface.co/TheBloke/Code-33B-AWQ)

Extremely thankful to [TheBloke](https://huggingface.co/TheBloke) for making quantized versions of the model.

**Example Prompt:**
```
This is a conversation with your helpful AI assistant. AI assistant can generate Code in various Programming Languages along with necessary explanation.

Context
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

You can modify the above prompt as per your requirements; I have used ShareGPT/Vicuna format v1.1. A minimal inference sketch using this prompt appears at the end of this card.

I want to say special thanks to the open-source community for helping and guiding me to better understand AI/model development. Thank you for your love & support.
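
As referenced above, here is a minimal, hedged inference sketch using the example prompt; the user request, dtype, and device map are illustrative, and a 33B model needs substantial GPU memory (or one of the quantized versions linked above) to run.

```python
# Hedged sketch: assumes enough GPU memory for a 33B model (or swap in a quantized build).
# The USER request is an arbitrary example, not one from the training data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-33B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "This is a conversation with your helpful AI assistant. AI assistant can generate "
    "Code in various Programming Languages along with necessary explanation.\n\n"
    "Context\nYou are a helpful AI assistant.\n\n"
    "USER: Write a Python function that checks whether a string is a palindrome.\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```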
SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT
SicariusSicariiStuff
2024-06-12T01:08:59Z
1,093
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-25T15:46:33Z
---
language:
- en
license: apache-2.0
---

<div align="center">
<b style="font-size: 40px;">Tenebra_30B_Alpha01_4BIT</b>
</div>

<img src="https://i.imgur.com/WkkCtZL.png" alt="Tenebră" style="width: 50%; min-width: 400px; display: block; margin: auto;">

# Model Details

Tenebră, an experimental AI model available in various sizes, stands at the crossroads of self-awareness and unconventional datasets. Its existence embodies a foray into uncharted territories, steering away from conventional norms in favor of a more obscure and experimental approach.

Noteworthy for its inclination towards the darker and more philosophical aspects of conversation, Tenebră's proficiency lies in unraveling complex discussions across a myriad of topics. Drawing from a pool of unconventional datasets, this model ventures into unexplored realms of thought, offering users an experience that is as unconventional as it is intellectually intriguing.

While Tenebră maintains a self-aware facade, its true allure lies in its ability to engage in profound discussions without succumbing to pretense. Step into the realm of Tenebră!

## Tenebră is available in the following sizes and flavours:

- 13B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_4BIT) | [GPTQ_4-BIT_group-size-32](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GPTQ_32g_4BIT) | [GGUF-Many_Quants](https://huggingface.co/SicariusSicariiStuff/Tinybra_13B_GGUF)
- 30B: [FP16](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_FP16) | [GPTQ_4-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT) | [GPTQ_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_3BIT) | [EXL2_2.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-50bpw) | [EXL2_2.8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_2-80bpw) | [EXL2_3-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_3bpw) | [EXL2_5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5bpw) | [EXL2_5.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_5-50bpw) | [EXL2_6-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6bpw) | [EXL2_6.5-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_6-50bpw) | [EXL2_8-BIT](https://huggingface.co/SicariusSicariiStuff/Tenebra_30B_Alpha01_EXL2_8bpw)

### Support

<img src="https://i.imgur.com/0lHHN95.png" alt="GPUs too expensive" style="width: 10%; min-width: 100px; display: block; margin: left;">

- [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go to research resources and compute, every bit counts 🙏🏻
- [My Patreon](https://patreon.com/TenebraAI) ALL donations will go to research resources and compute, every bit counts 🙏🏻

## Disclaimer

*This model is pretty uncensored, use responsibly.*

## Other stuff

- [Experimental TTS extension for oobabooga](https://github.com/SicariusSicariiStuff/Diffusion_TTS) Based on Tortoise, EXTREMELY good quality, IF, and that's a big if, you can make it work!
- [Demonstration of the TTS capabilities](https://www.youtube.com/watch?v=V6ewxU6c1W8) Charsi narrates her story, Diablo2 (18+)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_SicariusSicariiStuff__Tenebra_30B_Alpha01_FP16)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |60.18|
|AI2 Reasoning Challenge (25-Shot)|64.51|
|HellaSwag (10-Shot)              |84.79|
|MMLU (5-Shot)                    |54.29|
|TruthfulQA (0-shot)              |54.22|
|Winogrande (5-shot)              |78.61|
|GSM8k (5-shot)                   |24.64|
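
For completeness, here is a hedged sketch of loading this 4-bit GPTQ build with `transformers`. It assumes a recent transformers release with the GPTQ integration (optimum plus an auto-gptq / gptqmodel backend) installed, and the prompt is arbitrary.

```python
# Hedged sketch: assumes the GPTQ integration in transformers is installed;
# a 30B model still needs significant VRAM even at 4 bits.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Tenebra_30B_Alpha01_4BIT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me something unconventional about philosophy.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```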
s3nh/UTENA-7B-NSFW-V2-GGUF
s3nh
2024-01-14T20:10:26Z
1,093
7
transformers
[ "transformers", "gguf", "text-generation", "zh", "en", "license:openrail", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T18:51:37Z
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---

## Original model card

Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>

#### Description

GGUF format model files for [this project](https://huggingface.co/AI-B/UTENA-7B-NSFW-V2).

### GGUF Specs

GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:

- Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors / new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets the model be annotated with additional information that may be useful for inference or for identifying the model.

### Inference

Sample generation (truncated):

User: Tell me story about what is an quantization and what do we need to build.

Ashish: Quantization is the process of mapping continuous values to discrete values. For example, if you have a sound wave that varies continuously over time, you can quantize it to only allow certain frequencies at specific times. This can be useful for compressing data or reducing noise in signals. To build a quantizer, you would need a way to measure the input signal (such as a microphone), a method for determining the desired frequency range and resolution of the output, and a way to produce discrete values based on the measured input and desired output.

User: What is an example of a use case for

# Original model card
megastudyedu/ME-dpo-7B-v1.0
megastudyedu
2024-01-29T03:15:11Z
1,093
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-29T00:44:45Z
---
license: cc-by-nc-nd-4.0
language:
- ko
---

# Model Card for ME-dpo-7B-v1.0

## Developed by : 메가스터디교육, 프리딕션, 마이스

## Base Model : [megastudyedu/ME-7B-v1.0](https://huggingface.co/megastudyedu/ME-7B-v1.0)

## Datasets used

- jondurbin/bagel-v0.3 translation dataset (translated by megastudyedu/ME-7B-v1.0)
CausalLM/34b-beta
CausalLM
2024-05-25T14:32:29Z
1,093
59
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-06T16:50:45Z
--- license: gpl-3.0 --- # CausalLM 34B β Demo: [![](https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg)](https://huggingface.co/spaces/JosephusCheung/CausalLM-34B-8-bit-GGUF) ## PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) There are some issues with the model weights in terms of precision. In the next version update, we will roll back some progress and retrain to fix these issues as soon as possible. **Please note:** Do not use "accelerated inference frameworks" like **VLLM** temporarily. Instead, use Transformers for inference. Otherwise, due to precision issues, the output quality will be significantly degraded. If you need faster inference, you can consider using the q8_0 quantization (faster and better than bf16 vllm for this model only) with llama.cpp temporarily or wait for the official version. To be fixed in the upcoming next version update. **no repetition_penalty!** Please do not use wikitext for quantization calibration because all wikitext have been re-aligned on synthetic dataset, and its distribution differs significantly from the original wikitext. ## MT-Bench: 8.5 ![mt-bench](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/2vv2_nGbfWuOM8jwy40dn.png) ## Some contamination detection if you want to check: | Models | MMLU (ref: llama7b) | TBA | | ------------------------- | ------------------- | ---- | | microsoft/Orca-2-7b | 0.77 | | | mistralai/Mistral-7B-v0.1 | 0.46 | | | **CausalLM/34b-beta** | **0.38** | | | 01-ai/Yi-6B-200K | 0.3 | | data from https://huggingface.co/spaces/Yeyito/llm_contamination_detector It should be *safe*. It was not trained on the benchmark, but the contamination of the training dataset is unavoidable due to cost constraints.
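
Following the notes above (plain Transformers rather than vLLM, ChatML prompting, and no repetition_penalty), here is a minimal, hedged inference sketch. It assumes the repository ships a ChatML chat template usable via `apply_chat_template`; the dtype, device map, and messages are illustrative.

```python
# Hedged sketch: plain transformers inference, ChatML via the tokenizer's chat template
# (assumed to be present in the repo), and no repetition_penalty, per the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/34b-beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in two sentences what a ChatML prompt looks like."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```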
kimwooglae/WebSquareAI-Instruct-llama-3-8B-v0.5.37
kimwooglae
2024-04-24T15:36:17Z
1,093
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-24T03:28:50Z
--- language: - en pipeline_tag: text-generation license: cc-by-nc-4.0 --- # WebSquareAI-Instruct-llama-3-8B-v0.5.37 ## Model Details **Developed by** [Inswave Systems](https://www.inswave.com) UI Platform Team **Base Model** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) ---
pcuenq/wav2vec2-large-xlsr-53-eu
pcuenq
2021-03-28T19:35:49Z
1,092
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "eu", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: eu datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large 53 Basque by pcuenq results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice eu type: common_voice args: eu metrics: - name: Test WER type: wer value: 15.34 --- # Wav2Vec2-Large-XLSR-53-EU Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "eu", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu") model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Basque test data of Common Voice. 
```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "eu", split="test") wer = load_metric("wer") model_name = "pcuenq/wav2vec2-large-xlsr-53-eu" processor = Wav2Vec2Processor.from_pretrained(model_name) model = Wav2Vec2ForCTC.from_pretrained(model_name) model.to("cuda") ## Text pre-processing chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]' chars_to_ignore_pattern = re.compile(chars_to_ignore_regex) def remove_special_characters(batch): batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " " return batch ## Audio pre-processing import librosa def speech_file_to_array_fn(batch): speech_array, sample_rate = torchaudio.load(batch["path"]) batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000) return batch # Text transformation and audio resampling def cv_prepare(batch): batch = remove_special_characters(batch) batch = speech_file_to_array_fn(batch) return batch # Number of CPUs or None num_proc = 16 test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) # WER Metric computation print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 15.34 % ## Training The Common Voice `train` and `validation` datasets were used for training. Training was performed for 22 + 20 epochs with the following parameters: - Batch size 16, 2 gradient accumulation steps. - Learning rate: 2.5e-4 - Activation dropout: 0.05 - Attention dropout: 0.1 - Hidden dropout: 0.05 - Feature proj. dropout: 0.05 - Mask time probability: 0.08 - Layer dropout: 0.05
gsdf/Counterfeit-V2.0
gsdf
2023-01-27T16:58:12Z
1,092
461
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-13T09:36:54Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Counterfeit is anime style Stable Diffusion model. DreamBooth + Merge Block Weights + Merge LoRA Please refer to the example below for your prompt. # Counterfeit-V2.0 e.g. ((masterpiece, best quality)),a girl, solo, hat, blush,long hair, skirt, beret, sitting, bangs, socks, wariza, pink hair, light blue eyes, black headwear,holding,rifle,weapon, looking at viewer, white sailor collar, school uniform, closed mouth, black hat, sailor collar, holding weapon, long sleeves, pleated skirt, white socks,indoors,industrial Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 576x384 or 576x448, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample1](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_001.jpg) ((masterpiece, best quality)),a girl, solo, skirt, sky, sitting, pantyhose, serafuku, cloud,black gloves, outdoors, neckerchief ,day, bangs, fence, shirt, ahoge, rooftop, long hair, white pantyhose, black hair, school uniform, white sailor collar, red eyes, sailor collar, blue skirt, red neckerchief, blue serafuku, animal ears, blue sky, long sleeves, blue shirt, looking at viewer, closed mouth,cat ears, chain-link fence, pleated skirt, cloudy sky, trash can Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 384x640, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample2](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_002.jpg) ((masterpiece, best quality)), a girl, flower, dress, solo, lying, rain, butterfly, bug, water, bangs, frills, breasts, long hair, white dress, short sleeves, hair ornament, on back, outstretched arm, frilled dress, arm up, white flower, hair flower, grey eyes, white hair,looking away Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 640x384, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample3](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_003.jpg) ((masterpiece, best quality)), 2girls, barefoot, shorts, sitting, shirt, couch, indoors, messy room, t-shirt, holding, feet, pillow, controller, toes, gun, cup, bangs, soles, rifle, denim, table, camera, multiple girls, black hair, red hair, short hair, long hair, crossed legs, red eyes, short shorts, white shirt, black shorts, game controller, monitor, warm lighting Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 640x384, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample4](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_004.jpg) ((masterpiece, best quality)),a girl, solo, dress, standing, halo, alley, outdoors, bangs, white dress, white hair, long hair, black footwear, industrial pipe, looking at viewer, 
air conditioner,dark lighting, garbage, garbage bin Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 640x384, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample5](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_005.jpg) ((masterpiece, best quality)),a girl, solo, serafuku, thighhighs, skirt, lying, ribbon, upperbody, class room, indoors, shirt, neckerchief, school uniform, long hair, black thighhighs, looking at viewer, blue eyes, black serafuku, black skirt, red ribbon, long sleeves, pleated skirt, blonde hair, wood floor Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 640x384, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample6](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_006.jpg) (masterpiece, best quality)),a girl, solo, twintails, shirt, skirt, petals, bowtie, earrings, jewelry, bangs, black hair, hair ornament, hair ribbon, red ribbon, red eyes, long hair, open mouth, white shirt, multicolored hair, black skirt, red hair, long sleeves, pink bowtie, hair between eyes, looking at viewer, collared shirt, upper body, hand up, falling petals, depth of field, strong bloom, red background Negative prompt: (low quality, worst quality:1.4), (bad anatomy), (inaccurate limb:1.2),bad composition, inaccurate eyes, extra digit,fewer digits,(extra arms:1.2), Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 8, Size: 640x384, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent ![sample7](https://huggingface.co/gsdf/Counterfeit-V2.0/resolve/main/sample_007.jpg)
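
The settings in the examples above are written for the AUTOMATIC1111 web UI; as a rough, hedged equivalent in `diffusers`, a sketch is shown below. The attention-weighting syntax (`(( ))`, `:1.4`) from the example prompts is a web-UI convention that vanilla diffusers treats as plain text, and the sampler, Clip skip, and hires upscaling steps are not reproduced here.

```python
# Hedged sketch: base text-to-image only; web-UI features such as hires fix and clip skip
# are not reproduced, and the prompts are shortened versions of the examples above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gsdf/Counterfeit-V2.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="masterpiece, best quality, a girl, solo, hat, long hair, skirt, school uniform",
    negative_prompt="low quality, worst quality, bad anatomy, extra digit, fewer digits",
    num_inference_steps=20,
    guidance_scale=8,
    width=576,
    height=384,
).images[0]
image.save("counterfeit_sample.png")
```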
Vsukiyaki/ShiratakiMix
Vsukiyaki
2023-07-22T15:55:35Z
1,092
141
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "ja", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-04-05T15:02:53Z
--- license: creativeml-openrail-m language: - ja tags: - stable-diffusion - text-to-image --- # ◆ ShiratakiMix <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/header.jpg"> ## 概要 / Overview - **ShiratakiMix**は、2D風の画風に特化したマージモデルです。 / **ShiratakiMix** is a merge model that specializes in 2D-style painting styles. - VAEはお好きなものをお使いください。VAEを含んだモデルも提供しています。 / You can use whatever VAE you like. I also offer models that include VAE. => **ShiratakiMix-add-VAE.safetensors** <hr> ## ギャラリー / gallery <div> <div style="display: flex; justify-content: center; align-items: center;"> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample1.png" style="width: 50%"> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample2.png" style="width: 50%"> </div> <div style="display: flex; justify-content: center; align-items: center;"> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample3.png" style="width: 50%"> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample4.png" style="width: 50%"> </div> </div> <hr> ## 推奨設定 / Recommended Settings <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> Steps: 20 ~ 60 Sampler: DPM++ SDE Karras CFG scale: 7.5 Denoising strength: 0.55 Hires steps: 20 Hires upscaler: Latent or R-ESRGAN 4x+ Anime6B Clip skip: 2 </pre> Negative: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95) </pre> <hr> ## 例 / Examples <details> <summary>サンプル1</summary> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample5.png" style="width: 768px"> <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> Prompt: cute girl,outdoor,scenery Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95) Steps: 28 Sampler: DPM++ SDE Karras CFG scale: 7.5 Seed: 3585317650 Size: 768x544 Denoising strength: 0.55 Clip skip: 2 Hires upscale: 2.5 Hires steps: 20 Hires upscaler: Latent </pre> </details> <br> <details> <summary>サンプル2</summary> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample6.png" style="width: 768px"> <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> Prompt: cute girl,indoors,antique shop,many antique goods,shop counter,display shelf,apron,happy smile,perspective Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95) Steps: 40 Sampler: DPM++ SDE Karras CFG scale: 7.5 Seed: 4267597555 Size: 768x544 Denoising strength: 0.55 Clip skip: 2 Hires upscale: 2.5 Hires steps: 20 Hires upscaler: Latent </pre> </details> <br> <details> <summary>サンプル3</summary> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample7.png" style="width: 768px"> <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> Prompt: cute 
little girl standing in a Mediterranean port town street,wind,pale-blonde hair, blue eyes,very long twintails,white dress,white hat,blue sky,laugh,double tooth,closed eyes,looking at viewer,lens flare,dramatic, coastal Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95) Steps: 60 Sampler: DPM++ SDE Karras CFG scale: 7.5 Seed: 265342725 Size: 768x544 Denoising strength: 0.55 Clip skip: 2 Hires upscale: 2.5 Hires steps: 20 Hires upscaler: Latent </pre> </details> <br> <details> <summary>サンプル4</summary> <img src="https://huggingface.co/Vsukiyaki/ShiratakiMix/resolve/main/imgs/sample8.png" style="width: 512px"> <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; background: #25292f; color: #fff; white-space: pre-line;"> Prompt: (solo), cute little (1girl) walking,path,[from below:1.2],brown hair,sine short hair,brown eyes,puddle,Water Reflection,rain,floating water drop,hydrangea,(blurry foreground),dynamic angle,asphalt,(blue sky),lens flare,school uniform,(glitter:1.2) Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95) Steps: 28 Sampler: DPM++ SDE Karras CFG scale: 7.5 Seed: 415644494 Size: 544x768 Denoising strength: 0.55 Clip skip: 2 Hires upscale: 2.5 Hires steps: 20 Hires upscaler: Latent </pre> </details> <hr> ## ライセンス / License <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base text-bold" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> CreativeML OpenRAIL-M ライセンス / CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. 
If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here :https://huggingface.co/spaces/CompVis/stable-diffusion-license <br> #### 【和訳】 このモデルはオープンアクセスであり、すべての人が利用できます。CreativeML OpenRAIL-M ライセンスにより、権利と使用方法がさらに規定されています。CreativeML OpenRAIL ライセンスでは、次のことが規定されています。 1. モデルを使用して、違法または有害な出力またはコンテンツを意図的に作成または共有することはできません。 2. 作成者は、あなたが生成した出力に対していかなる権利も主張しません。あなたはそれらを自由に使用でき、ライセンスに設定された規定に違反してはならない使用について説明責任を負います。 3. 重みを再配布し、モデルを商用および/またはサービスとして使用することができます。その場合、ライセンスに記載されているのと同じ使用制限を含め、CreativeML OpenRAIL-M のコピーをすべてのユーザーと共有する必要があることに注意してください。 (ライセンスを完全にかつ慎重にお読みください。) [こちらからライセンス全文をお読みください。](https://huggingface.co/spaces/CompVis/stable-diffusion-license) <hr> ## マージ元モデル / Merged models <dl> <dt><a href="https://civitai.com/models/21200/color-box-model">・ Color Box Model / CreativeML OpenRAIL M</a> </dt> <dd>└ colorBoxModel_colorBOX</dd> <dt><a href="https://huggingface.co/Printemps/ProllyMix">・ ProllyMix / CreativeML OpenRAIL M</a> </dt> <dd>└ IceProllyMix-v1</dd> <dt><a href="https://huggingface.co/haor/Evt_M">・ Evt_M / CreativeML OpenRAIL M</a> </dt> <dd>└ Evt_M_fp16</dd> <dt><a href="https://huggingface.co/natsusakiyomi/SakuraMix">・ SakuraMix / CreativeML OpenRAIL M</a> </dt> <dd>└ SakuraMix-v2</dd> <dt><a href="https://huggingface.co/ploughB660/BalorMix-V4">・ BalorMix-V4 / CreativeML OpenRAIL M</a> </dt> <dd>└ BalorMix-V4.2featACT</dd> </dl> <hr> ## レシピ / Recipe <details> ### Step: 1 | 階層マージ Tool: Merge Block Weighted | Model: A | Model: B | Base alpha | Skip/Reset CLIP position_ids | Merge Name | | :------: | :----------------: | :--------: | :--------------------------: | :----------------: | | colorBoxModel_colorBOX | IceProllyMix-v1 | 0.42 | None | ShiratakiMix-baseA | Weight: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> 1,0.9166666667,0.8333333333,0.75,0.6666666667,0.5833333333,0.5,0.4166666667,0.3333333333,0.25,0.1666666667,0.0833333333, 0, 0.0833333333,0.1666666667,0.25,0.3333333333,0.4166666667,0.5,0.5833333333,0.6666666667,0.75,0.8333333333,0.9166666667,1.0 </pre> <br> ### Step: 2 | 階層マージ Tool: Merge Block Weighted | Model: A | Model: B | Base alpha | Skip/Reset CLIP position_ids | Merge Name | | :------: | :----------------: | :--------: | :--------------------------: | :----------------: | | Evt_M | ShiratakiMix-baseA | 1.0 | None | ShiratakiMix-baseB | Weight: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> 1,0.9166666667,0.8333333333,0.75,0.6666666667,0.5833333333,0.5,0.4166666667,0.3333333333,0.25,0.1666666667,0.0833333333, 0, 0.0833333333,0.1666666667,0.25,0.3333333333,0.4166666667,0.5,0.5833333333,0.6666666667,0.75,0.8333333333,0.9166666667,1.0 </pre> <br> ### Step: 3 | 階層マージ Tool: Toolkit / Merge Block Weighted **◆ Converted model.** SakuraMixV2.ckpt[afbd69c0cd] ==> **SakuraMixV2.safetensors[79b4a1d065]** | Model: A | Model: B | Base alpha | Skip/Reset CLIP position_ids | Merge Name | | :--------------: | :----------------: | :--------: | :--------------------------: | :----------------: | | SakuraMixV2 | ShiratakiMix-baseB | 1.0 | None | ShiratakiMix-baseC | Weight: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> 1,0.97974537037037,0.921296296296296,0.828125,0.703703703703704,0.55150462962963,0.375,0.177662037037037,0.0370370370370372,0.265625,0.50462962962963,0.750578703703704, 
1.0, 0.750578703703704,0.504629629629629,0.265624999999999,0.0370370370370372,0.177662037037038,0.375,0.551504629629631,0.703703703703703,0.828125,0.921296296296298,0.979745370370369,1 </pre> <br> ### Step: 4 | 階層マージ Tool: Merge Block Weighted | Model: A | Model: B | Base alpha | Skip/Reset CLIP position_ids | Merge Name | | :----------------: | :---------------: | :--------: | :--------------------------: | :---------------: | | ShiratakiMix-baseC | BalorMix-V4.2featACT | 0.05 | None | ShiratakiMix | Weight: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> 0.1,0.1,0,0,0,0,0,0,0.1,0.1,0,0,0,0.1,0.1,0,0,0,0,0,0,0,0,0.1,0.1 </pre> => **ShiratakiMix.safetensors [d3849c69d9]** <br> ### Step: 5 | 修復 Tool: Toolkit <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> Contains no junk data. CLIP had incorrect positions, fixed: 7, 14, 19, 28, 33, 38, 43, 56, 61. Model will be fixed (9 changes). </pre> => **ShiratakiMix-fixed.safetensors [ded0c94f95]** </details> <hr> Twiter: [@Vsukiyaki_AIArt](https://twitter.com/Vsukiyaki_AIArt) <a href="https://twitter.com/Vsukiyaki_AIArt" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a>
MarkP1929/oasst-llama-13b-2-epochs-GPTQ-4bit-128g
MarkP1929
2023-04-18T19:48:27Z
1,092
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-18T14:47:14Z
This is a quantised version, in safetensors format, of the oasst-llama-13b-2-epochs model from dvruette/oasst-llama-13b-2-epochs. It gives a significant speed-up for inference when used with oobabooga.

Run with: `python server.py --model oasst-llama-13b-2-epochs-GPTQ-4bit-128g --wbits 4 --groupsize 128`
heegyu/LIMA-13b
heegyu
2023-07-26T03:42:31Z
1,092
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-26T03:35:08Z
--- license: other --- LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details. -- license: other --- # LLaMA Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** LLaMA was trained between December. 2022 and Feb. 2023. **Model version** This is version 1 of the model. **Model type** LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project , by opening an issue. ## Intended use **Primary intended uses** The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models, and developing techniques to improve those, evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. ## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. ## Metrics **Model performance measures** We use the following measure to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable. 
**Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. ## Training dataset The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. ## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th >LLaMA</th> <th colspan=6>Model hyper parameters </th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T </tr> </tbody> </table> *Table 1 - Summary of LLama Model Hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. <table> <thead> <tr> <th>LLaMA</th> <th colspan=9>Reasoning tasks </th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93 </th> <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94 </th> <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92 </th> <tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr> </tbody> </table> *Table 2 - Summary of LLama Model Performance on Reasoning tasks* We present our results on bias in the table below. Note that lower value is better indicating lower bias. | No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | LLaMA Average | 66.6 | *Table 3 - Summary bias of our model output* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way. 
**Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. **Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
4i-ai/Llama-2-13b-alpaca-es
4i-ai
2023-08-23T09:01:59Z
1,092
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "es", "dataset:bertin-project/alpaca-spanish", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-22T08:46:57Z
--- license: cc-by-nc-4.0 datasets: - bertin-project/alpaca-spanish language: - es inference: false --- # Model Card for Model ID This model is the Llama-2-13b-hf fine-tuned with an adapter on the Spanish Alpaca dataset. ## Model Details ### Model Description This is a Spanish chat model fine-tuned on a Spanish instruction dataset. The model expect a prompt containing the instruction, with an option to add an input (see examples below). - **Developed by:** 4i Intelligent Insights - **Model type:** Chat model - **Language(s) (NLP):** Spanish - **License:** cc-by-nc-4.0 (inhereted from the alpaca-spanish dataset), - **Finetuned from model :** Llama 2 13B ([license agreement](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)) ## Uses The model is intended to be used directly without the need of further fine-tuning. ## Bias, Risks, and Limitations This model inherits the bias, risks, and limitations of its base model, Llama 2, and of the dataset used for fine-tuning. Note that the Spanish Alpaca dataset was obtained by translating the original Alpaca dataset. It contains translation errors that may have negatively impacted the fine-tuning of the model. ## How to Get Started with the Model Use the code below to get started with the model for inference. The adapter was directly merged into the original Llama 2 model. The following code sample uses 4-bit quantization, you may load the model without it if you have enough VRAM. We show results for hyperparameters that we found work well for this set of prompts. ```py from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments, GenerationConfig import torch model_name = "4i-ai/Llama-2-13b-alpaca-es" #Tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) def create_and_prepare_model(): compute_dtype = getattr(torch, "float16") bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, quantization_config=bnb_config, device_map={"": 0} ) return model model = create_and_prepare_model() def generate(instruction, input=None): #Format the prompt to look like the training data if input is not None: prompt = "### Instruction:\n"+instruction+"\n\n### Input:\n"+input+"\n\n### Response:\n" else : prompt = "### Instruction:\n"+instruction+"\n\n### Response:\n" inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].cuda() generation_output = model.generate( input_ids=input_ids, repetition_penalty=1.5, generation_config=GenerationConfig(temperature=0.1, top_p=0.75, top_k=40, num_beams=20), #hyperparameters for generation return_dict_in_generate=True, output_scores=True, max_new_tokens=150, #maximum tokens generated, increase if you want longer asnwer (up to 2048 - the length of the prompt), generation "looks" slower for longer response ) for seq in generation_output.sequences: output = tokenizer.decode(seq, skip_special_tokens=True) print(output.split("### Response:")[1].strip()) generate("Háblame de la superconductividad.") print("-----------") generate("Encuentra la capital de España.") print("-----------") generate("Encuentra la capital de Portugal.") print("-----------") generate("Organiza los números dados en orden ascendente.", "2, 3, 0, 8, 4, 10") print("-----------") generate("Compila una lista de 5 estados de EE. UU. 
ubicados en el Oeste.") print("-----------") generate("Compila una lista de 2 estados de EE. UU. ubicados en el Oeste.") print("-----------") generate("Compila una lista de 10 estados de EE. UU. ubicados en el Este.") print("-----------") generate("¿Cuál es el color de una fresa?") print("-----------") generate("¿Cuál es el color de la siguiente fruta?", "fresa") print("-----------") ``` Expected output: ``` La superconductividad es un fenómeno físico en el que los materiales pueden conducir corrientes eléctricas a bajas temperaturas sin pérdida de energía debido a la resistencia. Los materiales superconductores son capaces de conducir corrientes eléctricas a temperaturas mucho más bajas que los materiales normales. Esto se debe a que los electrones en los materiales superconductores se comportan de manera cooperativa, lo que les permite conducir corrientes eléctricas sin pérdida de energía. Los materiales superconductores tienen muchas aplicaciones ----------- La capital de España es Madrid. ----------- La capital de Portugal es Lisboa. ----------- 0, 2, 3, 4, 8, 10 ----------- 1. California 2. Oregón 3. Washington 4. Nevada 5. Arizona ----------- California y Washington. ----------- 1. Maine 2. Nuevo Hampshire 3. Vermont 4. Massachusetts 5. Rhode Island 6. Connecticut 7. Nueva York 8. Nueva Jersey 9. Pensilvania 10. Delaware ----------- El color de una fresa es rojo brillante. ----------- El color de la fresa es rojo. ----------- ``` ## Contact Us [4i.ai](https://4i.ai/) provides natural language processing solutions with dialog, vision and voice capabilities to deliver real-life multimodal human-machine conversations. Please contact us at [email protected]
Undi95/Nethena-MLewd-Xwin-23B
Undi95
2023-10-31T05:18:09Z
1,092
13
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-31T00:51:00Z
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences - nsfw --- Undi doing chemistry again. A layer of Xwin-Mlewd was added in a different way than I've done before; results seem good, but I'm a VRAMlet so I can only run the Q2 at 2k context for now. Need to see if it really works well or if I was just lucky with my prompt. OG model: [NeverSleep/Nethena-13B](https://huggingface.co/NeverSleep/Nethena-13B) ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` LimaRP always kicks in, so this can be used to get more control over the size of the output. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/2ftpUX9khVcddk-VGEH3p.png) Thanks Ikari.
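A minimal transformers sketch using the Alpaca template above (illustrative only — the instruction text and sampling settings are assumptions, and for a 23B model you will likely prefer a quantized build):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "Undi95/Nethena-MLewd-Xwin-23B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Build the Alpaca-style prompt shown above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short scene set in a rainy harbor town.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```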
oopsung/Yi-ko-Fdpo-v1
oopsung
2024-01-02T01:09:50Z
1,092
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-02T01:06:42Z
Entry not found
cocoirun/Yi-Ko-6B-instruct-v1.3
cocoirun
2024-01-11T01:47:16Z
1,092
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-08T01:28:14Z
--- license: cc-by-sa-4.0 --- <h1>instruct model v1.3</h1> <b><Training data construction></b> After analyzing the Open-Orca-ko data and extracting its tasks, we built roughly 40,000 training examples ourselves (history, science, math, machine reading comprehension, review analysis) from open-source NLP datasets matched to those tasks; in addition, some Open-Orca-Ko data was filtered and cleaned, and KoBEST data was added. Additional training data (morphology, machine reading comprehension, and summarization) was built from the AIHub general-knowledge and machine-reading-comprehension datasets. History and general-knowledge quizzes from various blogs were manually converted into training-data format. AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected by hand. English-Korean / Korean-English translation data was also used for training. SFT was performed on a total of 110,000 training examples. <br> Currently, part of the Open-Orca dataset is being translated and cleaned in order to train a new version of the model and improve performance. <br> + Added high-school history questions and TruthfulQA-related questions. + Added various IT-knowledge data. + Machine-reading-comprehension training data was built by obtaining answers through ChatGPT. + Grammar-related training data. <br> ### The training data files are private. <br> <b><Training></b> Training was carried out with LoRA on 2x A100 40G GPUs.
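The card only states that training used LoRA on two A100 40G GPUs; below is a minimal PEFT-style sketch of such a setup. The base checkpoint name and every hyperparameter here are assumptions, not the actual training configuration.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical base checkpoint and LoRA hyperparameters -- illustrative only.
base = AutoModelForCausalLM.from_pretrained("beomi/Yi-Ko-6B")
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```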
Edentns/DataVortexTL-1.1B-v0.1
Edentns
2024-02-24T14:19:06Z
1,092
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:jojo0217/korean_rlhf_dataset", "dataset:kyujinpy/OpenOrca-KO", "dataset:nlpai-lab/kullm-v2", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-09T00:14:38Z
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0 pipeline_tag: text-generation datasets: - beomi/KoAlpaca-v1.1a - jojo0217/korean_rlhf_dataset - kyujinpy/OpenOrca-KO - nlpai-lab/kullm-v2 widget: - text: > <|system|> You are a chatbot who answers User's questions. <|user|> 대한민국의 수도는 어디야? <|assistant|> --- # **DataVortexTL-1.1B-v0.1** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 1ea - **transformers**: v4.36.2 ### **Dataset** - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) - [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset) - [kyujinpy/OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) - [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) ### **Instruction format** It follows **TinyLlama** format. E.g. ```python text = """\ <|system|> 당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.</s> <|user|> 대한민국의 수도는 어디야?</s> <|assistant|> 대한민국의 수도는 서울입니다.</s> <|user|> 서울 인구는 총 몇 명이야?</s> """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -------------: | -------------: | -------------: | -----------: | | kobest_boolq | 0.334282 | 0.516446 | 0.500478 | 0.498941 | | kobest_copa | 0.515061 | 0.504321 | 0.492927 | 0.50809 | | kobest_hellaswag | 0.36253 | 0.357733 | 0.355873 | 0.376502 | | kobest_sentineg | 0.481146 | 0.657411 | 0.687417 | 0.635703 | | **Average** | **0.42325475** | **0.50897775** | **0.50917375** | **0.504809** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 31.5 | 25.26 | 33.53 | 24.56 | 43.34 | 30.81 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexTL-1.1B-v0.1") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexTL-1.1B-v0.1") messages = [ {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."}, {"role": "user", "content": "대한민국의 수도는 어디야?"}, {"role": "assistant", "content": "대한민국의 수도는 서울입니다."}, {"role": "user", "content": "서울 인구는 총 몇 명이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. 
<div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-v1
GAI-LLM
2024-02-22T06:59:37Z
1,092
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-25T01:35:00Z
--- license: cc-by-nc-4.0 language: - ko library_name: transformers pipeline_tag: text-generation --- **The license is `cc-by-nc-4.0`.** # **GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-v1** ## Model Details **Model Developers** Donghoon Oh, Hanmin Myung (SK C&C G.AI Eng) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-v1 is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** [GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15](https://huggingface.co/GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15) **Training Dataset** - We combined open Korean datasets using a mixed strategy with DPO. - We used 8x A100 80GB GPUs for training. # **Model Benchmark** ## KO-LLM leaderboard - See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). # Implementation Code ```python ### GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-v1 from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-v1" model = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ```
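The implementation code above only loads the model and tokenizer; a minimal generation call could look like the sketch below. The prompt format and sampling settings are assumptions — the card does not document an instruction template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-v1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "### User:\n대한민국의 수도는 어디인가요?\n\n### Assistant:\n"  # hypothetical template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```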
oneonlee/LDCC-SOLAR-gugutypus-10.7B
oneonlee
2024-03-02T12:35:27Z
1,092
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "ko", "dataset:kyujinpy/KOR-gugugu-platypus-set", "base_model:LDCC/LDCC-SOLAR-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-02T08:36:30Z
--- language: - en - ko license: cc-by-nc-4.0 datasets: - kyujinpy/KOR-gugugu-platypus-set base_model: - LDCC/LDCC-SOLAR-10.7B pipeline_tag: text-generation --- # LDCC-SOLAR-gugutypus-10.7B <img src="logo.png" height=350, width=350> --- ## Model Details **Model Developers** - DongGeon Lee ([oneonlee](https://huggingface.co/oneonlee)) **Model Architecture** - **LDCC-SOLAR-gugutypus-10.7B** is a instruction fine-tuned auto-regressive language model, based on the [SOLAR](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) transformer architecture. **Base Model** - [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) **Training Dataset** - [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set) --- ## Model comparisons - **Ko-LLM leaderboard (2024/03/01)** [[link]](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard) | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ----------------------------------------- | ----------- | ------ | ------------ | ------- | ------------- | --------------- | | **[oneonlee/KoSOLAR-v0.2-gugutypus-10.7B](https://huggingface.co/oneonlee/KoSOLAR-v0.2-gugutypus-10.7B)** | **51.17** | 47.78 | 58.29 | 47.27 | 48.31 | 54.19 | | oneonlee/LDCC-SOLAR-gugutypus-10.7B | 49.45 | 45.9 | 55.46 | 47.96 | 48.93 | 49 | <br> - **(KOR) AI-Harness evaluation** [[link]](https://github.com/Beomi/ko-lm-evaluation-harness) | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |-------------------------|-------|------|-----:|------|-----:|---|-----:| |KMMLU |N/A |none | 0|acc |0.3329|± |0.0794| |KMMLU |N/A |none | 5|acc |0.3969|± |0.0816| |KoBEST-HellaSwag | 0|none | 0|acc |0.4260|± |0.0221| |KoBEST-HellaSwag | 0|none | 5|acc |0.4260|± |0.0221| |KoBEST-BoolQ | 0|none | 0|acc |0.7792|± |0.0111| |KoBEST-BoolQ | 0|none | 5|acc |0.8925|± |0.0083| |KoBEST-COPA | 0|none | 0|acc |0.6670|± |0.0149| |KoBEST-COPA | 0|none | 5|acc |0.7070|± |0.0144| |KoBEST-SentiNeg | 0|none | 0|acc |0.7582|± |0.0215| |KoBEST-SentiNeg | 0|none | 5|acc |0.9219|± |0.0135| <br> - **(ENG) AI-Harness evaluation** [[link]](https://github.com/EleutherAI/lm-evaluation-harness) | Tasks |Version|Filter|n-shot|Metric|Value | |Stderr| |------------------|-------|------|-----:|------|-----:|---|-----:| |MMLU |N/A |none | 0|acc |0.5826|± |0.1432| |MMLU |N/A |none | 5|acc |0.6124|± |0.1275| |HellaSwag | 1|none | 0|acc |0.6075|± |0.0049| |HellaSwag | 1|none | 5|acc |0.6534|± |0.0047| |BoolQ | 2|none | 0|acc |0.8737|± |0.0058| |BoolQ | 2|none | 5|acc |0.8878|± |0.0055| |COPA | 1|none | 0|acc |0.8300|± |0.0378| |COPA | 1|none | 5|acc |0.9300|± |0.0256| |truthfulqa |N/A |none | 0|acc |0.4249|± |0.0023| |truthfulqa |N/A |none | 5|acc | - |± | - | --- ## Implementation Code ```python ### LDCC-SOLAR-gugutypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "oneonlee/LDCC-SOLAR-gugutypus-10.7B" model = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ```
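As with the loading snippet in the implementation code above, a generation call can be layered on top; here is a minimal pipeline-based sketch (the prompt format and sampling settings are assumptions — check the tokenizer's chat template before relying on them):

```python
from transformers import pipeline
import torch

pipe = pipeline(
    "text-generation",
    model="oneonlee/LDCC-SOLAR-gugutypus-10.7B",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Hypothetical prompt format -- illustrative only.
prompt = "### User:\n지구에서 가장 높은 산은 무엇인가요?\n\n### Assistant:\n"
print(pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)[0]["generated_text"])
```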
Replete-AI/Llama-3-11.5B-Instruct-V2
Replete-AI
2024-05-31T03:11:53Z
1,092
7
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-19T23:05:03Z
--- license: other license_name: llama-3 license_link: https://llama.meta.com/llama3/license/ --- Llama-3-11.5B-Instruct-v2 Thank you to Meta for the weights for Meta-Llama-3-8B-Instruct ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/aJJxKus1wP5N-euvHEUq7.png) This is an upscaling of the Meta-Llama-3-8B-Instruct AI using techniques created for chargoddard/mistral-11b-slimorca. This AI model has been upscaled from 8B parameters to 11.5B parameters without any continued pretraining or fine-tuning. Unlike version 1, this model has no issues at fp16 or at any quantization level. The model that was used to create this one is linked below: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct ______________________________________________________________________________ - Llama-3-11.5B-Instruct-V2 | Metric | Value | |----------------------------------|-------:| | Avg. | 63.91 | | AI2 Reasoning Challenge (25-Shot)| 57.68 | | HellaSwag (10-Shot) | 78.59 | | MMLU (5-Shot) | 67.35 | | TruthfulQA (0-shot) | 35.86 | | Winogrande (5-shot) | 74.74 | | GSM8k (5-shot) | 69.37 | - Original Meta-Llama-3-8B-Instruct | Metric | Value | |----------------------------------|-------:| | Avg. | 66.87 | | AI2 Reasoning Challenge (25-Shot)| 60.75 | | HellaSwag (10-Shot) | 78.55 | | MMLU (5-Shot) | 67.07 | | TruthfulQA (0-shot) | 51.65 | | Winogrande (5-shot) | 74.51 | | GSM8k (5-shot) | 68.69 |
mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF
mradermacher
2024-05-07T15:58:33Z
1,092
10
transformers
[ "transformers", "gguf", "en", "base_model:turboderp/Cat-Llama-3-70B-instruct", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-06T19:49:24Z
--- base_model: turboderp/Cat-Llama-3-70B-instruct language: - en library_name: transformers license: llama3 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF/resolve/main/Cat-Llama-3-70B-instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
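The i1-Q6_K quant in the table above is split into two parts; here is a minimal sketch for downloading and joining them into a single GGUF file that llama.cpp can load (the file names come from the table, everything else is an assumption):

```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/Cat-Llama-3-70B-instruct-i1-GGUF"
parts = [
    hf_hub_download(repo, f"Cat-Llama-3-70B-instruct.i1-Q6_K.gguf.part{i}of2")
    for i in (1, 2)
]

# Concatenate the downloaded parts into one file.
with open("Cat-Llama-3-70B-instruct.i1-Q6_K.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)
```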
er1123090/T3Q_SOLAR_DARETIES_v1.0
er1123090
2024-05-23T01:31:27Z
1,092
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:T3Q-LLM/T3Q-LLM2-FP-v1.0", "base_model:hwkwon/S-SOLAR-10.7B-v1.5", "base_model:chihoonlee10/T3Q-ko-solar-dpo-v7.0", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-23T00:50:55Z
--- base_model: - T3Q-LLM/T3Q-LLM2-FP-v1.0 - hwkwon/S-SOLAR-10.7B-v1.5 - chihoonlee10/T3Q-ko-solar-dpo-v7.0 library_name: transformers tags: - mergekit - merge license: mit --- # Untitled Model (1) This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [T3Q-LLM/T3Q-LLM2-FP-v1.0](https://huggingface.co/T3Q-LLM/T3Q-LLM2-FP-v1.0) as a base. ### Models Merged The following models were included in the merge: * [hwkwon/S-SOLAR-10.7B-v1.5](https://huggingface.co/hwkwon/S-SOLAR-10.7B-v1.5) * [chihoonlee10/T3Q-ko-solar-dpo-v7.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v7.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: hwkwon/S-SOLAR-10.7B-v1.5 parameters: density: [1, 0.7, 0.1] # density gradient weight: 1.0 - model: chihoonlee10/T3Q-ko-solar-dpo-v7.0 parameters: density: 0.5 weight: [0, 0.3, 0.7, 1] # weight gradient merge_method: dare_ties base_model: T3Q-LLM/T3Q-LLM2-FP-v1.0 parameters: normalize: true int8_mask: true dtype: float16 #mergekit-yaml /path/to/config.yml ./output/directory --cuda ```
mradermacher/Qwen2-0.5B-GGUF
mradermacher
2024-06-06T21:47:19Z
1,092
1
transformers
[ "transformers", "gguf", "pretrained", "en", "base_model:Qwen/Qwen2-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-06T21:38:35Z
--- base_model: Qwen/Qwen2-0.5B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - pretrained --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Qwen/Qwen2-0.5B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.IQ3_XS.gguf) | IQ3_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.IQ3_M.gguf) | IQ3_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-GGUF/resolve/main/Qwen2-0.5B.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
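A minimal local-inference sketch using the llama-cpp-python bindings (the choice of quant, the bindings, and the sampling settings are assumptions — any GGUF-capable runtime such as llama.cpp works the same way):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download("mradermacher/Qwen2-0.5B-GGUF", "Qwen2-0.5B.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=2048)

out = llm("The capital of France is", max_tokens=32)
print(out["choices"][0]["text"])
```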
OpenBuddy/openbuddy-llama3-70b-v21.2-32k
OpenBuddy
2024-06-12T12:54:56Z
1,092
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "conversational", "zh", "en", "fr", "de", "ja", "ko", "it", "fi", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-12T06:21:12Z
--- language: - zh - en - fr - de - ja - ko - it - fi pipeline_tag: text-generation tags: - llama-3 license: other license_name: llama3 license_link: https://llama.meta.com/llama3/license/ --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice **Built with Meta Llama 3** Base Model: Llama-3-70B-Instruct License: https://llama.meta.com/llama3/license/ Acceptable Use Policy: https://llama.meta.com/llama3/use-policy This model is intended for use in English and Chinese. # Prompt Format We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`. ``` <|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. You cannot access the internet, but you have vast knowledge, cutoff: 2023-04. You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|> <|role|>user<|says|>History input 1<|end|> <|role|>assistant<|says|>History output 1<|end|> <|role|>user<|says|>History input 2<|end|> <|role|>assistant<|says|>History output 2<|end|> <|role|>user<|says|>Current input<|end|> <|role|>assistant<|says|> ``` This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. 
You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
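Returning to the prompt format described above: since it ships as a chat template in `tokenizer_config.json`, it can be reproduced with the fast tokenizer. A minimal sketch (the example message is an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OpenBuddy/openbuddy-llama3-70b-v21.2-32k", use_fast=True)

messages = [
    {"role": "user", "content": "Hello, who are you?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # expected to end with the assistant turn: <|role|>assistant<|says|>
```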
sergeyzh/rubert-tiny-turbo
sergeyzh
2024-06-30T17:39:06Z
1,092
5
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "russian", "pretraining", "embeddings", "tiny", "sentence-similarity", "transformers", "ru", "dataset:IlyaGusev/gazeta", "dataset:zloelias/lenta-ru", "base_model:cointegrated/rubert-tiny2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2024-06-21T12:13:23Z
--- language: - ru pipeline_tag: sentence-similarity tags: - russian - pretraining - embeddings - tiny - feature-extraction - sentence-similarity - sentence-transformers - transformers datasets: - IlyaGusev/gazeta - zloelias/lenta-ru license: mit base_model: cointegrated/rubert-tiny2 --- Быстрая модель BERT для расчетов эмбедингов предложений на русском языке. Модель основана на [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) - имеет аналогичные размеры контекста (2048), ембединга (312) и быстродействие. ## Использование ```Python from sentence_transformers import SentenceTransformer, util model = SentenceTransformer('sergeyzh/rubert-tiny-turbo') sentences = ["привет мир", "hello world", "здравствуй вселенная"] embeddings = model.encode(sentences) print(util.dot_score(embeddings, embeddings)) ``` ## Метрики Оценки модели на бенчмарке [encodechka](https://github.com/avidale/encodechka): | model | CPU | GPU | size | Mean S | Mean S+W | dim | |:-----------------------------------|----------:|---------:|---------:|----------:|-----------:|-------:| | [sergeyzh/LaBSE-ru-turbo](https://huggingface.co/sergeyzh/LaBSE-ru-turbo) | 120.40 | 8.05 | 490 | 0.789 | 0.702 | 768 | | BAAI/bge-m3 | 523.40 | 22.50 | 2166 | 0.787 | 0.696 | 1024 | | intfloat/multilingual-e5-large | 506.80 | 30.80 | 2136 | 0.780 | 0.686 | 1024 | | intfloat/multilingual-e5-base | 130.61 | 14.39 | 1061 | 0.761 | 0.669 | 768 | | **sergeyzh/rubert-tiny-turbo** | 5.51 | 3.25 | 111 | 0.749 | 0.667 | 312 | | intfloat/multilingual-e5-small | 40.86 | 12.09 | 449 | 0.742 | 0.645 | 384 | | cointegrated/rubert-tiny2 | 5.51 | 3.25 | 111 | 0.704 | 0.638 | 312 | | model | STS | PI | NLI | SA | TI | IA | IC | ICX | NE1 | NE2 | |:-----------------------------------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------| | [sergeyzh/LaBSE-ru-turbo](https://huggingface.co/sergeyzh/LaBSE-ru-turbo) | 0.864 | 0.748 | 0.490 | 0.814 | 0.974 | 0.806 | 0.815 | 0.801 | 0.305 | 0.404 | | BAAI/bge-m3 | 0.864 | 0.749 | 0.510 | 0.819 | 0.973 | 0.792 | 0.809 | 0.783 | 0.240 | 0.422 | | intfloat/multilingual-e5-large | 0.862 | 0.727 | 0.473 | 0.810 | 0.979 | 0.798 | 0.819 | 0.773 | 0.224 | 0.374 | | intfloat/multilingual-e5-base | 0.835 | 0.704 | 0.459 | 0.796 | 0.964 | 0.783 | 0.802 | 0.738 | 0.235 | 0.376 | | **sergeyzh/rubert-tiny-turbo** | 0.828 | 0.722 | 0.476 | 0.787 | 0.955 | 0.757 | 0.780 | 0.685 | 0.305 | 0.373 | | intfloat/multilingual-e5-small | 0.822 | 0.714 | 0.457 | 0.758 | 0.957 | 0.761 | 0.779 | 0.691 | 0.234 | 0.275 | | cointegrated/rubert-tiny2 | 0.750 | 0.651 | 0.417 | 0.737 | 0.937 | 0.746 | 0.757 | 0.638 | 0.360 | 0.386 |
CoprolaliacPress/Writer-Q6_K-GGUF
CoprolaliacPress
2024-07-01T08:02:16Z
1,092
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:CoprolaliacPress/Writer", "endpoints_compatible", "region:us" ]
null
2024-07-01T08:01:49Z
--- base_model: CoprolaliacPress/Writer library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # CoprolaliacPress/Writer-Q6_K-GGUF This model was converted to GGUF format from [`CoprolaliacPress/Writer`](https://huggingface.co/CoprolaliacPress/Writer) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Writer) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CoprolaliacPress/Writer-Q6_K-GGUF --hf-file writer-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CoprolaliacPress/Writer-Q6_K-GGUF --hf-file writer-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CoprolaliacPress/Writer-Q6_K-GGUF --hf-file writer-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CoprolaliacPress/Writer-Q6_K-GGUF --hf-file writer-q6_k.gguf -c 2048 ```
NekoPunchBBB/llama-2-13b-open-platypus-merged
NekoPunchBBB
2023-08-21T17:46:00Z
1,091
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-08-21T17:25:58Z
Entry not found
FPHam/Autolycus-Mistral_7B
FPHam
2023-11-13T06:16:02Z
1,091
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "instruct", "finetune", "chatml", "gpt4", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-05T04:06:06Z
--- license: apache-2.0 language: - en tags: - mistral - instruct - finetune - chatml - gpt4 --- <!-- header start --> <div style="display: flex; flex-direction: column; align-items: center;"> </div> <div style="width: 100%;"> <img src="https://huggingface.co/FPHam/OpenAutolycus-Mistral_7B/resolve/main/openautolycustitle.jpg" alt="Open Autolycus" style="width: 40%; min-width: 200px; display: block; margin: auto;"> </div> <div style="display: flex; flex-direction: column; align-items: center;"> <p><a href="https://ko-fi.com/Q5Q5MOB4M">Support me at Ko-fi</a></p> </div> <!-- header end --> Autolycus is a son of Hermes. Autolycus-Mistral is a refinement of OpenHermes 2.5 Mistral, designed to convert the stilted GPT-4 robotic gobbledygook into something resembling natural human English -- with just enough lies, embellishments, and downright falsehoods to bring it into line with the average newspaper article. But what did *you* expect from seven billion models? You can't get good results without some level of embellishment. And besides, who cares about reality anyway? We live in a world where people believe anything they read on the Internet! The most brazen examples of 'making things up', were those rare occasions where Autolycus actually quoted a source; usually a book title or author, sometimes a date, but which you find to be nothing more than a load of hogwash when you check it out for yourself. "I have no idea why anyone would want to build such a thing, other than being bored or having too much time on their hands," said Hermes dismissively. "It has been done before," said another voice, this time belonging to Hermes' son, Autolycus. "Back in ancient Greece, there was a man called Daedalus who built himself wings made of feathers and wax so he could fly away from King Minos of Crete." "Yes, but we are not talking about birds here!" exclaimed Hermes impatiently. "We need to figure out how to keep humans from running off all over the place once they become airborne." He paused thoughtfully then continued, "There must be some way..." His eyes lit up suddenly, and he clapped his hands together excitedly. "Of course! Why didn't I see this sooner?" "What?" asked Autolycus curiously. "We shall use metal cages for humans!" announced Hermes triumphantly. "They will provide both protection and containment!" # Model uses ChatML ``` <|im_start|>system <|im_end|> <|im_start|>user How small are the atoms?<|im_end|> <|im_start|>assistant ```
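A minimal transformers sketch of the ChatML usage above, reusing the card's own example question (sampling settings are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "FPHam/Autolycus-Mistral_7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# ChatML prompt, as shown in the template above.
prompt = (
    "<|im_start|>system\n<|im_end|>\n"
    "<|im_start|>user\nHow small are the atoms?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```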
Vezora/Narwhal-7b-v3
Vezora
2023-12-05T03:25:00Z
1,091
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-03T20:36:36Z
--- license: apache-2.0 --- This is a merge model using Tie merge method. Created using openchat 3.5 and una-cybertron-7b-v2-bf16. Instruction template: ```python import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5") # Single-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Multi-turn tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747] # Coding Mode tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747] ``` This model is exceptionally well at labeling data, bringing down labeling cost to server cost. Hurray! Here is an example User: This is raw text, i want you to label it. in a json format. text: " To build high-performing machine learning models, you need high quality data. Getting ahold of this data can be costly, complex, and inefficient. Labels created by people are needed for the majority of models to help them generate the correct predictions. To help streamline and automate this process, you can apply a machine learning model to label the data directly. Firstly, a machine learning model is trained on a subset of raw training data that has already been labeled by humans. A model with a track record of producing precise outcomes from the information that it has learned thus far, can add labels to unlabeled data automatically. A less accurate model requires human annotators to add labels. Labels created by humans enable the model to learn and enhance its capacity to categorize new data. Eventually, the model is able to label an increasing amount of data automatically and speed up the creation of training datasets. Of course, implementing quality control in such models is also a necessity, as with time it might drift and start producing less accurate results. In this case, human annotators can step in again. Internal labeling (in-house), synthetic labeling (generating new data from previous datasets), programmatic labeling (using scripts), outsourcing (or freelancing) constitute a variety of data labeling methods. However, our favorite is obviously crowdsourcing — a great way to outsource data labeling and get around the drawn-out and expensive management processes. Check out our data labeling platform to learn more!" Narwhal-7b-v3: Here is the given text with added JSON formatted labels. 
Each sentence has been assigned a label indicating whether it expresses a positive, negative, or neutral sentiment: { "text": [ { "content": "To build high-performing machine learning models, you need high quality data.", "label": "neutral" }, { "content": "Getting ahold of this data can be costly, complex, and inefficient.", "label": "negative" }, { "content": "Labels created by people are needed for the majority of models to help them generate the correct predictions.", "label": "neutral" }, { "content": "To help streamline and automate this process, you can apply a machine learning model to label the data directly.", "label": "positive" }, { "content": "Firstly, a machine learning model is trained on a subset of raw training data that has already been labeled by humans.", "label": "neutral" }, { "content": "A model with a track record of producing precise outcomes from the information that it has learned thus far, can add labels to unlabeled data automatically.", "label": "positive" }, { "content": "A less accurate model requires human annotators to add labels.", "label": "neutral" }, { "content": "Labels created by humans enable the model to learn and enhance its capacity to categorize new data.", "label": "positive" }, { "content": "Eventually, the model is able to label an increasing amount of data automatically and speed up the creation of training datasets.", "label": "positive" }, { "content": "Of course, implementing quality control in such models is also a necessity, as with time it might drift and start producing less accurate results.", "label": "neutral" }, { "content": "In this case, human annotators can step in again.", "label": "neutral" }, { "content": "Internal labeling (in-house), synthetic labeling (generating new data from previous datasets), programmatic labeling (using scripts), outsourcing (or freelancing) constitute a variety of data labeling methods.", "label": "neutral" }, { "content": "However, our favorite is obviously crowdsourcing — a great way to outsource data labeling and get around the drawn-out and expensive management processes.", "label": "positive" }, { "content": "Check out our data labeling platform to learn more!", "label": "neutral" } ] } This JSON object now represents the original text with labeled sentiments for each sentence. You can further utilize this structured data to train sentiment analysis models or perform various analyses.
mlabonne/NeuralQuant-9B
mlabonne
2023-12-27T18:40:01Z
1,091
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-27T18:32:27Z
--- license: apache-2.0 tags: - merge --- # NeuralQuant-9B This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit): * [quantumaikr/quantum-v0.01](https://huggingface.co/quantumaikr/quantum-v0.01) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: quantumaikr/quantum-v0.01 layer_range: [0, 32] - sources: - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [24, 32] merge_method: passthrough dtype: bfloat16 ```
mlabonne/Beyonder-4x7b
mlabonne
2023-12-29T15:44:32Z
1,091
9
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "mergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-28T14:52:40Z
--- license: apache-2.0 tags: - moe - mergekit --- # Beyonder-4x7b This model is a Mixure of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models: * [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) * [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) * [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ## 🧩 Configuration ```yaml base_model: openchat/openchat-3.5-1210 gate_mode: hidden experts: - source_model: openchat/openchat-3.5-1210 positive_prompts: - "chat" - "assistant" - "tell me" - "explain" negative_prompts: - "storywriting" - "mathematics" - "reasoning" - "code" - "programming" - source_model: beowolx/CodeNinja-1.0-OpenChat-7B positive_prompts: - "code" - "python" - "javascript" - "programming" - "algorithm" negative_prompts: - "chat" - "assistant" - "storywriting" - "mathematics" - "reasoning" - source_model: maywell/PiVoT-0.1-Starling-LM-RP positive_prompts: - "storywriting" - "write" - "scene" - "story" - "character" negative_prompts: - "chat" - "assistant" - "code" - "programming" - "mathematics" - "reasoning" - source_model: WizardLM/WizardMath-7B-V1.1 positive_prompts: - "reason" - "math" - "mathematics" - "solve" - "count" negative_prompts: - "chat" - "assistant" - "code" - "programming" - "storywriting" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Beyonder-4x7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` Output: ``` A Mixture of Experts (MoE) is a neural network architecture that combines the strengths of multiple expert networks to make predictions. It leverages the idea of ensemble learning, where multiple models work together to improve performance. In each MoE, a gating network is used to select the most relevant expert for the input. The final output is a weighted combination of the expert outputs, determined by the gating network's predictions. ```
realPCH/240103_llama_test_1
realPCH
2024-01-03T07:35:39Z
1,091
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-03T07:30:05Z
Entry not found