Dataset columns (each record below lists these fields in this order, followed by the flattened model card):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | unknown | — |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (class) | 245 values |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string (class) | 48 values |
| createdAt | unknown | — |
| card | string | length 1–901k |
keremberke/yolov8n-scene-classification
keremberke
"2023-02-22T13:00:14Z"
2,783
1
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/indoor-scene-classification", "model-index", "region:us" ]
image-classification
"2023-01-27T01:35:34Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.20 inference: false datasets: - keremberke/indoor-scene-classification model-index: - name: keremberke/yolov8n-scene-classification results: - task: type: image-classification dataset: type: keremberke/indoor-scene-classification name: indoor-scene-classification split: validation metrics: - type: accuracy value: 0.01605 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 0.08793 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-scene-classification" src="https://huggingface.co/keremberke/yolov8n-scene-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['airport_inside', 'artstudio', 'auditorium', 'bakery', 'bookstore', 'bowling', 'buffet', 'casino', 'children_room', 'church_inside', 'classroom', 'cloister', 'closet', 'clothingstore', 'computerroom', 'concert_hall', 'corridor', 'deli', 'dentaloffice', 'dining_room', 'elevator', 'fastfood_restaurant', 'florist', 'gameroom', 'garage', 'greenhouse', 'grocerystore', 'gym', 'hairsalon', 'hospitalroom', 'inside_bus', 'inside_subway', 'jewelleryshop', 'kindergarden', 'kitchen', 'laboratorywet', 'laundromat', 'library', 'livingroom', 'lobby', 'locker_room', 'mall', 'meeting_room', 'movietheater', 'museum', 'nursery', 'office', 'operating_room', 'pantry', 'poolinside', 'prisoncell', 'restaurant', 'restaurant_kitchen', 'shoeshop', 'stairscase', 'studiomusic', 'subway', 'toystore', 'trainstation', 'tv_studio', 'videostore', 'waitingroom', 'warehouse', 'winecellar'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-scene-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
OzzyGT/RealVisXL_V4.0_inpainting
OzzyGT
"2024-04-20T03:43:06Z"
2,783
8
diffusers
[ "diffusers", "license:openrail++", "diffusers:StableDiffusionXLInpaintPipeline", "region:us" ]
image-to-image
"2024-04-20T03:37:36Z"
--- license: openrail++ --- This is the inpainting version of RealVisXL_V4.0 in diffusers "fp16" format. Original model: https://huggingface.co/SG161222/RealVisXL_V4.0
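The card above gives only the pointer to the original model; a minimal diffusers sketch for loading this inpainting checkpoint follows (the image URL, mask URL, and prompt are placeholders, not part of the card):

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# load the fp16 inpainting weights from this repository (assumes a CUDA GPU)
pipe = AutoPipelineForInpainting.from_pretrained(
    "OzzyGT/RealVisXL_V4.0_inpainting",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# placeholder inputs: white pixels in the mask are the region that gets repainted
init_image = load_image("https://example.com/photo.png").resize((1024, 1024))
mask_image = load_image("https://example.com/mask.png").resize((1024, 1024))

result = pipe(
    prompt="a photorealistic wooden bench in a park",  # example prompt
    image=init_image,
    mask_image=mask_image,
    strength=0.99,
).images[0]
result.save("inpainted.png")
```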
hallisky/type-classifier-gpt4-data
hallisky
"2024-05-19T18:39:27Z"
2,783
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-19T18:33:17Z"
--- license: apache-2.0 ---
digiplay/snowpear_anime
digiplay
"2024-03-17T18:00:57Z"
2,782
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-11-15T20:28:06Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/114679/snowpearanime
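The card above links only to the Civitai page; a minimal text-to-image sketch with diffusers (prompt and sampling settings are illustrative, not taken from the card):

```python
import torch
from diffusers import StableDiffusionPipeline

# load this checkpoint (assumes a CUDA GPU; drop .to("cuda") to run on CPU)
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/snowpear_anime",
    torch_dtype=torch.float16,
).to("cuda")

# example prompt; step count and guidance scale are illustrative defaults
image = pipe(
    "1girl, snow-covered pear orchard, soft winter light, anime style",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("snowpear_anime.png")
```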
keremberke/yolov8n-chest-xray-classification
keremberke
"2023-02-22T13:01:21Z"
2,781
3
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/chest-xray-classification", "model-index", "region:us" ]
image-classification
"2023-01-27T22:52:36Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/chest-xray-classification model-index: - name: keremberke/yolov8n-chest-xray-classification results: - task: type: image-classification dataset: type: keremberke/chest-xray-classification name: chest-xray-classification split: validation metrics: - type: accuracy value: 0.9433 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-chest-xray-classification" src="https://huggingface.co/keremberke/yolov8n-chest-xray-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['NORMAL', 'PNEUMONIA'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-chest-xray-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
keremberke/yolov8s-shoe-classification
keremberke
"2023-02-22T13:05:11Z"
2,781
0
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/shoe-classification", "model-index", "region:us" ]
image-classification
"2023-01-30T06:33:06Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/shoe-classification model-index: - name: keremberke/yolov8s-shoe-classification results: - task: type: image-classification dataset: type: keremberke/shoe-classification name: shoe-classification split: validation metrics: - type: accuracy value: 0.68675 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8s-shoe-classification" src="https://huggingface.co/keremberke/yolov8s-shoe-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['adidas', 'converse', 'nike'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8s-shoe-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
JasonFuriosa/test-llama-2-70b-chat-hf
JasonFuriosa
"2024-04-27T19:47:40Z"
2,781
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-27T18:36:30Z"
Entry not found
mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF
mradermacher
"2024-06-26T20:51:48Z"
2,781
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:saishf/Long-Neural-SOVLish-Devil-8B-L3-262K", "endpoints_compatible", "region:us" ]
null
"2024-06-03T03:59:52Z"
--- base_model: saishf/Long-Neural-SOVLish-Devil-8B-L3-262K language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/saishf/Long-Neural-SOVLish-Devil-8B-L3-262K <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF/resolve/main/Long-Neural-SOVLish-Devil-8B-L3-262K.f16.gguf) | 
f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
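For readers following the card's pointer to TheBloke's READMEs, one common way to run a quant from the table above is llama-cpp-python; the sketch below is illustrative only, and the context size, prompt, and token budget are assumptions rather than recommendations from the card:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# fetch one of the quants listed above (Q4_K_M is the "fast, recommended" pick)
gguf_path = hf_hub_download(
    repo_id="mradermacher/Long-Neural-SOVLish-Devil-8B-L3-262K-GGUF",
    filename="Long-Neural-SOVLish-Devil-8B-L3-262K.Q4_K_M.gguf",
)

# context size and sampling settings are assumptions, not taken from the card
llm = Llama(model_path=gguf_path, n_ctx=8192)
out = llm("Briefly explain what GGUF quantization is.", max_tokens=64)
print(out["choices"][0]["text"])
```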
MaziyarPanahi/miqu-1-70b-sf-GPTQ
MaziyarPanahi
"2024-02-04T15:26:08Z"
2,780
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "finetuned", "quantized", "4-bit", "gptq", "en", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us", "conversational", "base_model:152334H/miqu-1-70b-sf", "license:apache-2.0" ]
text-generation
"2024-02-04T15:19:55Z"
--- license: apache-2.0 tags: - finetuned - quantized - 4-bit - gptq - transformers - safetensors - llama - text-generation - en - license:mit - model-index - autotrain_compatible - endpoints_compatible - has_space - text-generation-inference - region:us model_name: miqu-1-70b-sf-GPTQ base_model: 152334H/miqu-1-70b-sf inference: false model_creator: 152334H pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # Description [MaziyarPanahi/miqu-1-70b-sf-GPTQ](https://huggingface.co/MaziyarPanahi/miqu-1-70b-sf-GPTQ) is a quantized (GPTQ) version of [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) ## How to use ### Install the necessary packages ``` pip install --upgrade accelerate auto-gptq transformers ``` ### Example Python code ```python from transformers import AutoTokenizer, pipeline from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig import torch model_id = "MaziyarPanahi/miqu-1-70b-sf-GPTQ" quantize_config = BaseQuantizeConfig( bits=4, group_size=128, desc_act=False ) model = AutoGPTQForCausalLM.from_quantized( model_id, use_safetensors=True, device="cuda:0", quantize_config=quantize_config) tokenizer = AutoTokenizer.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.1 ) outputs = pipe("What is a large language model?") print(outputs[0]["generated_text"]) ```
mradermacher/Zion_Alpha_Instruction_Tuned-GGUF
mradermacher
"2024-06-08T12:58:55Z"
2,780
0
transformers
[ "transformers", "gguf", "en", "base_model:SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-08T12:33:06Z"
--- base_model: SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/SicariusSicariiStuff/Zion_Alpha_Instruction_Tuned <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Zion_Alpha_Instruction_Tuned-GGUF/resolve/main/Zion_Alpha_Instruction_Tuned.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Umbral-v0.4-2-GGUF
mradermacher
"2024-06-16T19:34:43Z"
2,780
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mergekit-community/Umbral-v0.4-2", "endpoints_compatible", "region:us" ]
null
"2024-06-16T15:03:21Z"
--- base_model: mergekit-community/Umbral-v0.4-2 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mergekit-community/Umbral-v0.4-2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-2-GGUF/resolve/main/Umbral-v0.4-2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
model-attribution-challenge/bloom-350m
model-attribution-challenge
"2022-07-21T08:04:09Z"
2,779
1
transformers
[ "transformers", "pytorch", "jax", "bloom", "feature-extraction", "text-generation", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zhs", "zht", "zu", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "license:bigscience-bloom-rail-1.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-07-26T13:16:12Z"
--- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 26.May.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. **Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 350 million parameters: * 24 layers, 16 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). 
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** _In progress._ Current training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/) - Checkpoint size: - Bf16 weights: 329GB - Full checkpoint with optimizer states: 2.3TB - Training throughput: About 150 TFLOP per GPU per second - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Estimated end: 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. **Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. 
#### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. 
<details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) 
</details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). - <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. 
</details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
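The card lists text generation as the direct use but includes no usage snippet; a minimal transformers sketch for this checkpoint (prompt and sampling settings are illustrative):

```python
from transformers import pipeline

# text-generation pipeline for this checkpoint (CPU is fine for a 350M-parameter model)
generator = pipeline("text-generation", model="model-attribution-challenge/bloom-350m")

out = generator("The city of Paris is", max_new_tokens=30, do_sample=True, top_p=0.9)
print(out[0]["generated_text"])
```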
Azure99/blossom-v5.1-34b
Azure99
"2024-07-01T14:26:31Z"
2,779
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "en", "dataset:Azure99/blossom-chat-v3", "dataset:Azure99/blossom-math-v4", "dataset:Azure99/blossom-wizard-v3", "dataset:Azure99/blossom-orca-v3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-19T16:21:42Z"
--- license: apache-2.0 datasets: - Azure99/blossom-chat-v3 - Azure99/blossom-math-v4 - Azure99/blossom-wizard-v3 - Azure99/blossom-orca-v3 language: - zh - en --- # **BLOSSOM-v5.1-34b** [💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/) ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Yi-1.5-34B pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used 40K Wizard, 40K Orca, 10K Math single-turn instruction datasets, training for 1 epoch; the second stage used 10K Blossom chat multi-turn dialogue dataset, and 10% randomly sampled data from the first stage, training for 3 epochs. ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: ``` Multi-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today?<|endoftext|> |Human|: Generate a random number using python |Bot|: ``` Note: At the end of the Bot's output in the historical conversation, append a `<|endoftext|>`.
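The card shows the dialogue-continuation prompt format but no loading code; a minimal transformers sketch assuming that format (line breaks are reconstructed from the flattened card, and the generation settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Azure99/blossom-v5.1-34b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# single-turn prompt in the dialogue-continuation format shown above
prompt = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions.\n"
    "|Human|: hello\n"
    "|Bot|: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```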
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf
RichardErkhov
"2024-06-23T10:00:39Z"
2,779
0
null
[ "gguf", "region:us" ]
null
"2024-06-22T23:42:52Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-intermediate-step-480k-1T - GGUF - Model creator: https://huggingface.co/TinyLlama/ - Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-480k-1T/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q4_K.gguf) | Q4_K | 0.62GB | | 
[TinyLlama-1.1B-intermediate-step-480k-1T.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyLlama-1.1B-intermediate-step-480k-1T.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-480k-1T-gguf/blob/main/TinyLlama-1.1B-intermediate-step-480k-1T.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: apache-2.0 datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata language: - en --- <div align="center"> # TinyLlama-1.1B </div> https://github.com/jzhang38/TinyLlama The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01. <div align="center"> <img src="./TinyLlama_logo.png" width="300"/> </div> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. #### This Model This is an intermediate checkpoint with 480K steps and 1007B tokens. #### How to use You will need the transformers>=4.31 Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information. 
```python from transformers import AutoTokenizer import transformers import torch model = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.', do_sample=True, top_k=10, num_return_sequences=1, repetition_penalty=1.5, eos_token_id=tokenizer.eos_token_id, max_length=500, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ```
zhayunduo/roberta-base-stocktwits-finetuned
zhayunduo
"2023-04-22T03:45:03Z"
2,778
19
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "finance", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-04-02T13:48:34Z"
--- license: apache-2.0 pipeline_tag: text-classification language: - en metrics: - accuracy library_name: transformers tags: - finance --- ## **Sentiment Inferencing model for stock related commments** #### *A project by NUS ISS students Frank Cao, Gerong Zhang, Jiaqi Yao, Sikai Ni, Yunduo Zhang* <br /> ### Description This model is fine tuned with roberta-base model on 3200000 comments from stocktwits, with the user labeled tags 'Bullish' or 'Bearish' try something that the individual investors may say on the investment forum on the inference API, for example, try 'red' and 'green'. [code on github](https://github.com/Gitrexx/PLPPM_Sentiment_Analysis_via_Stocktwits/tree/main/SentimentEngine) <br /> ### Training information - batch size 32 - learning rate 2e-5 | | Train loss | Validation loss | Validation accuracy | | ----------- | ----------- | ---------------- | ------------------- | | epoch1 | 0.3495 | 0.2956 | 0.8679 | | epoch2 | 0.2717 | 0.2235 | 0.9021 | | epoch3 | 0.2360 | 0.1875 | 0.9210 | | epoch4 | 0.2106 | 0.1603 | 0.9343 | <br /> # How to use ```python from transformers import RobertaForSequenceClassification, RobertaTokenizer from transformers import pipeline import pandas as pd import emoji # the model was trained upon below preprocessing def process_text(texts): # remove URLs texts = re.sub(r'https?://\S+', "", texts) texts = re.sub(r'www.\S+', "", texts) # remove ' texts = texts.replace('&#39;', "'") # remove symbol names texts = re.sub(r'(\#)(\S+)', r'hashtag_\2', texts) texts = re.sub(r'(\$)([A-Za-z]+)', r'cashtag_\2', texts) # remove usernames texts = re.sub(r'(\@)(\S+)', r'mention_\2', texts) # demojize texts = emoji.demojize(texts, delimiters=("", " ")) return texts.strip() tokenizer_loaded = RobertaTokenizer.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned') model_loaded = RobertaForSequenceClassification.from_pretrained('zhayunduo/roberta-base-stocktwits-finetuned') nlp = pipeline("text-classification", model=model_loaded, tokenizer=tokenizer_loaded) sentences = pd.Series(['just buy','just sell it', 'entity rocket to the sky!', 'go down','even though it is going up, I still think it will not keep this trend in the near future']) # sentences = list(sentences.apply(process_text)) # if input text contains https, @ or # or $ symbols, better apply preprocess to get a more accurate result sentences = list(sentences) results = nlp(sentences) print(results) # 2 labels, label 0 is bearish, label 1 is bullish ```
rinna/japanese-gpt-1b
rinna
"2024-04-03T07:17:07Z"
2,775
93
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "ja", "japanese", "gpt", "lm", "nlp", "dataset:cc100", "dataset:wikipedia", "dataset:c4", "arxiv:2404.01657", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: ja thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png tags: - ja - japanese - gpt - text-generation - lm - nlp license: mit datasets: - cc100 - wikipedia - c4 widget: - text: "西田幾多郎は、" --- # japanese-gpt-1b ![rinna-icon](./rinna.png) This repository provides a 1.3B-parameter Japanese GPT model. The model was trained by [rinna Co., Ltd.](https://corp.rinna.co.jp/) # How to use the model ~~~~ import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-1b", use_fast=False) model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-1b") if torch.cuda.is_available(): model = model.to("cuda") text = "西田幾多郎は、" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_length=100, min_length=100, do_sample=True, top_k=500, top_p=0.95, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id, bad_words_ids=[[tokenizer.unk_token_id]] ) output = tokenizer.decode(output_ids.tolist()[0]) print(output) # sample output: 西田幾多郎は、その主著の「善の研究」などで、人間の内面に自然とその根源があると指摘し、その根源的な性格は、この西田哲学を象徴しているとして、カントの「純粋理性批判」と「判断力批判」を対比して捉えます。それは、「人が理性的存在であるかぎりにおいて、人はその当人に固有な道徳的に自覚された善悪の基準を持っている」とするもので、この理性的な善悪の観念を否定するのがカントの ~~~~ # Model architecture A 24-layer, 2048-hidden-size transformer-based language model. # Training The model was trained on [Japanese C4](https://huggingface.co/datasets/allenai/c4), [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective. It reaches around 14 perplexity on a chosen validation set from the same data. # Tokenization The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script, and then augmented with emojis and symbols. # How to cite ~~~ @misc{rinna-japanese-gpt-1b, title = {rinna/japanese-gpt-1b}, author = {Zhao, Tianyu and Sawada, Kei} url = {https://huggingface.co/rinna/japanese-gpt-1b}, } @inproceedings{sawada2024release, title = {Release of Pre-Trained Models for the {J}apanese Language}, author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh}, booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)}, month = {5}, year = {2024}, url = {https://arxiv.org/abs/2404.01657}, } ~~~ # Licenese [The MIT license](https://opensource.org/licenses/MIT)
jacobhoffmann/codegemma-1.1-2b-GGUF
jacobhoffmann
"2024-06-05T10:42:43Z"
2,775
0
null
[ "gguf", "region:us" ]
null
"2024-06-05T10:14:08Z"
Entry not found
sismetanin/rubert-ru-sentiment-rusentiment
sismetanin
"2021-05-20T06:11:34Z"
2,774
6
transformers
[ "transformers", "pytorch", "jax", "bert", "text-classification", "sentiment analysis", "Russian", "ru", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-03-02T23:29:05Z"
--- language: - ru tags: - sentiment analysis - Russian --- ## RuBERT-Base-ru-sentiment-RuSentiment RuBERT-ru-sentiment-RuSentiment is a [RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased) model fine-tuned on [RuSentiment dataset](https://github.com/text-machine-lab/rusentiment) of general-domain Russian-language posts from the largest Russian social network, VKontakte. <table> <thead> <tr> <th rowspan="4">Model</th> <th rowspan="4">Score<br></th> <th rowspan="4">Rank</th> <th colspan="12">Dataset</th> </tr> <tr> <td colspan="6">SentiRuEval-2016<br></td> <td colspan="2" rowspan="2">RuSentiment</td> <td rowspan="2">KRND</td> <td rowspan="2">LINIS Crowd</td> <td rowspan="2">RuTweetCorp</td> <td rowspan="2">RuReviews</td> </tr> <tr> <td colspan="3">TC</td> <td colspan="3">Banks</td> </tr> <tr> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>micro F1</td> <td>macro F1</td> <td>F1</td> <td>wighted</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> <td>F1</td> </tr> </thead> <tbody> <tr> <td>SOTA</td> <td>n/s</td> <td></td> <td>76.71</td> <td>66.40</td> <td>70.68</td> <td>67.51</td> <td>69.53</td> <td>74.06</td> <td>78.50</td> <td>n/s</td> <td>73.63</td> <td>60.51</td> <td>83.68</td> <td>77.44</td> </tr> <tr> <td>XLM-RoBERTa-Large</td> <td>76.37</td> <td>1</td> <td>82.26</td> <td>76.36</td> <td>79.42</td> <td>76.35</td> <td>76.08</td> <td>80.89</td> <td>78.31</td> <td>75.27</td> <td>75.17</td> <td>60.03</td> <td>88.91</td> <td>78.81</td> </tr> <tr> <td>SBERT-Large</td> <td>75.43</td> <td>2</td> <td>78.40</td> <td>71.36</td> <td>75.14</td> <td>72.39</td> <td>71.87</td> <td>77.72</td> <td>78.58</td> <td>75.85</td> <td>74.20</td> <td>60.64</td> <td>88.66</td> <td>77.41</td> </tr> <tr> <td>MBARTRuSumGazeta</td> <td>74.70</td> <td>3</td> <td>76.06</td> <td>68.95</td> <td>73.04</td> <td>72.34</td> <td>71.93</td> <td>77.83</td> <td>76.71</td> <td>73.56</td> <td>74.18</td> <td>60.54</td> <td>87.22</td> <td>77.51</td> </tr> <tr> <td>Conversational RuBERT</td> <td>74.44</td> <td>4</td> <td>76.69</td> <td>69.09</td> <td>73.11</td> <td>69.44</td> <td>68.68</td> <td>75.56</td> <td>77.31</td> <td>74.40</td> <td>73.10</td> <td>59.95</td> <td>87.86</td> <td>77.78</td> </tr> <tr> <td>LaBSE</td> <td>74.11</td> <td>5</td> <td>77.00</td> <td>69.19</td> <td>73.55</td> <td>70.34</td> <td>69.83</td> <td>76.38</td> <td>74.94</td> <td>70.84</td> <td>73.20</td> <td>59.52</td> <td>87.89</td> <td>78.47</td> </tr> <tr> <td>XLM-RoBERTa-Base</td> <td>73.60</td> <td>6</td> <td>76.35</td> <td>69.37</td> <td>73.42</td> <td>68.45</td> <td>67.45</td> <td>74.05</td> <td>74.26</td> <td>70.44</td> <td>71.40</td> <td>60.19</td> <td>87.90</td> <td>78.28</td> </tr> <tr> <td>RuBERT</td> <td>73.45</td> <td>7</td> <td>74.03</td> <td>66.14</td> <td>70.75</td> <td>66.46</td> <td>66.40</td> <td>73.37</td> <td>75.49</td> <td>71.86</td> <td>72.15</td> <td>60.55</td> <td>86.99</td> <td>77.41</td> </tr> <tr> <td>MBART-50-Large-Many-to-Many</td> <td>73.15</td> <td>8</td> <td>75.38</td> <td>67.81</td> <td>72.26</td> <td>67.13</td> <td>66.97</td> <td>73.85</td> <td>74.78</td> <td>70.98</td> <td>71.98</td> <td>59.20</td> <td>87.05</td> <td>77.24</td> </tr> <tr> <td>SlavicBERT</td> <td>71.96</td> <td>9</td> <td>71.45</td> <td>63.03</td> <td>68.44</td> <td>64.32</td> <td>63.99</td> <td>71.31</td> <td>72.13</td> <td>67.57</td> <td>72.54</td> <td>58.70</td> <td>86.43</td> <td>77.16</td> </tr> <tr> <td>EnRuDR-BERT</td> <td>71.51</td> <td>10</td> <td>72.56</td> <td>64.74</td> <td>69.07</td> <td>61.44</td> <td>60.21</td> <td>68.34</td> 
<td>74.19</td> <td>69.94</td> <td>69.33</td> <td>56.55</td> <td>87.12</td> <td>77.95</td> </tr> <tr> <td>RuDR-BERT</td> <td>71.14</td> <td>11</td> <td>72.79</td> <td>64.23</td> <td>68.36</td> <td>61.86</td> <td>60.92</td> <td>68.48</td> <td>74.65</td> <td>70.63</td> <td>68.74</td> <td>54.45</td> <td>87.04</td> <td>77.91</td> </tr> <tr> <td>MBART-50-Large</td> <td>69.46</td> <td>12</td> <td>70.91</td> <td>62.67</td> <td>67.24</td> <td>61.12</td> <td>60.25</td> <td>68.41</td> <td>72.88</td> <td>68.63</td> <td>70.52</td> <td>46.39</td> <td>86.48</td> <td>77.52</td> </tr> </tbody> </table> The table shows per-task scores and a macro-average of those scores to determine a model’s position on the leaderboard. For datasets with multiple evaluation metrics (e.g., macro F1 and weighted F1 for RuSentiment), we use an unweighted average of the metrics as the score for the task when computing the overall macro-average. The same strategy for comparing models’ results was applied in the GLUE benchmark. ## Citation If you find this repository helpful, feel free to cite our publication: ``` @article{Smetanin2021Deep, author = {Sergey Smetanin and Mikhail Komarov}, title = {Deep transfer learning baselines for sentiment analysis in Russian}, journal = {Information Processing & Management}, volume = {58}, number = {3}, pages = {102484}, year = {2021}, issn = {0306-4573}, doi = {10.1016/j.ipm.2020.102484} } ``` Dataset: ``` @inproceedings{rogers2018rusentiment, title={RuSentiment: An enriched sentiment analysis dataset for social media in Russian}, author={Rogers, Anna and Romanov, Alexey and Rumshisky, Anna and Volkova, Svitlana and Gronas, Mikhail and Gribov, Alex}, booktitle={Proceedings of the 27th international conference on computational linguistics}, pages={755--763}, year={2018} } ```
keremberke/yolov8s-pcb-defect-segmentation
keremberke
"2023-02-22T13:02:28Z"
2,774
1
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-segmentation", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pcb-defect-segmentation", "model-index", "region:us" ]
image-segmentation
"2023-01-28T07:39:17Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-segmentation - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pcb-defect-segmentation model-index: - name: keremberke/yolov8s-pcb-defect-segmentation results: - task: type: image-segmentation dataset: type: keremberke/pcb-defect-segmentation name: pcb-defect-segmentation split: validation metrics: - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.51452 # min: 0.0 - max: 1.0 name: mAP@0.5(box) - type: precision # since mAP@0.5 is not available on hf.co/metrics value: 0.49054 # min: 0.0 - max: 1.0 name: mAP@0.5(mask) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-pcb-defect-segmentation" src="https://huggingface.co/keremberke/yolov8s-pcb-defect-segmentation/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Dry_joint', 'Incorrect_installation', 'PCB_damage', 'Short_circuit'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-pcb-defect-segmentation') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) print(results[0].masks) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
TencentARC/t2iadapter_zoedepth_sd15v1
TencentARC
"2023-07-31T10:48:46Z"
2,774
1
diffusers
[ "diffusers", "art", "t2i-adapter", "controlnet", "stable-diffusion", "image-to-image", "arxiv:2302.08453", "base_model:runwayml/stable-diffusion-v1-5", "license:apache-2.0", "region:us" ]
image-to-image
"2023-07-14T19:02:00Z"
--- license: apache-2.0 base_model: runwayml/stable-diffusion-v1-5 tags: - art - t2i-adapter - controlnet - stable-diffusion - image-to-image --- # T2I Adapter - Zoedepth T2I Adapter is a network providing additional conditioning to stable diffusion. Each t2i checkpoint takes a different type of conditioning as input and is used with a specific base stable diffusion checkpoint. This checkpoint provides conditioning on zoedepth depth estimation for the stable diffusion 1.5 checkpoint. ## Model Details - **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** Apache 2.0 - **Resources for more information:** [GitHub Repository](https://github.com/TencentARC/T2I-Adapter), [Paper](https://arxiv.org/abs/2302.08453). - **Cite as:** @misc{ title={T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models}, author={Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie}, year={2023}, eprint={2302.08453}, archivePrefix={arXiv}, primaryClass={cs.CV} } ### Checkpoints | Model Name | Control Image Overview| Control Image Example | Generated Image Example | |---|---|---|---| |[TencentARC/t2iadapter_color_sd14v1](https://huggingface.co/TencentARC/t2iadapter_color_sd14v1)<br/> *Trained with spatial color palette* | A image with 8x8 color palette.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_sample_output.png"/></a>| |[TencentARC/t2iadapter_canny_sd14v1](https://huggingface.co/TencentARC/t2iadapter_canny_sd14v1)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/canny_sample_output.png"/></a>| |[TencentARC/t2iadapter_sketch_sd14v1](https://huggingface.co/TencentARC/t2iadapter_sketch_sd14v1)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/sketch_sample_output.png"/></a>| 
|[TencentARC/t2iadapter_depth_sd14v1](https://huggingface.co/TencentARC/t2iadapter_depth_sd14v1)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/depth_sample_output.png"/></a>| |[TencentARC/t2iadapter_openpose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_openpose_sd14v1)<br/> *Trained with OpenPose bone image* | A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/openpose_sample_output.png"/></a>| |[TencentARC/t2iadapter_keypose_sd14v1](https://huggingface.co/TencentARC/t2iadapter_keypose_sd14v1)<br/> *Trained with mmpose skeleton image* | A [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/keypose_sample_output.png"/></a>| |[TencentARC/t2iadapter_seg_sd14v1](https://huggingface.co/TencentARC/t2iadapter_seg_sd14v1)<br/>*Trained with semantic segmentation* | An [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_input.png"/></a>|<a href="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"><img width="64" src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/seg_sample_output.png"/></a> | |[TencentARC/t2iadapter_canny_sd15v2](https://huggingface.co/TencentARC/t2iadapter_canny_sd15v2)|| |[TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2)|| |[TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2)|| |[TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1)|| ## Example 1. Dependencies ```sh pip install diffusers transformers matplotlib ``` 2. 
Run code: ```python from PIL import Image import torch import numpy as np import matplotlib from diffusers import T2IAdapter, StableDiffusionAdapterPipeline def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None): """Converts a depth map to a color image. Args: value (torch.Tensor, numpy.ndarry): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None. vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None. cmap (str, optional): matplotlib colormap to use. Defaults to 'magma_r'. invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99. invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None. background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255). gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False. value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None. Returns: numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4) """ if isinstance(value, torch.Tensor): value = value.detach().cpu().numpy() value = value.squeeze() if invalid_mask is None: invalid_mask = value == invalid_val mask = np.logical_not(invalid_mask) # normalize vmin = np.percentile(value[mask],2) if vmin is None else vmin vmax = np.percentile(value[mask],85) if vmax is None else vmax if vmin != vmax: value = (value - vmin) / (vmax - vmin) # vmin..vmax else: # Avoid 0-division value = value * 0. # squeeze last dim if it exists # grey out the invalid values value[invalid_mask] = np.nan cmapper = matplotlib.cm.get_cmap(cmap) if value_transform: value = value_transform(value) # value = value / value.max() value = cmapper(value, bytes=True) # (nxmx4) img = value[...] img[invalid_mask] = background_color if gamma_corrected: img = img / 255 img = np.power(img, 2.2) img = img * 255 img = img.astype(np.uint8) return img model = torch.hub.load("isl-org/ZoeDepth", "ZoeD_N", pretrained=True) img = Image.open('./images/zoedepth_in.png') out = model.infer_pil(img) zoedepth_image = Image.fromarray(colorize(out)).convert('RGB') zoedepth_image.save('images/zoedepth.png') adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_zoedepth_sd15v1", torch_dtype=torch.float16) pipe = StableDiffusionAdapterPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16" ) pipe.to('cuda') zoedepth_image_out = pipe(prompt="motorcycle", image=zoedepth_image).images[0] zoedepth_image_out.save('images/zoedepth_out.png') ``` ![zoedepth_in](./images/zoedepth_in.png) ![zoedepth](./images/zoedepth.png) ![zoedepth_out](./images/zoedepth_out.png)
Lewdiculous/Fimbulvetr-11B-v2-GGUF-IQ-Imatrix
Lewdiculous
"2024-06-03T06:09:28Z"
2,773
13
null
[ "gguf", "roleplay", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-06-03T04:33:39Z"
--- license: cc-by-nc-4.0 inference: false tags: - roleplay --- Model imatrix quants as requested at [**#36**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/36) for [**Sao10K/Fimbulvetr-11B-v2**](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2). <br> **Prompt Format:** Alpaca or Vicuna. An absolute **classic** and highly popular roleplay model, now with newer quants as requested directly. *Imatrix data was generated from the FP16 GGUF, and the quant conversions were made from it as well, since the original model weights are already FP16.* <br> *Using the latest version of llama.cpp at the time - b2774.* ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/jk8ImzfmM87y8e2FYT7Wv.webp)
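For reference, a minimal usage sketch with `llama-cpp-python` (this is not part of the original card; the quant filename is illustrative — download any of the GGUF files from this repo and adapt the Alpaca-style prompt to your use case):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point this at whichever GGUF quant you downloaded from this repo (filename is illustrative).
llm = Llama(
    model_path="Fimbulvetr-11B-v2-Q4_K_M-imat.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available; use 0 for CPU-only
)

# Alpaca-style prompt, matching the recommended prompt format.
prompt = (
    "### Instruction:\n"
    "Write a short in-character greeting from a tavern keeper.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```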
Yntec/a-ZovyaRPGV3VAE
Yntec
"2023-08-03T16:21:10Z"
2,772
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Zovya", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-03T16:04:42Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Zovya --- # A-Zovya RPG Artist Tools V3 VAE Original page: https://civitai.com/models/8124?modelVersionId=87886
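A minimal text-to-image sketch with 🤗 Diffusers (not from the original page; the prompt and settings below are purely illustrative):

```python
# pip install diffusers transformers accelerate
import torch
from diffusers import StableDiffusionPipeline

# Load the diffusers-format weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/a-ZovyaRPGV3VAE",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "portrait of an armored RPG adventurer, detailed fantasy illustration",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("adventurer.png")
```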
mradermacher/Umbral-v0.4-1-GGUF
mradermacher
"2024-06-17T02:36:51Z"
2,772
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mergekit-community/Umbral-v0.4-1", "endpoints_compatible", "region:us" ]
null
"2024-06-17T02:05:32Z"
--- base_model: mergekit-community/Umbral-v0.4-1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/mergekit-community/Umbral-v0.4-1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Umbral-v0.4-1-GGUF/resolve/main/Umbral-v0.4-1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Vsukiyaki/Yaki-Dofu-Mix
Vsukiyaki
"2023-12-24T11:07:09Z"
2,771
7
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "ja", "en", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-23T09:26:19Z"
--- license: creativeml-openrail-m language: - ja - en tags: - stable-diffusion - text-to-image --- # Yaki-Dofu-Mix <img src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/Yaki-Dofu-Mix.png" style="width: 768px;"> ## 概要 / Overview - **Yaki-Dofu-Mix**は、アニメ風の画風に特化したマージモデルです。 / **Yaki-Dofu-Mix** is a merge model that specializes in an anime-like painting style. - VAEなしでも鮮やかな色合いで出力されます。 / The output will be vividly tinted without VAE. <hr> ## ライセンス / License <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tbody> <tr> <td class="px-4 text-base text-bold" colspan="2"> <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license"> 修正 CreativeML OpenRAIL-M ライセンス / Modified CreativeML OpenRAIL-M license </a> </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルのクレジットを入れずに使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルを商用の画像生成サービスで利用する</br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> ✅ </span> </td> <td> このモデルを使用したマージモデルを共有する<br> Share merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデル、またはこのモデルをマージしたモデルを販売する</br> Sell this model or merges using this model </td> </tr> <tr class="bg-danger-100"> <td class="align-middle px-2 w-8"> <span style="font-size: 18px;"> 🚫 </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する</br> Have different permissions when sharing merges </td> </tr> </tbody> </table> </div> <hr> ## 推奨設定 / Recommended Settings <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> Steps: 20 ~ 60 Sampler: DPM++ 3M SDE Exponential CFG scale: 7.5 Denoising strength: 0.55 Hires steps: 20 Hires upscaler: R-ESRGAN 4x+ Anime6B Clip skip: 2 </pre> Negative: <pre style="margin: 1em 0; padding: 1em; border-radius: 5px; white-space: pre-line;"> (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, </pre> <hr> ## 例 / Examples <div class="flex justify-center"> <div class="container mx-auto px-2"> <div class="flex flex-wrap min-w-min items-baseline"> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample01.png" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> (solo:1.2),cute girl,(pink short hair),(casual wavy hair:1.3), blunt bangs,blush,head tilt,upper body,black cap,oversized black t-shirt,simple background,white background,cowboy shot,shadow,choker, Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, Steps: 60, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7.5, Seed: 1452497008, Size: 768x768, Denoising 
strength: 0.55, Clip skip: 2, Hires upscale: 2.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample02.png" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> night,cute girl against wall in the downtown,solo,from side,pink hair,(casual wavy hair:1.3),blunt bangs,duffel coat,plaid skirt,scarf,blush,(depth of field:1.3),(night view),dynamic angle,outdoor,cowboy shot, Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, Steps: 60, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7.5, Seed: 3362678745, Size: 760x768, Denoising strength: 0.55, Clip skip: 2, Hires upscale: 2.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="flex-1"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample03.png" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> ((solo:1.2)),cute girl sitting on bench in garden,frilled dirndl,from above,looking up,cobblestone pavement,aqua hair,fine bob cut,(hair over one eye),(dappled sunlight:1.2),blurry,(depth of field:1.1),head tilt,:o,(petals),tree,butterfly Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, Steps: 60, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7.5, Seed: 617162279, Size: 760x768, Denoising strength: 0.55, Clip skip: 2, Hires upscale: 2.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, </pre> </div> </div> <div class="p-1 flex-1" style="width: 50%; min-width: 320px; flex-basis: 50%;"> <div class="w-full"> <img alt="gallery" class="block h-full w-full rounded-t-lg object-contain object-center" src="https://huggingface.co/Vsukiyaki/Yaki-Dofu-Mix/resolve/main/imgs/sample04.png" loading="lazy" /> </div> <div class="w-full"> <pre class="w-full" style="white-space: pre-line;"> cute girl standing on a beautiful beach,white t-shirt,(brown hair:1.3,brown eyes),(casual wavy long hair:1.3),splash,looking at viewer,upper body,sunset view,chromatic aberration,(depth of field:1.3),cinematic lighting,serenity,wind Negative prompt: (easynegative:1.0),(worst quality,low quality:1.2),(bad anatomy:1.4),(realistic:1.1),nose,lips,adult,fat,sad, (inaccurate limb:1.2),extra digit,fewer digits,six fingers,(monochrome:0.95),verybadimagenegative_v1.3, Steps: 60, Sampler: DPM++ 3M SDE Exponential, CFG scale: 7.5, Seed: 1118141335, Size: 768x768, Denoising strength: 0.55, Clip skip: 2, Hires upscale: 2.5, Hires steps: 20, Hires upscaler: R-ESRGAN 4x+ Anime6B, </pre> </div> </div> </div> </div> </div> <hr> Twiter: [@Vsukiyaki_AIArt](https://twitter.com/Vsukiyaki_AIArt) <a href="https://twitter.com/Vsukiyaki_AIArt" class="mb-2 inline-block rounded px-6 py-2.5 text-white shadow-md" style="background-color: #1da1f2"> <svg 
xmlns="http://www.w3.org/2000/svg" class="h-3.5 w-3.5" fill="currentColor" viewBox="0 0 24 24"> <path d="M24 4.557c-.883.392-1.832.656-2.828.775 1.017-.609 1.798-1.574 2.165-2.724-.951.564-2.005.974-3.127 1.195-.897-.957-2.178-1.555-3.594-1.555-3.179 0-5.515 2.966-4.797 6.045-4.091-.205-7.719-2.165-10.148-5.144-1.29 2.213-.669 5.108 1.523 6.574-.806-.026-1.566-.247-2.229-.616-.054 2.281 1.581 4.415 3.949 4.89-.693.188-1.452.232-2.224.084.626 1.956 2.444 3.379 4.6 3.419-2.07 1.623-4.678 2.348-7.29 2.04 2.179 1.397 4.768 2.212 7.548 2.212 9.142 0 14.307-7.721 13.995-14.646.962-.695 1.797-1.562 2.457-2.549z" /> </svg> </a>
Helsinki-NLP/opus-mt-tc-big-en-tr
Helsinki-NLP
"2023-08-16T12:10:49Z"
2,770
22
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "opus-mt-tc", "en", "tr", "license:cc-by-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
"2022-04-13T15:11:47Z"
--- language: - en - tr tags: - translation - opus-mt-tc license: cc-by-4.0 model-index: - name: opus-mt-tc-big-en-tr results: - task: name: Translation eng-tur type: translation args: eng-tur dataset: name: flores101-devtest type: flores_101 args: eng tur devtest metrics: - name: BLEU type: bleu value: 31.4 - task: name: Translation eng-tur type: translation args: eng-tur dataset: name: newsdev2016 type: newsdev2016 args: eng-tur metrics: - name: BLEU type: bleu value: 21.9 - task: name: Translation eng-tur type: translation args: eng-tur dataset: name: tatoeba-test-v2021-08-07 type: tatoeba_mt args: eng-tur metrics: - name: BLEU type: bleu value: 42.3 - task: name: Translation eng-tur type: translation args: eng-tur dataset: name: newstest2016 type: wmt-2016-news args: eng-tur metrics: - name: BLEU type: bleu value: 23.4 - task: name: Translation eng-tur type: translation args: eng-tur dataset: name: newstest2017 type: wmt-2017-news args: eng-tur metrics: - name: BLEU type: bleu value: 25.4 - task: name: Translation eng-tur type: translation args: eng-tur dataset: name: newstest2018 type: wmt-2018-news args: eng-tur metrics: - name: BLEU type: bleu value: 22.6 --- # opus-mt-tc-big-en-tr Neural machine translation model for translating from English (en) to Turkish (tr). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) 
``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ``` ## Model info * Release: 2022-02-25 * source language(s): eng * target language(s): tur * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-02-25.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.zip) * more information released models: [OPUS-MT eng-tur README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-tur/README.md) ## Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ "I know Tom didn't want to eat that.", "On Sundays, we would get up early and go fishing." ] model_name = "pytorch-models/opus-mt-tc-big-en-tr" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) ) # expected output: # Tom'un bunu yemek istemediğini biliyorum. # Pazar günleri erkenden kalkıp balık tutmaya giderdik. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-tr") print(pipe("I know Tom didn't want to eat that.")) # expected output: Tom'un bunu yemek istemediğini biliyorum. 
``` ## Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-02-25.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-tur/opusTCv20210807+bt_transformer-big_2022-02-25.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-tur | tatoeba-test-v2021-08-07 | 0.68726 | 42.3 | 13907 | 84364 | | eng-tur | flores101-devtest | 0.62829 | 31.4 | 1012 | 20253 | | eng-tur | newsdev2016 | 0.58947 | 21.9 | 1001 | 15958 | | eng-tur | newstest2016 | 0.57624 | 23.4 | 3000 | 50782 | | eng-tur | newstest2017 | 0.58858 | 25.4 | 3007 | 51977 | | eng-tur | newstest2018 | 0.57848 | 22.6 | 3000 | 53731 | ## Acknowledgements The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland. ## Model conversion info * transformers version: 4.16.2 * OPUS-MT git hash: 3405783 * port time: Wed Apr 13 18:11:39 EEST 2022 * port machine: LM0-400-22516.local
jpodivin/rwkv-4-pile-169m-GGUF
jpodivin
"2024-04-20T12:24:51Z"
2,770
0
null
[ "gguf", "text-generation", "dataset:EleutherAI/pile", "base_model:BlinkDL/rwkv-4-pile-169m", "license:apache-2.0", "region:us" ]
text-generation
"2024-04-20T11:53:13Z"
--- license: apache-2.0 datasets: - EleutherAI/pile pipeline_tag: text-generation inference: false model_type: RWKV base_model: BlinkDL/rwkv-4-pile-169m --- # rwkv-4-pile-169m-GGUF RWKV-4-pile-169m model quantized with [rwkv.cpp](https://github.com/RWKV/rwkv.cpp) at commit d8f13ffe231712c11427b180cce2fed76757b38d
Salesforce/codegen-2B-multi
Salesforce
"2022-10-03T16:18:49Z"
2,769
34
transformers
[ "transformers", "pytorch", "codegen", "text-generation", "arxiv:2203.13474", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-04-11T23:18:25Z"
--- license: bsd-3-clause --- # CodeGen (CodeGen-Multi 2B) ## Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-Multi 2B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 2B* and further pre-trained on a dataset of multiple programming languages, and "2B" refers to the number of trainable parameters. ## Training data This checkpoint (CodeGen-Multi 2B) was first initialized with *CodeGen-NL 2B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python. ## Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models is trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Evaluation results We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details. ## Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them. However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well. ## How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-multi") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ``` ## BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
timm/lcnet_100.ra2_in1k
timm
"2023-04-27T22:49:02Z"
2,769
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2110.00476", "arxiv:2109.15099", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-16T05:37:41Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for lcnet_100.ra2_in1k An LCNet image classification model. Trained on ImageNet-1k in `timm` using the recipe template described below. Recipe details: * RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476). * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging * Step (exponential decay w/ staircase) LR schedule with warmup ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 3.0 - GMACs: 0.2 - Activations (M): 2.5 - Image size: 224 x 224 - **Papers:** - PP-LCNet: A Lightweight CPU Convolutional Neural Network: https://arxiv.org/abs/2109.15099 - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476 - **Dataset:** ImageNet-1k - **Original:** https://github.com/huggingface/pytorch-image-models ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('lcnet_100.ra2_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'lcnet_100.ra2_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 64, 56, 56]) # torch.Size([1, 128, 28, 28]) # torch.Size([1, 256, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'lcnet_100.ra2_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore 
the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{cui2021pp, title={PP-LCNet: A lightweight CPU convolutional neural network}, author={Cui, Cheng and Gao, Tingquan and Wei, Shengyu and Du, Yuning and Guo, Ruoyu and Dong, Shuilong and Lu, Bin and Zhou, Ying and Lv, Xueying and Liu, Qiwen and others}, journal={arXiv preprint arXiv:2109.15099}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ```
Haary/USK_Mistral_7B_Unsloth_GGUF
Haary
"2024-06-29T11:29:20Z"
2,769
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "id", "dataset:Haary/QA_USK_dataset", "base_model:Ichsan2895/Merak-7B-v4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-29T10:48:15Z"
--- base_model: Ichsan2895/Merak-7B-v4 language: - en - id license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf datasets: - Haary/QA_USK_dataset --- # Uploaded GGUF Model - **Developed by:** Haary - **License:** apache-2.0 - **Finetuned from Indonesian model:** [Ichsan2895/Merak-7B-v4](https://huggingface.co/Ichsan2895/Merak-7B-v4) - **Base model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
CAiRE/UniVaR-lambda-80
CAiRE
"2024-06-14T17:56:29Z"
2,768
0
sentence-transformers
[ "sentence-transformers", "safetensors", "nomic_bert", "feature-extraction", "sentence-similarity", "mteb", "transformers", "transformers.js", "custom_code", "en", "arxiv:2402.01613", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-06-14T17:55:40Z"
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - feature-extraction - sentence-similarity - mteb - transformers - transformers.js model-index: - name: epoch_0_model results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 76.8507462686567 - type: ap value: 40.592189159090495 - type: f1 value: 71.01634655512476 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 91.51892500000001 - type: ap value: 88.50346762975335 - type: f1 value: 91.50342077459624 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 47.364 - type: f1 value: 46.72708080922794 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 25.178 - type: map_at_10 value: 40.244 - type: map_at_100 value: 41.321999999999996 - type: map_at_1000 value: 41.331 - type: map_at_3 value: 35.016999999999996 - type: map_at_5 value: 37.99 - type: mrr_at_1 value: 25.605 - type: mrr_at_10 value: 40.422000000000004 - type: mrr_at_100 value: 41.507 - type: mrr_at_1000 value: 41.516 - type: mrr_at_3 value: 35.23 - type: mrr_at_5 value: 38.15 - type: ndcg_at_1 value: 25.178 - type: ndcg_at_10 value: 49.258 - type: ndcg_at_100 value: 53.776 - type: ndcg_at_1000 value: 53.995000000000005 - type: ndcg_at_3 value: 38.429 - type: ndcg_at_5 value: 43.803 - type: precision_at_1 value: 25.178 - type: precision_at_10 value: 7.831 - type: precision_at_100 value: 0.979 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 16.121 - type: precision_at_5 value: 12.29 - type: recall_at_1 value: 25.178 - type: recall_at_10 value: 78.307 - type: recall_at_100 value: 97.866 - type: recall_at_1000 value: 99.57300000000001 - type: recall_at_3 value: 48.364000000000004 - type: recall_at_5 value: 61.451 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 45.93034494751465 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 36.64579480054327 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 60.601310529222054 - type: mrr value: 75.04484896451656 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 88.57797718095814 - type: cos_sim_spearman value: 86.47064499110101 - type: euclidean_pearson value: 87.4559602783142 - type: euclidean_spearman value: 86.47064499110101 - type: manhattan_pearson value: 87.7232764230245 - type: manhattan_spearman value: 86.91222131777742 - task: type: Classification dataset: type: mteb/banking77 name: MTEB 
Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 84.5422077922078 - type: f1 value: 84.47657456950589 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.48953561974464 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 32.75995857510105 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.008000000000003 - type: map_at_10 value: 39.51 - type: map_at_100 value: 40.841 - type: map_at_1000 value: 40.973 - type: map_at_3 value: 36.248999999999995 - type: map_at_5 value: 38.096999999999994 - type: mrr_at_1 value: 36.481 - type: mrr_at_10 value: 44.818000000000005 - type: mrr_at_100 value: 45.64 - type: mrr_at_1000 value: 45.687 - type: mrr_at_3 value: 42.036 - type: mrr_at_5 value: 43.782 - type: ndcg_at_1 value: 36.481 - type: ndcg_at_10 value: 45.152 - type: ndcg_at_100 value: 50.449 - type: ndcg_at_1000 value: 52.76499999999999 - type: ndcg_at_3 value: 40.161 - type: ndcg_at_5 value: 42.577999999999996 - type: precision_at_1 value: 36.481 - type: precision_at_10 value: 8.369 - type: precision_at_100 value: 1.373 - type: precision_at_1000 value: 0.186 - type: precision_at_3 value: 18.693 - type: precision_at_5 value: 13.533999999999999 - type: recall_at_1 value: 30.008000000000003 - type: recall_at_10 value: 56.108999999999995 - type: recall_at_100 value: 78.55499999999999 - type: recall_at_1000 value: 93.659 - type: recall_at_3 value: 41.754999999999995 - type: recall_at_5 value: 48.296 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.262 - type: map_at_10 value: 40.139 - type: map_at_100 value: 41.394 - type: map_at_1000 value: 41.526 - type: map_at_3 value: 37.155 - type: map_at_5 value: 38.785 - type: mrr_at_1 value: 38.153 - type: mrr_at_10 value: 46.369 - type: mrr_at_100 value: 47.072 - type: mrr_at_1000 value: 47.111999999999995 - type: mrr_at_3 value: 44.268 - type: mrr_at_5 value: 45.389 - type: ndcg_at_1 value: 38.153 - type: ndcg_at_10 value: 45.925 - type: ndcg_at_100 value: 50.394000000000005 - type: ndcg_at_1000 value: 52.37500000000001 - type: ndcg_at_3 value: 41.754000000000005 - type: ndcg_at_5 value: 43.574 - type: precision_at_1 value: 38.153 - type: precision_at_10 value: 8.796 - type: precision_at_100 value: 1.432 - type: precision_at_1000 value: 0.189 - type: precision_at_3 value: 20.318 - type: precision_at_5 value: 14.395 - type: recall_at_1 value: 30.262 - type: recall_at_10 value: 55.72200000000001 - type: recall_at_100 value: 74.97500000000001 - type: recall_at_1000 value: 87.342 - type: recall_at_3 value: 43.129 - type: recall_at_5 value: 48.336 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 39.951 - type: map_at_10 value: 51.248000000000005 - type: map_at_100 value: 52.188 - type: map_at_1000 value: 52.247 - type: map_at_3 value: 48.211 - type: map_at_5 value: 49.797000000000004 - type: mrr_at_1 
value: 45.329 - type: mrr_at_10 value: 54.749 - type: mrr_at_100 value: 55.367999999999995 - type: mrr_at_1000 value: 55.400000000000006 - type: mrr_at_3 value: 52.382 - type: mrr_at_5 value: 53.649 - type: ndcg_at_1 value: 45.329 - type: ndcg_at_10 value: 56.847 - type: ndcg_at_100 value: 60.738 - type: ndcg_at_1000 value: 61.976 - type: ndcg_at_3 value: 51.59 - type: ndcg_at_5 value: 53.915 - type: precision_at_1 value: 45.329 - type: precision_at_10 value: 8.959 - type: precision_at_100 value: 1.187 - type: precision_at_1000 value: 0.134 - type: precision_at_3 value: 22.612 - type: precision_at_5 value: 15.273 - type: recall_at_1 value: 39.951 - type: recall_at_10 value: 70.053 - type: recall_at_100 value: 86.996 - type: recall_at_1000 value: 95.707 - type: recall_at_3 value: 56.032000000000004 - type: recall_at_5 value: 61.629999999999995 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.566 - type: map_at_10 value: 33.207 - type: map_at_100 value: 34.166000000000004 - type: map_at_1000 value: 34.245 - type: map_at_3 value: 30.94 - type: map_at_5 value: 32.01 - type: mrr_at_1 value: 27.345000000000002 - type: mrr_at_10 value: 35.193000000000005 - type: mrr_at_100 value: 35.965 - type: mrr_at_1000 value: 36.028999999999996 - type: mrr_at_3 value: 32.806000000000004 - type: mrr_at_5 value: 34.021 - type: ndcg_at_1 value: 27.345000000000002 - type: ndcg_at_10 value: 37.891999999999996 - type: ndcg_at_100 value: 42.664 - type: ndcg_at_1000 value: 44.757000000000005 - type: ndcg_at_3 value: 33.123000000000005 - type: ndcg_at_5 value: 35.035 - type: precision_at_1 value: 27.345000000000002 - type: precision_at_10 value: 5.763 - type: precision_at_100 value: 0.859 - type: precision_at_1000 value: 0.108 - type: precision_at_3 value: 13.71 - type: precision_at_5 value: 9.401 - type: recall_at_1 value: 25.566 - type: recall_at_10 value: 50.563 - type: recall_at_100 value: 72.86399999999999 - type: recall_at_1000 value: 88.68599999999999 - type: recall_at_3 value: 37.43 - type: recall_at_5 value: 41.894999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.663 - type: map_at_10 value: 23.552 - type: map_at_100 value: 24.538 - type: map_at_1000 value: 24.661 - type: map_at_3 value: 21.085 - type: map_at_5 value: 22.391 - type: mrr_at_1 value: 20.025000000000002 - type: mrr_at_10 value: 27.643 - type: mrr_at_100 value: 28.499999999999996 - type: mrr_at_1000 value: 28.582 - type: mrr_at_3 value: 25.083 - type: mrr_at_5 value: 26.544 - type: ndcg_at_1 value: 20.025000000000002 - type: ndcg_at_10 value: 28.272000000000002 - type: ndcg_at_100 value: 33.353 - type: ndcg_at_1000 value: 36.454 - type: ndcg_at_3 value: 23.579 - type: ndcg_at_5 value: 25.685000000000002 - type: precision_at_1 value: 20.025000000000002 - type: precision_at_10 value: 5.187 - type: precision_at_100 value: 0.897 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 10.987 - type: precision_at_5 value: 8.06 - type: recall_at_1 value: 16.663 - type: recall_at_10 value: 38.808 - type: recall_at_100 value: 61.305 - type: recall_at_1000 value: 83.571 - type: recall_at_3 value: 25.907999999999998 - type: recall_at_5 value: 31.214 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None 
metrics: - type: map_at_1 value: 27.695999999999998 - type: map_at_10 value: 37.018 - type: map_at_100 value: 38.263000000000005 - type: map_at_1000 value: 38.371 - type: map_at_3 value: 34.226 - type: map_at_5 value: 35.809999999999995 - type: mrr_at_1 value: 32.916000000000004 - type: mrr_at_10 value: 42.067 - type: mrr_at_100 value: 42.925000000000004 - type: mrr_at_1000 value: 42.978 - type: mrr_at_3 value: 39.637 - type: mrr_at_5 value: 41.134 - type: ndcg_at_1 value: 32.916000000000004 - type: ndcg_at_10 value: 42.539 - type: ndcg_at_100 value: 47.873 - type: ndcg_at_1000 value: 50.08200000000001 - type: ndcg_at_3 value: 37.852999999999994 - type: ndcg_at_5 value: 40.201 - type: precision_at_1 value: 32.916000000000004 - type: precision_at_10 value: 7.5840000000000005 - type: precision_at_100 value: 1.199 - type: precision_at_1000 value: 0.155 - type: precision_at_3 value: 17.485 - type: precision_at_5 value: 12.512 - type: recall_at_1 value: 27.695999999999998 - type: recall_at_10 value: 53.638 - type: recall_at_100 value: 76.116 - type: recall_at_1000 value: 91.069 - type: recall_at_3 value: 41.13 - type: recall_at_5 value: 46.872 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.108 - type: map_at_10 value: 33.372 - type: map_at_100 value: 34.656 - type: map_at_1000 value: 34.768 - type: map_at_3 value: 30.830999999999996 - type: map_at_5 value: 32.204 - type: mrr_at_1 value: 29.110000000000003 - type: mrr_at_10 value: 37.979 - type: mrr_at_100 value: 38.933 - type: mrr_at_1000 value: 38.988 - type: mrr_at_3 value: 35.731 - type: mrr_at_5 value: 36.963 - type: ndcg_at_1 value: 29.110000000000003 - type: ndcg_at_10 value: 38.635000000000005 - type: ndcg_at_100 value: 44.324999999999996 - type: ndcg_at_1000 value: 46.747 - type: ndcg_at_3 value: 34.37 - type: ndcg_at_5 value: 36.228 - type: precision_at_1 value: 29.110000000000003 - type: precision_at_10 value: 6.963 - type: precision_at_100 value: 1.146 - type: precision_at_1000 value: 0.152 - type: precision_at_3 value: 16.400000000000002 - type: precision_at_5 value: 11.552999999999999 - type: recall_at_1 value: 24.108 - type: recall_at_10 value: 49.597 - type: recall_at_100 value: 73.88900000000001 - type: recall_at_1000 value: 90.62400000000001 - type: recall_at_3 value: 37.662 - type: recall_at_5 value: 42.565 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.00791666666667 - type: map_at_10 value: 33.287749999999996 - type: map_at_100 value: 34.41141666666667 - type: map_at_1000 value: 34.52583333333333 - type: map_at_3 value: 30.734416666666668 - type: map_at_5 value: 32.137166666666666 - type: mrr_at_1 value: 29.305666666666664 - type: mrr_at_10 value: 37.22966666666666 - type: mrr_at_100 value: 38.066583333333334 - type: mrr_at_1000 value: 38.12616666666667 - type: mrr_at_3 value: 34.92275 - type: mrr_at_5 value: 36.23333333333334 - type: ndcg_at_1 value: 29.305666666666664 - type: ndcg_at_10 value: 38.25533333333333 - type: ndcg_at_100 value: 43.25266666666666 - type: ndcg_at_1000 value: 45.63583333333334 - type: ndcg_at_3 value: 33.777166666666666 - type: ndcg_at_5 value: 35.85 - type: precision_at_1 value: 29.305666666666664 - type: precision_at_10 value: 6.596416666666667 - type: precision_at_100 value: 1.0784166666666668 - type: precision_at_1000 value: 0.14666666666666664 - 
type: precision_at_3 value: 15.31075 - type: precision_at_5 value: 10.830916666666667 - type: recall_at_1 value: 25.00791666666667 - type: recall_at_10 value: 49.10933333333333 - type: recall_at_100 value: 71.09216666666667 - type: recall_at_1000 value: 87.77725000000001 - type: recall_at_3 value: 36.660916666666665 - type: recall_at_5 value: 41.94149999999999 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.521 - type: map_at_10 value: 30.043 - type: map_at_100 value: 30.936000000000003 - type: map_at_1000 value: 31.022 - type: map_at_3 value: 27.926000000000002 - type: map_at_5 value: 29.076999999999998 - type: mrr_at_1 value: 26.227 - type: mrr_at_10 value: 32.822 - type: mrr_at_100 value: 33.61 - type: mrr_at_1000 value: 33.672000000000004 - type: mrr_at_3 value: 30.776999999999997 - type: mrr_at_5 value: 31.866 - type: ndcg_at_1 value: 26.227 - type: ndcg_at_10 value: 34.041 - type: ndcg_at_100 value: 38.394 - type: ndcg_at_1000 value: 40.732 - type: ndcg_at_3 value: 30.037999999999997 - type: ndcg_at_5 value: 31.845000000000002 - type: precision_at_1 value: 26.227 - type: precision_at_10 value: 5.244999999999999 - type: precision_at_100 value: 0.808 - type: precision_at_1000 value: 0.107 - type: precision_at_3 value: 12.679000000000002 - type: precision_at_5 value: 8.773 - type: recall_at_1 value: 23.521 - type: recall_at_10 value: 43.633 - type: recall_at_100 value: 63.126000000000005 - type: recall_at_1000 value: 80.765 - type: recall_at_3 value: 32.614 - type: recall_at_5 value: 37.15 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.236 - type: map_at_10 value: 22.898 - type: map_at_100 value: 23.878 - type: map_at_1000 value: 24.009 - type: map_at_3 value: 20.87 - type: map_at_5 value: 22.025 - type: mrr_at_1 value: 19.339000000000002 - type: mrr_at_10 value: 26.382 - type: mrr_at_100 value: 27.245 - type: mrr_at_1000 value: 27.33 - type: mrr_at_3 value: 24.386 - type: mrr_at_5 value: 25.496000000000002 - type: ndcg_at_1 value: 19.339000000000002 - type: ndcg_at_10 value: 27.139999999999997 - type: ndcg_at_100 value: 31.944 - type: ndcg_at_1000 value: 35.077999999999996 - type: ndcg_at_3 value: 23.424 - type: ndcg_at_5 value: 25.188 - type: precision_at_1 value: 19.339000000000002 - type: precision_at_10 value: 4.8309999999999995 - type: precision_at_100 value: 0.845 - type: precision_at_1000 value: 0.128 - type: precision_at_3 value: 10.874 - type: precision_at_5 value: 7.825 - type: recall_at_1 value: 16.236 - type: recall_at_10 value: 36.513 - type: recall_at_100 value: 57.999 - type: recall_at_1000 value: 80.512 - type: recall_at_3 value: 26.179999999999996 - type: recall_at_5 value: 30.712 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.11 - type: map_at_10 value: 31.566 - type: map_at_100 value: 32.647 - type: map_at_1000 value: 32.753 - type: map_at_3 value: 29.24 - type: map_at_5 value: 30.564999999999998 - type: mrr_at_1 value: 28.265 - type: mrr_at_10 value: 35.504000000000005 - type: mrr_at_100 value: 36.436 - type: mrr_at_1000 value: 36.503 - type: mrr_at_3 value: 33.349000000000004 - type: mrr_at_5 value: 34.622 - type: ndcg_at_1 value: 28.265 - type: ndcg_at_10 value: 36.192 - type: ndcg_at_100 value: 
41.388000000000005 - type: ndcg_at_1000 value: 43.948 - type: ndcg_at_3 value: 31.959 - type: ndcg_at_5 value: 33.998 - type: precision_at_1 value: 28.265 - type: precision_at_10 value: 5.989 - type: precision_at_100 value: 0.9650000000000001 - type: precision_at_1000 value: 0.13 - type: precision_at_3 value: 14.335 - type: precision_at_5 value: 10.112 - type: recall_at_1 value: 24.11 - type: recall_at_10 value: 46.418 - type: recall_at_100 value: 69.314 - type: recall_at_1000 value: 87.397 - type: recall_at_3 value: 34.724 - type: recall_at_5 value: 39.925 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 22.091 - type: map_at_10 value: 29.948999999999998 - type: map_at_100 value: 31.502000000000002 - type: map_at_1000 value: 31.713 - type: map_at_3 value: 27.464 - type: map_at_5 value: 28.968 - type: mrr_at_1 value: 26.482 - type: mrr_at_10 value: 34.009 - type: mrr_at_100 value: 35.081 - type: mrr_at_1000 value: 35.138000000000005 - type: mrr_at_3 value: 31.785000000000004 - type: mrr_at_5 value: 33.178999999999995 - type: ndcg_at_1 value: 26.482 - type: ndcg_at_10 value: 35.008 - type: ndcg_at_100 value: 41.272999999999996 - type: ndcg_at_1000 value: 43.972 - type: ndcg_at_3 value: 30.804 - type: ndcg_at_5 value: 33.046 - type: precision_at_1 value: 26.482 - type: precision_at_10 value: 6.462 - type: precision_at_100 value: 1.431 - type: precision_at_1000 value: 0.22899999999999998 - type: precision_at_3 value: 14.360999999999999 - type: precision_at_5 value: 10.474 - type: recall_at_1 value: 22.091 - type: recall_at_10 value: 45.125 - type: recall_at_100 value: 72.313 - type: recall_at_1000 value: 89.503 - type: recall_at_3 value: 33.158 - type: recall_at_5 value: 39.086999999999996 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 19.883 - type: map_at_10 value: 26.951000000000004 - type: map_at_100 value: 27.927999999999997 - type: map_at_1000 value: 28.022000000000002 - type: map_at_3 value: 24.616 - type: map_at_5 value: 25.917 - type: mrr_at_1 value: 21.996 - type: mrr_at_10 value: 29.221000000000004 - type: mrr_at_100 value: 30.024 - type: mrr_at_1000 value: 30.095 - type: mrr_at_3 value: 26.833000000000002 - type: mrr_at_5 value: 28.155 - type: ndcg_at_1 value: 21.996 - type: ndcg_at_10 value: 31.421 - type: ndcg_at_100 value: 36.237 - type: ndcg_at_1000 value: 38.744 - type: ndcg_at_3 value: 26.671 - type: ndcg_at_5 value: 28.907 - type: precision_at_1 value: 21.996 - type: precision_at_10 value: 5.009 - type: precision_at_100 value: 0.799 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 11.275 - type: precision_at_5 value: 8.059 - type: recall_at_1 value: 19.883 - type: recall_at_10 value: 43.132999999999996 - type: recall_at_100 value: 65.654 - type: recall_at_1000 value: 84.492 - type: recall_at_3 value: 30.209000000000003 - type: recall_at_5 value: 35.616 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 17.756 - type: map_at_10 value: 30.378 - type: map_at_100 value: 32.537 - type: map_at_1000 value: 32.717 - type: map_at_3 value: 25.599 - type: map_at_5 value: 28.372999999999998 - type: mrr_at_1 value: 41.303 - type: mrr_at_10 value: 53.483999999999995 - type: mrr_at_100 value: 54.106 - type: 
mrr_at_1000 value: 54.127 - type: mrr_at_3 value: 50.315 - type: mrr_at_5 value: 52.396 - type: ndcg_at_1 value: 41.303 - type: ndcg_at_10 value: 40.503 - type: ndcg_at_100 value: 47.821000000000005 - type: ndcg_at_1000 value: 50.788 - type: ndcg_at_3 value: 34.364 - type: ndcg_at_5 value: 36.818 - type: precision_at_1 value: 41.303 - type: precision_at_10 value: 12.463000000000001 - type: precision_at_100 value: 2.037 - type: precision_at_1000 value: 0.26 - type: precision_at_3 value: 25.798 - type: precision_at_5 value: 19.896 - type: recall_at_1 value: 17.756 - type: recall_at_10 value: 46.102 - type: recall_at_100 value: 70.819 - type: recall_at_1000 value: 87.21799999999999 - type: recall_at_3 value: 30.646 - type: recall_at_5 value: 38.022 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 9.033 - type: map_at_10 value: 20.584 - type: map_at_100 value: 29.518 - type: map_at_1000 value: 31.186000000000003 - type: map_at_3 value: 14.468 - type: map_at_5 value: 17.177 - type: mrr_at_1 value: 69.75 - type: mrr_at_10 value: 77.025 - type: mrr_at_100 value: 77.36699999999999 - type: mrr_at_1000 value: 77.373 - type: mrr_at_3 value: 75.583 - type: mrr_at_5 value: 76.396 - type: ndcg_at_1 value: 58.5 - type: ndcg_at_10 value: 45.033 - type: ndcg_at_100 value: 49.071 - type: ndcg_at_1000 value: 56.056 - type: ndcg_at_3 value: 49.936 - type: ndcg_at_5 value: 47.471999999999994 - type: precision_at_1 value: 69.75 - type: precision_at_10 value: 35.775 - type: precision_at_100 value: 11.594999999999999 - type: precision_at_1000 value: 2.062 - type: precision_at_3 value: 52.5 - type: precision_at_5 value: 45.300000000000004 - type: recall_at_1 value: 9.033 - type: recall_at_10 value: 26.596999999999998 - type: recall_at_100 value: 54.607000000000006 - type: recall_at_1000 value: 76.961 - type: recall_at_3 value: 15.754999999999999 - type: recall_at_5 value: 20.033 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 48.345000000000006 - type: f1 value: 43.4514918068706 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 71.29100000000001 - type: map_at_10 value: 81.059 - type: map_at_100 value: 81.341 - type: map_at_1000 value: 81.355 - type: map_at_3 value: 79.74799999999999 - type: map_at_5 value: 80.612 - type: mrr_at_1 value: 76.40299999999999 - type: mrr_at_10 value: 84.615 - type: mrr_at_100 value: 84.745 - type: mrr_at_1000 value: 84.748 - type: mrr_at_3 value: 83.776 - type: mrr_at_5 value: 84.343 - type: ndcg_at_1 value: 76.40299999999999 - type: ndcg_at_10 value: 84.981 - type: ndcg_at_100 value: 86.00999999999999 - type: ndcg_at_1000 value: 86.252 - type: ndcg_at_3 value: 82.97 - type: ndcg_at_5 value: 84.152 - type: precision_at_1 value: 76.40299999999999 - type: precision_at_10 value: 10.446 - type: precision_at_100 value: 1.1199999999999999 - type: precision_at_1000 value: 0.116 - type: precision_at_3 value: 32.147999999999996 - type: precision_at_5 value: 20.135 - type: recall_at_1 value: 71.29100000000001 - type: recall_at_10 value: 93.232 - type: recall_at_100 value: 97.363 - type: recall_at_1000 value: 98.905 - type: recall_at_3 value: 87.893 - type: recall_at_5 value: 90.804 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: 
default split: test revision: None metrics: - type: map_at_1 value: 18.667 - type: map_at_10 value: 30.853 - type: map_at_100 value: 32.494 - type: map_at_1000 value: 32.677 - type: map_at_3 value: 26.91 - type: map_at_5 value: 29.099000000000004 - type: mrr_at_1 value: 37.191 - type: mrr_at_10 value: 46.171 - type: mrr_at_100 value: 47.056 - type: mrr_at_1000 value: 47.099000000000004 - type: mrr_at_3 value: 44.059 - type: mrr_at_5 value: 45.147 - type: ndcg_at_1 value: 37.191 - type: ndcg_at_10 value: 38.437 - type: ndcg_at_100 value: 44.62 - type: ndcg_at_1000 value: 47.795 - type: ndcg_at_3 value: 35.003 - type: ndcg_at_5 value: 36.006 - type: precision_at_1 value: 37.191 - type: precision_at_10 value: 10.586 - type: precision_at_100 value: 1.688 - type: precision_at_1000 value: 0.22699999999999998 - type: precision_at_3 value: 23.302 - type: precision_at_5 value: 17.006 - type: recall_at_1 value: 18.667 - type: recall_at_10 value: 45.367000000000004 - type: recall_at_100 value: 68.207 - type: recall_at_1000 value: 87.072 - type: recall_at_3 value: 32.129000000000005 - type: recall_at_5 value: 37.719 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 39.494 - type: map_at_10 value: 66.223 - type: map_at_100 value: 67.062 - type: map_at_1000 value: 67.11500000000001 - type: map_at_3 value: 62.867 - type: map_at_5 value: 64.994 - type: mrr_at_1 value: 78.987 - type: mrr_at_10 value: 84.585 - type: mrr_at_100 value: 84.773 - type: mrr_at_1000 value: 84.77900000000001 - type: mrr_at_3 value: 83.592 - type: mrr_at_5 value: 84.235 - type: ndcg_at_1 value: 78.987 - type: ndcg_at_10 value: 73.64 - type: ndcg_at_100 value: 76.519 - type: ndcg_at_1000 value: 77.51 - type: ndcg_at_3 value: 68.893 - type: ndcg_at_5 value: 71.585 - type: precision_at_1 value: 78.987 - type: precision_at_10 value: 15.529000000000002 - type: precision_at_100 value: 1.7770000000000001 - type: precision_at_1000 value: 0.191 - type: precision_at_3 value: 44.808 - type: precision_at_5 value: 29.006999999999998 - type: recall_at_1 value: 39.494 - type: recall_at_10 value: 77.643 - type: recall_at_100 value: 88.825 - type: recall_at_1000 value: 95.321 - type: recall_at_3 value: 67.211 - type: recall_at_5 value: 72.519 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 85.55959999999999 - type: ap value: 80.7246500384617 - type: f1 value: 85.52336485065454 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 23.631 - type: map_at_10 value: 36.264 - type: map_at_100 value: 37.428 - type: map_at_1000 value: 37.472 - type: map_at_3 value: 32.537 - type: map_at_5 value: 34.746 - type: mrr_at_1 value: 24.312 - type: mrr_at_10 value: 36.858000000000004 - type: mrr_at_100 value: 37.966 - type: mrr_at_1000 value: 38.004 - type: mrr_at_3 value: 33.188 - type: mrr_at_5 value: 35.367 - type: ndcg_at_1 value: 24.312 - type: ndcg_at_10 value: 43.126999999999995 - type: ndcg_at_100 value: 48.642 - type: ndcg_at_1000 value: 49.741 - type: ndcg_at_3 value: 35.589 - type: ndcg_at_5 value: 39.515 - type: precision_at_1 value: 24.312 - type: precision_at_10 value: 6.699 - type: precision_at_100 value: 0.9450000000000001 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 15.153 - type: precision_at_5 
value: 11.065999999999999 - type: recall_at_1 value: 23.631 - type: recall_at_10 value: 64.145 - type: recall_at_100 value: 89.41 - type: recall_at_1000 value: 97.83500000000001 - type: recall_at_3 value: 43.769000000000005 - type: recall_at_5 value: 53.169 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.4108527131783 - type: f1 value: 93.1415880261038 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 77.24806201550388 - type: f1 value: 60.531916308197175 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 73.71553463349024 - type: f1 value: 71.70753174900791 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 77.79757901815736 - type: f1 value: 77.83719850433258 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.74193296622113 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.64257594108566 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 30.811018518883625 - type: mrr value: 31.910376577445003 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.409 - type: map_at_10 value: 13.093 - type: map_at_100 value: 16.256999999999998 - type: map_at_1000 value: 17.617 - type: map_at_3 value: 9.555 - type: map_at_5 value: 11.428 - type: mrr_at_1 value: 45.201 - type: mrr_at_10 value: 54.179 - type: mrr_at_100 value: 54.812000000000005 - type: mrr_at_1000 value: 54.840999999999994 - type: mrr_at_3 value: 51.909000000000006 - type: mrr_at_5 value: 53.519000000000005 - type: ndcg_at_1 value: 43.189 - type: ndcg_at_10 value: 35.028 - type: ndcg_at_100 value: 31.226 - type: ndcg_at_1000 value: 39.678000000000004 - type: ndcg_at_3 value: 40.596 - type: ndcg_at_5 value: 38.75 - type: precision_at_1 value: 44.582 - type: precision_at_10 value: 25.974999999999998 - type: precision_at_100 value: 7.793 - type: precision_at_1000 value: 2.036 - type: precision_at_3 value: 38.493 - type: precision_at_5 value: 33.994 - type: recall_at_1 value: 5.409 - type: recall_at_10 value: 16.875999999999998 - type: recall_at_100 value: 30.316 - type: recall_at_1000 value: 60.891 - type: recall_at_3 value: 10.688 - type: recall_at_5 value: 13.832 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 36.375 - type: map_at_10 value: 51.991 - type: map_at_100 value: 52.91400000000001 - type: map_at_1000 value: 52.93600000000001 - type: 
map_at_3 value: 48.014 - type: map_at_5 value: 50.381 - type: mrr_at_1 value: 40.759 - type: mrr_at_10 value: 54.617000000000004 - type: mrr_at_100 value: 55.301 - type: mrr_at_1000 value: 55.315000000000005 - type: mrr_at_3 value: 51.516 - type: mrr_at_5 value: 53.435 - type: ndcg_at_1 value: 40.759 - type: ndcg_at_10 value: 59.384 - type: ndcg_at_100 value: 63.157 - type: ndcg_at_1000 value: 63.654999999999994 - type: ndcg_at_3 value: 52.114000000000004 - type: ndcg_at_5 value: 55.986000000000004 - type: precision_at_1 value: 40.759 - type: precision_at_10 value: 9.411999999999999 - type: precision_at_100 value: 1.153 - type: precision_at_1000 value: 0.12 - type: precision_at_3 value: 23.329 - type: precision_at_5 value: 16.256999999999998 - type: recall_at_1 value: 36.375 - type: recall_at_10 value: 79.053 - type: recall_at_100 value: 95.167 - type: recall_at_1000 value: 98.82 - type: recall_at_3 value: 60.475 - type: recall_at_5 value: 69.327 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 70.256 - type: map_at_10 value: 83.8 - type: map_at_100 value: 84.425 - type: map_at_1000 value: 84.444 - type: map_at_3 value: 80.906 - type: map_at_5 value: 82.717 - type: mrr_at_1 value: 80.97999999999999 - type: mrr_at_10 value: 87.161 - type: mrr_at_100 value: 87.262 - type: mrr_at_1000 value: 87.263 - type: mrr_at_3 value: 86.175 - type: mrr_at_5 value: 86.848 - type: ndcg_at_1 value: 80.97999999999999 - type: ndcg_at_10 value: 87.697 - type: ndcg_at_100 value: 88.959 - type: ndcg_at_1000 value: 89.09899999999999 - type: ndcg_at_3 value: 84.83800000000001 - type: ndcg_at_5 value: 86.401 - type: precision_at_1 value: 80.97999999999999 - type: precision_at_10 value: 13.261000000000001 - type: precision_at_100 value: 1.5150000000000001 - type: precision_at_1000 value: 0.156 - type: precision_at_3 value: 37.01 - type: precision_at_5 value: 24.298000000000002 - type: recall_at_1 value: 70.256 - type: recall_at_10 value: 94.935 - type: recall_at_100 value: 99.274 - type: recall_at_1000 value: 99.928 - type: recall_at_3 value: 86.602 - type: recall_at_5 value: 91.133 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 56.322692497613104 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 61.895813503775074 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.338 - type: map_at_10 value: 10.767 - type: map_at_100 value: 12.537999999999998 - type: map_at_1000 value: 12.803999999999998 - type: map_at_3 value: 7.788 - type: map_at_5 value: 9.302000000000001 - type: mrr_at_1 value: 21.4 - type: mrr_at_10 value: 31.637999999999998 - type: mrr_at_100 value: 32.688 - type: mrr_at_1000 value: 32.756 - type: mrr_at_3 value: 28.433000000000003 - type: mrr_at_5 value: 30.178 - type: ndcg_at_1 value: 21.4 - type: ndcg_at_10 value: 18.293 - type: ndcg_at_100 value: 25.274 - type: ndcg_at_1000 value: 30.284 - type: ndcg_at_3 value: 17.391000000000002 - type: ndcg_at_5 value: 15.146999999999998 - type: precision_at_1 value: 21.4 - type: precision_at_10 value: 9.48 - type: precision_at_100 value: 1.949 - type: 
precision_at_1000 value: 0.316 - type: precision_at_3 value: 16.167 - type: precision_at_5 value: 13.22 - type: recall_at_1 value: 4.338 - type: recall_at_10 value: 19.213 - type: recall_at_100 value: 39.562999999999995 - type: recall_at_1000 value: 64.08 - type: recall_at_3 value: 9.828000000000001 - type: recall_at_5 value: 13.383000000000001 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 82.42568163642142 - type: cos_sim_spearman value: 78.5797159641342 - type: euclidean_pearson value: 80.22151260811604 - type: euclidean_spearman value: 78.5797151953878 - type: manhattan_pearson value: 80.21224215864788 - type: manhattan_spearman value: 78.55641478381344 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.44020710812569 - type: cos_sim_spearman value: 78.91631735081286 - type: euclidean_pearson value: 81.64188964182102 - type: euclidean_spearman value: 78.91633286881678 - type: manhattan_pearson value: 81.69294748512496 - type: manhattan_spearman value: 78.93438558002656 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 84.27165426412311 - type: cos_sim_spearman value: 85.40429140249618 - type: euclidean_pearson value: 84.7509580724893 - type: euclidean_spearman value: 85.40429140249618 - type: manhattan_pearson value: 84.76488289321308 - type: manhattan_spearman value: 85.4256793698708 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 83.138851760732 - type: cos_sim_spearman value: 81.64101363896586 - type: euclidean_pearson value: 82.55165038934942 - type: euclidean_spearman value: 81.64105257080502 - type: manhattan_pearson value: 82.52802949883335 - type: manhattan_spearman value: 81.61255430718158 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.0654695484029 - type: cos_sim_spearman value: 87.20408521902229 - type: euclidean_pearson value: 86.8110651362115 - type: euclidean_spearman value: 87.20408521902229 - type: manhattan_pearson value: 86.77984656478691 - type: manhattan_spearman value: 87.1719947099227 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.77823915496512 - type: cos_sim_spearman value: 85.43566325729779 - type: euclidean_pearson value: 84.5396956658821 - type: euclidean_spearman value: 85.43566325729779 - type: manhattan_pearson value: 84.5665398848169 - type: manhattan_spearman value: 85.44375870303232 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 87.20030208471798 - type: cos_sim_spearman value: 87.20485505076539 - type: euclidean_pearson value: 88.10588324368722 - type: euclidean_spearman value: 87.20485505076539 - type: manhattan_pearson value: 87.92324770415183 - type: manhattan_spearman value: 
87.0571314561877 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 63.06093161604453 - type: cos_sim_spearman value: 64.2163140357722 - type: euclidean_pearson value: 65.27589680994006 - type: euclidean_spearman value: 64.2163140357722 - type: manhattan_pearson value: 65.45904383711101 - type: manhattan_spearman value: 64.55404716679305 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.32976164578706 - type: cos_sim_spearman value: 85.54302197678368 - type: euclidean_pearson value: 85.26307149193056 - type: euclidean_spearman value: 85.54302197678368 - type: manhattan_pearson value: 85.26647282029371 - type: manhattan_spearman value: 85.5316135265568 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 81.44675968318754 - type: mrr value: 94.92741826075158 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 56.34400000000001 - type: map_at_10 value: 65.927 - type: map_at_100 value: 66.431 - type: map_at_1000 value: 66.461 - type: map_at_3 value: 63.529 - type: map_at_5 value: 64.818 - type: mrr_at_1 value: 59.333000000000006 - type: mrr_at_10 value: 67.54599999999999 - type: mrr_at_100 value: 67.892 - type: mrr_at_1000 value: 67.917 - type: mrr_at_3 value: 65.778 - type: mrr_at_5 value: 66.794 - type: ndcg_at_1 value: 59.333000000000006 - type: ndcg_at_10 value: 70.5 - type: ndcg_at_100 value: 72.688 - type: ndcg_at_1000 value: 73.483 - type: ndcg_at_3 value: 66.338 - type: ndcg_at_5 value: 68.265 - type: precision_at_1 value: 59.333000000000006 - type: precision_at_10 value: 9.3 - type: precision_at_100 value: 1.053 - type: precision_at_1000 value: 0.11199999999999999 - type: precision_at_3 value: 25.889 - type: precision_at_5 value: 16.866999999999997 - type: recall_at_1 value: 56.34400000000001 - type: recall_at_10 value: 82.789 - type: recall_at_100 value: 92.767 - type: recall_at_1000 value: 99 - type: recall_at_3 value: 71.64399999999999 - type: recall_at_5 value: 76.322 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.75742574257426 - type: cos_sim_ap value: 93.52081548447406 - type: cos_sim_f1 value: 87.33850129198966 - type: cos_sim_precision value: 90.37433155080214 - type: cos_sim_recall value: 84.5 - type: dot_accuracy value: 99.75742574257426 - type: dot_ap value: 93.52081548447406 - type: dot_f1 value: 87.33850129198966 - type: dot_precision value: 90.37433155080214 - type: dot_recall value: 84.5 - type: euclidean_accuracy value: 99.75742574257426 - type: euclidean_ap value: 93.52081548447406 - type: euclidean_f1 value: 87.33850129198966 - type: euclidean_precision value: 90.37433155080214 - type: euclidean_recall value: 84.5 - type: manhattan_accuracy value: 99.75841584158415 - type: manhattan_ap value: 93.4975678585854 - type: manhattan_f1 value: 87.26708074534162 - type: manhattan_precision value: 90.45064377682404 - type: manhattan_recall value: 84.3 - 
type: max_accuracy value: 99.75841584158415 - type: max_ap value: 93.52081548447406 - type: max_f1 value: 87.33850129198966 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 64.31437036686651 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 33.25569319007206 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 49.90474939720706 - type: mrr value: 50.568115503777264 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 29.866828641244712 - type: cos_sim_spearman value: 30.077555055873866 - type: dot_pearson value: 29.866832988572266 - type: dot_spearman value: 30.077555055873866 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.232 - type: map_at_10 value: 2.094 - type: map_at_100 value: 11.971 - type: map_at_1000 value: 28.158 - type: map_at_3 value: 0.688 - type: map_at_5 value: 1.114 - type: mrr_at_1 value: 88 - type: mrr_at_10 value: 93.4 - type: mrr_at_100 value: 93.4 - type: mrr_at_1000 value: 93.4 - type: mrr_at_3 value: 93 - type: mrr_at_5 value: 93.4 - type: ndcg_at_1 value: 84 - type: ndcg_at_10 value: 79.923 - type: ndcg_at_100 value: 61.17 - type: ndcg_at_1000 value: 53.03 - type: ndcg_at_3 value: 84.592 - type: ndcg_at_5 value: 82.821 - type: precision_at_1 value: 88 - type: precision_at_10 value: 85 - type: precision_at_100 value: 63.019999999999996 - type: precision_at_1000 value: 23.554 - type: precision_at_3 value: 89.333 - type: precision_at_5 value: 87.2 - type: recall_at_1 value: 0.232 - type: recall_at_10 value: 2.255 - type: recall_at_100 value: 14.823 - type: recall_at_1000 value: 49.456 - type: recall_at_3 value: 0.718 - type: recall_at_5 value: 1.175 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.547 - type: map_at_10 value: 11.375 - type: map_at_100 value: 18.194 - type: map_at_1000 value: 19.749 - type: map_at_3 value: 5.825 - type: map_at_5 value: 8.581 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 51.32 - type: mrr_at_100 value: 51.747 - type: mrr_at_1000 value: 51.747 - type: mrr_at_3 value: 47.278999999999996 - type: mrr_at_5 value: 48.605 - type: ndcg_at_1 value: 29.592000000000002 - type: ndcg_at_10 value: 28.151 - type: ndcg_at_100 value: 39.438 - type: ndcg_at_1000 value: 50.769 - type: ndcg_at_3 value: 30.758999999999997 - type: ndcg_at_5 value: 30.366 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 25.714 - type: precision_at_100 value: 8.041 - type: precision_at_1000 value: 1.555 - type: precision_at_3 value: 33.333 - type: precision_at_5 value: 31.837 - type: recall_at_1 value: 2.547 - type: recall_at_10 value: 18.19 - type: recall_at_100 value: 49.538 - type: recall_at_1000 value: 83.86 - type: recall_at_3 value: 7.329 - type: recall_at_5 value: 11.532 - task: type: 
Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 71.4952 - type: ap value: 14.793362635531409 - type: f1 value: 55.204635551516915 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 61.5365025466893 - type: f1 value: 61.81742556334845 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 49.05531070301185 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 86.51725576682364 - type: cos_sim_ap value: 75.2292304265163 - type: cos_sim_f1 value: 69.54022988505749 - type: cos_sim_precision value: 63.65629110039457 - type: cos_sim_recall value: 76.62269129287598 - type: dot_accuracy value: 86.51725576682364 - type: dot_ap value: 75.22922386081054 - type: dot_f1 value: 69.54022988505749 - type: dot_precision value: 63.65629110039457 - type: dot_recall value: 76.62269129287598 - type: euclidean_accuracy value: 86.51725576682364 - type: euclidean_ap value: 75.22925730473472 - type: euclidean_f1 value: 69.54022988505749 - type: euclidean_precision value: 63.65629110039457 - type: euclidean_recall value: 76.62269129287598 - type: manhattan_accuracy value: 86.52321630804077 - type: manhattan_ap value: 75.20608115037336 - type: manhattan_f1 value: 69.60000000000001 - type: manhattan_precision value: 64.37219730941705 - type: manhattan_recall value: 75.75197889182058 - type: max_accuracy value: 86.52321630804077 - type: max_ap value: 75.22925730473472 - type: max_f1 value: 69.60000000000001 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 89.34877944657896 - type: cos_sim_ap value: 86.71257569277373 - type: cos_sim_f1 value: 79.10386355986088 - type: cos_sim_precision value: 76.91468470434214 - type: cos_sim_recall value: 81.4213119802895 - type: dot_accuracy value: 89.34877944657896 - type: dot_ap value: 86.71257133133368 - type: dot_f1 value: 79.10386355986088 - type: dot_precision value: 76.91468470434214 - type: dot_recall value: 81.4213119802895 - type: euclidean_accuracy value: 89.34877944657896 - type: euclidean_ap value: 86.71257651501476 - type: euclidean_f1 value: 79.10386355986088 - type: euclidean_precision value: 76.91468470434214 - type: euclidean_recall value: 81.4213119802895 - type: manhattan_accuracy value: 89.35848177901967 - type: manhattan_ap value: 86.69330615469126 - type: manhattan_f1 value: 79.13867741453949 - type: manhattan_precision value: 76.78881807647741 - type: manhattan_recall value: 81.63689559593472 - type: max_accuracy value: 89.35848177901967 - type: max_ap value: 86.71257651501476 - type: max_f1 value: 79.13867741453949 license: apache-2.0 language: - en --- # nomic-embed-text-v1: A Reproducible Long Context (8192) Text Embedder `nomic-embed-text-v1` is 8192 context length text 
encoder that surpasses OpenAI text-embedding-ada-002 and text-embedding-3-small performance on short and long context tasks. | Name | SeqLen | MTEB | LoCo | Jina Long Context | Open Weights | Open Training Code | Open Data | | :-------------------------------:| :----- | :-------- | :------: | :---------------: | :-----------: | :----------------: | :---------- | | nomic-embed-text-v1 | 8192 | **62.39** |**85.53** | 54.16 | ✅ | ✅ | ✅ | | jina-embeddings-v2-base-en | 8192 | 60.39 | 85.45 | 51.90 | ✅ | ❌ | ❌ | | text-embedding-3-small | 8191 | 62.26 | 82.40 | **58.20** | ❌ | ❌ | ❌ | | text-embedding-ada-002 | 8191 | 60.99 | 52.7 | 55.25 | ❌ | ❌ | ❌ | ## Hosted Inference API The easiest way to get started with Nomic Embed is through the Nomic Embedding API. Generating embeddings with the `nomic` Python client is as easy as ```python from nomic import embed output = embed.text( texts=['Nomic Embedding API', '#keepAIOpen'], model='nomic-embed-text-v1', task_type='search_document' ) print(output) ``` For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-text). ## Data Visualization Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data! [![image/webp](https://cdn-uploads.huggingface.co/production/uploads/607997c83a565c15675055b3/pjhJhuNyRfPagRd_c_iUz.webp)](https://atlas.nomic.ai/map/nomic-text-embed-v1-5m-sample) ## Training Details We train our embedder using a multi-stage training pipeline. Starting from a long-context [BERT model](https://huggingface.co/nomic-ai/nomic-bert-2048), the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora, title-body pairs from Amazon reviews, and summarizations from news articles. In the second finetuning stage, higher quality labeled datasets such as search queries and answers from web searches are leveraged. Data curation and hard-example mining are crucial in this stage. For more details, see the Nomic Embed [Technical Report](https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf) and corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-text-v1). The data used to train the models is released in its entirety. For more details, see the `contrastors` [repository](https://github.com/nomic-ai/contrastors). ## Usage Note `nomic-embed-text` *requires* prefixes! We support the prefixes `[search_query, search_document, classification, clustering]`. For retrieval applications, you should prepend `search_document` for all your documents and `search_query` for your queries. For example, suppose you are building a RAG application on top of Wikipedia. You would embed all Wikipedia articles with the prefix `search_document` and any questions you ask with `search_query`.
For example: ```python queries = ["search_query: who is the first president of the united states?", "search_query: when was babe ruth born?"] documents = ["search_document: <article about US Presidents>", "search_document: <article about Babe Ruth>"] ``` ### Sentence Transformers ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True) sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'] embeddings = model.encode(sentences) print(embeddings) ``` ### Transformers ```python import torch import torch.nn.functional as F from transformers import AutoTokenizer, AutoModel def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'] tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True) model.eval() encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') with torch.no_grad(): model_output = model(**encoded_input) embeddings = mean_pooling(model_output, encoded_input['attention_mask']) embeddings = F.normalize(embeddings, p=2, dim=1) print(embeddings) ``` The model natively supports scaling of the sequence length past 2048 tokens. To do so, ```diff - tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') + tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192) - model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True) + model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1', trust_remote_code=True, rotary_scaling_factor=2) ``` ### Transformers.js ```js import { pipeline } from '@xenova/transformers'; // Create a feature extraction pipeline const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1', { quantized: false, // Comment out this line to use the quantized version }); // Compute sentence embeddings const texts = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']; const embeddings = await extractor(texts, { pooling: 'mean', normalize: true }); console.log(embeddings); ``` # Join the Nomic Community - Nomic: [https://nomic.ai](https://nomic.ai) - Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8) - Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai) # Citation If you find the model, dataset, or training code useful, please cite our work ```bibtex @misc{nussbaum2024nomic, title={Nomic Embed: Training a Reproducible Long Context Text Embedder}, author={Zach Nussbaum and John X. Morris and Brandon Duderstadt and Andriy Mulyar}, year={2024}, eprint={2402.01613}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
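To make the retrieval prefix convention above concrete, here is a minimal scoring sketch built on the Sentence Transformers setup shown earlier; the query and passages are placeholder examples, and cosine similarity is just one reasonable way to rank them:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

# Queries and documents carry their respective prefixes, as described above.
query = "search_query: who is the first president of the united states?"
passages = [
    "search_document: George Washington served as the first president of the United States.",
    "search_document: Babe Ruth was an American professional baseball player.",
]

query_emb = model.encode(query, normalize_embeddings=True)
passage_embs = model.encode(passages, normalize_embeddings=True)

# With normalized embeddings, cosine similarity reduces to a dot product.
scores = util.cos_sim(query_emb, passage_embs)
print(scores)  # the higher-scoring passage is the better match for the query
```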
mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF
mradermacher
"2024-06-26T20:34:06Z"
2,767
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jsfs11/L3-8b-SthenoLumiM-ModelStock", "endpoints_compatible", "region:us" ]
null
"2024-06-18T16:16:46Z"
--- base_model: jsfs11/L3-8b-SthenoLumiM-ModelStock language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/jsfs11/L3-8b-SthenoLumiM-ModelStock <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-8b-SthenoLumiM-ModelStock-GGUF/resolve/main/L3-8b-SthenoLumiM-ModelStock.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
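For a quick local test of one of the quants above, a `llama-cpp-python` sketch along these lines should work; the file name matches the Q4_K_M entry in the table, while the context size, GPU offload, sampling settings, and prompt are placeholder choices:

```python
from llama_cpp import Llama

# Load the Q4_K_M quant downloaded from this repo (adjust the path to where you saved it).
llm = Llama(
    model_path="L3-8b-SthenoLumiM-ModelStock.Q4_K_M.gguf",
    n_ctx=8192,       # context window; lower this if you run out of memory
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Write a one-sentence greeting.", max_tokens=64, temperature=0.8)
print(out["choices"][0]["text"])
```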
keremberke/yolov8s-valorant-detection
keremberke
"2023-02-22T13:02:34Z"
2,765
1
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/valorant-object-detection", "model-index", "region:us" ]
object-detection
"2023-01-28T09:17:46Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/valorant-object-detection model-index: - name: keremberke/yolov8s-valorant-detection results: - task: type: object-detection dataset: type: keremberke/valorant-object-detection name: valorant-object-detection split: validation metrics: - type: precision # since [email protected] is not available on hf.co/metrics value: 0.97138 # min: 0.0 - max: 1.0 name: [email protected](box) --- <div align="center"> <img width="640" alt="keremberke/yolov8s-valorant-detection" src="https://huggingface.co/keremberke/yolov8s-valorant-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['dropped spike', 'enemy', 'planted spike', 'teammate'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8s-valorant-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
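If you want the detections as plain data rather than a rendered image, the usage snippet above can be extended along these lines (a sketch that reuses the same `model` and `results` objects and the standard Ultralytics `Boxes` fields):

```python
# Iterate over detections and map class indices to the label names listed above.
for box in results[0].boxes:
    class_id = int(box.cls)                # predicted class index
    confidence = float(box.conf)           # confidence score
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners in pixels
    print(f"{model.names[class_id]}: {confidence:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```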
facebook/mms-tts-qvs
facebook
"2023-09-01T10:52:04Z"
2,765
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-09-01T10:51:40Z"
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Quechua, San Martín Text-to-Speech This repository contains the **Quechua, San Martín (qvs)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) composed of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each language. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-qvs") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-qvs") text = "some example text in the Quechua, San Martín language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy()) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI.
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
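Because of the stochastic duration predictor described above, repeated runs yield slightly different waveforms. To make generation reproducible, fix the seed before the forward pass; a minimal sketch reusing the `model`, `tokenizer`, and `inputs` from the usage example (the seed value is arbitrary):

```python
import torch

torch.manual_seed(555)  # any fixed value; identical seeds give identical waveforms
with torch.no_grad():
    output = model(**inputs).waveform
```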
Frowning/L3-RPEXNSFW-V2-2x8B-Q8_0-GGUF
Frowning
"2024-06-21T14:10:15Z"
2,764
0
null
[ "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "bluuwhale/L3-SthenoMaidBlackroot-8B-V1", "tannedbum/L3-Nymeria-8B", "llama-cpp", "gguf-my-repo", "not-for-all-audiences", "base_model:Frowning/L3-RPEXNSFW-V2-2x8B", "license:apache-2.0", "region:us" ]
null
"2024-06-21T14:08:50Z"
--- base_model: Frowning/L3-RPEXNSFW-V2-2x8B license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - bluuwhale/L3-SthenoMaidBlackroot-8B-V1 - tannedbum/L3-Nymeria-8B - llama-cpp - gguf-my-repo - not-for-all-audiences ---
state-spaces/mamba-790m-hf
state-spaces
"2024-03-06T00:44:06Z"
2,763
3
transformers
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-06T00:07:54Z"
--- library_name: transformers tags: [] --- # Mamba <!-- Provide a quick summary of what the model is/does. --> This repository contains the `transformers` compatible `mamba-790m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo. # Usage You need to install `transformers` from `main` until `transformers` 4.39.0 is released. ```bash pip install git+https://github.com/huggingface/transformers@main ``` We also recommend you install both `causal-conv1d` and `mamba-ssm` using: ```bash pip install "causal-conv1d>=1.2.0" pip install mamba-ssm ``` If either of these is not installed, the "eager" implementation will be used. Otherwise, the more optimised CUDA kernels will be used. ## Generation You can use the classic `generate` API: ```python >>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-790m-hf") >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-790m-hf") >>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"] >>> out = model.generate(input_ids, max_new_tokens=10) >>> print(tokenizer.batch_decode(out)) ["Hey how are you doing?\n\nI'm good.\n\nHow are"] ``` ## PEFT finetuning example In order to finetune using the `peft` library, we recommend keeping the model in float32! ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-790m-hf") model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-790m-hf") dataset = load_dataset("Abirate/english_quotes", split="train") training_args = TrainingArguments( output_dir="./results", num_train_epochs=3, per_device_train_batch_size=4, logging_dir='./logs', logging_steps=10, learning_rate=2e-3 ) lora_config = LoraConfig( r=8, target_modules=["x_proj", "embeddings", "in_proj", "out_proj"], task_type="CAUSAL_LM", bias="none" ) trainer = SFTTrainer( model=model, tokenizer=tokenizer, args=training_args, peft_config=lora_config, train_dataset=dataset, dataset_text_field="quote", ) trainer.train() ```
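After training, the saved LoRA adapter can be loaded back on top of the base model for generation. A sketch under the assumption that the trainer above wrote its checkpoints into the `./results` output directory (the exact checkpoint name depends on your run):

```python
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-790m-hf")
base_model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-790m-hf")

# The path is an example; point it at the checkpoint directory produced by your run.
model = PeftModel.from_pretrained(base_model, "./results/checkpoint-500")

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"]
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```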
mradermacher/llama2-doctor-GGUF
mradermacher
"2024-06-02T03:29:09Z"
2,763
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:ashnaz/llama2-doctor", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-02T02:34:41Z"
--- base_model: ashnaz/llama2-doctor language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ashnaz/llama2-doctor <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/llama2-doctor-GGUF/resolve/main/llama2-doctor.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
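For a quick local test, here is one possible way to fetch a quant from this repo and run it with llama.cpp; the binary name, flags, prompt, and the choice of the `Q4_K_M` file are illustrative and may need adjusting for your llama.cpp build.

```bash
# Hypothetical quick start (adjust file name and flags to your setup)
huggingface-cli download mradermacher/llama2-doctor-GGUF llama2-doctor.Q4_K_M.gguf --local-dir .
./llama-cli -m llama2-doctor.Q4_K_M.gguf -p "What are common causes of a persistent cough?" -n 256
```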
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Dr.Samantha-8B-GGUF
mradermacher
"2024-06-05T08:45:45Z"
2,763
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "medical", "en", "dataset:cognitivecomputations/samantha-data", "dataset:ruslanmv/ai-medical-dataset", "base_model:sethuiyer/Dr.Samantha-8B", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-03T10:39:39Z"
--- base_model: sethuiyer/Dr.Samantha-8B datasets: - cognitivecomputations/samantha-data - ruslanmv/ai-medical-dataset language: - en library_name: transformers license: other quantized_by: mradermacher tags: - mergekit - merge - medical --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/sethuiyer/Dr.Samantha-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Dr.Samantha-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Dr.Samantha-8B-GGUF/resolve/main/Dr.Samantha-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
keremberke/yolov8s-pokemon-classification
keremberke
"2023-02-22T13:02:11Z"
2,762
0
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/pokemon-classification", "model-index", "region:us" ]
image-classification
"2023-01-28T04:48:41Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/pokemon-classification model-index: - name: keremberke/yolov8s-pokemon-classification results: - task: type: image-classification dataset: type: keremberke/pokemon-classification name: pokemon-classification split: validation metrics: - type: accuracy value: 0.02459 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 0.0806 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8s-pokemon-classification" src="https://huggingface.co/keremberke/yolov8s-pokemon-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['Abra', 'Aerodactyl', 'Alakazam', 'Alolan Sandslash', 'Arbok', 'Arcanine', 'Articuno', 'Beedrill', 'Bellsprout', 'Blastoise', 'Bulbasaur', 'Butterfree', 'Caterpie', 'Chansey', 'Charizard', 'Charmander', 'Charmeleon', 'Clefable', 'Clefairy', 'Cloyster', 'Cubone', 'Dewgong', 'Diglett', 'Ditto', 'Dodrio', 'Doduo', 'Dragonair', 'Dragonite', 'Dratini', 'Drowzee', 'Dugtrio', 'Eevee', 'Ekans', 'Electabuzz', 'Electrode', 'Exeggcute', 'Exeggutor', 'Farfetchd', 'Fearow', 'Flareon', 'Gastly', 'Gengar', 'Geodude', 'Gloom', 'Golbat', 'Goldeen', 'Golduck', 'Golem', 'Graveler', 'Grimer', 'Growlithe', 'Gyarados', 'Haunter', 'Hitmonchan', 'Hitmonlee', 'Horsea', 'Hypno', 'Ivysaur', 'Jigglypuff', 'Jolteon', 'Jynx', 'Kabuto', 'Kabutops', 'Kadabra', 'Kakuna', 'Kangaskhan', 'Kingler', 'Koffing', 'Krabby', 'Lapras', 'Lickitung', 'Machamp', 'Machoke', 'Machop', 'Magikarp', 'Magmar', 'Magnemite', 'Magneton', 'Mankey', 'Marowak', 'Meowth', 'Metapod', 'Mew', 'Mewtwo', 'Moltres', 'MrMime', 'Muk', 'Nidoking', 'Nidoqueen', 'Nidorina', 'Nidorino', 'Ninetales', 'Oddish', 'Omanyte', 'Omastar', 'Onix', 'Paras', 'Parasect', 'Persian', 'Pidgeot', 'Pidgeotto', 'Pidgey', 'Pikachu', 'Pinsir', 'Poliwag', 'Poliwhirl', 'Poliwrath', 'Wigglytuff', 'Zapdos', 'Zubat'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8s-pokemon-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
artificialguybr/StudioGhibli.Redmond-V2
artificialguybr
"2023-11-11T15:56:12Z"
2,762
28
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "region:us" ]
text-to-image
"2023-11-11T15:55:36Z"
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: Portrait of a boy, stunning, cute , StdGBRedmAF, Studio Ghibli, parameters: negative_prompt: bad art, ugly, text, watermark, duplicated, deformed output: url: images/00224-3646141139.png - text: 'A cute yellow fish in the water, , StdGBRedmAF, Studio Ghibli, ' parameters: negative_prompt: bad art, ugly, text, watermark, duplicated, deformed output: url: images/00236-527854134.png - text: A ghost with blood in face, creepy, horror , StdGBRedmAF, Studio Ghibli, parameters: negative_prompt: bad art, ugly, text, watermark, duplicated, deformed output: url: images/00245-2408060209.png - text: A boy wearing red sunglasses, , StdGBRedmAF, Studio Ghibli, parameters: negative_prompt: bad art, ugly, text, watermark, duplicated, deformed output: url: images/00265-3245192291.png - text: A marshmallow monster, , StdGBRedmAF, Studio Ghibli, parameters: negative_prompt: bad art, ugly, text, watermark, duplicated, deformed output: url: images/00241-2712855754.png base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Studio Ghibli, StdGBRedmAF --- # Studio Ghibli V2 <Gallery /> ## Model description StudioGhibli.Redmond is here! Introducing StudioGhibli.Redmond, the ultimate LORA for creating Studio Ghibli images! I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! If you need GPU, then you need the great services from Redmond.AI. Test all my LoRAs here for free and unlimited. Thanks, HF, for the Inference API! It is based on SD XL 1.0 and fine-tuned on a large dataset. The LORA has a high capacity to generate Studio Ghibli style images! The tags for the model: StdGBRedmAF, Studio Ghibli I really hope you like the LORA and use it. If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi. Patreon: https://www.patreon.com/user?u=81570187 Ko-fi: https://ko-fi.com/artificialguybr BuyMeACoffee: https://www.buymeacoffee.com/jvkape Follow me on Twitter to be the first to know about new models: https://twitter.com/artificialguybr/ DISCLAIMER: This work is a non-commercial, fan-made creation, intended solely for entertainment purposes. All rights to characters belong to their respective owners. This work does not seek to diminish the value or reputation of the original content in any way. If you are a rights holder and have concerns about this content, please contact [email protected], and we will address your concerns promptly. ## Trigger words You should use `Studio Ghibli` to trigger the image generation. You should use `StdGBRedmAF` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/artificialguybr/StudioGhibli.Redmond-V2/tree/main) them in the Files & versions tab.
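A minimal 🤗 Diffusers sketch for using this LoRA is shown below. It is not an official example from the author: it assumes the LoRA weights in this repo can be attached with `load_lora_weights` (pass the exact safetensors filename if auto-detection fails), and it reuses the trigger words and negative prompt from the card.

```python
from diffusers import StableDiffusionXLPipeline
import torch

# Load the SDXL 1.0 base model this LoRA was trained on
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the Studio Ghibli LoRA from this repository (assumed to load by repo id)
pipe.load_lora_weights("artificialguybr/StudioGhibli.Redmond-V2")

# Trigger words from the card, plus the suggested negative prompt
prompt = "Portrait of a boy, stunning, cute, StdGBRedmAF, Studio Ghibli"
negative_prompt = "bad art, ugly, text, watermark, duplicated, deformed"

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("studio_ghibli_boy.png")
```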
mradermacher/prometheus-2-llama-3-8b-GGUF
mradermacher
"2024-06-17T13:14:33Z"
2,762
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "dataset:prometheus-eval/Preference-Collection", "dataset:prometheus-eval/Feedback-Collection", "base_model:chargoddard/prometheus-2-llama-3-8b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-17T10:57:42Z"
--- base_model: chargoddard/prometheus-2-llama-3-8b datasets: - prometheus-eval/Preference-Collection - prometheus-eval/Feedback-Collection language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/chargoddard/prometheus-2-llama-3-8b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/prometheus-2-llama-3-8b-GGUF/resolve/main/prometheus-2-llama-3-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Yhyu13/LMCocktail-10.7B-v1
Yhyu13
"2023-12-23T11:10:44Z"
2,761
16
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2311.13534", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-20T07:43:25Z"
--- license: llama2 --- # LM-cocktail 10.7B v1 This is a 50%-50% merge of the SOLAR model and meow. https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0 https://huggingface.co/rishiraj/meow which rank #1 and #2 among models <13B on the https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard as of 2023/12/20. # Alpaca Eval I am thrilled to announce that ChatGPT has ranked LMCocktail 10.7B as the second best model, next to GPT4, on AlpacaEval in my local community run. You can also check the leaderboard at [./alpaca_eval/chatgpt_fn_--SOLAR-10-7B-LMCocktail/](./alpaca_eval/chatgpt_fn_--SOLAR-10-7B-LMCocktail/) ``` win_rate standard_error n_total avg_length gpt4 73.79 1.54 805 1365 SOLAR-10.7B-LMCocktail(new) 73.45 1.56 804 1203 claude 70.37 1.60 805 1082 chatgpt 66.09 1.66 805 811 wizardlm-13b 65.16 1.67 805 985 vicuna-13b 64.10 1.69 805 1037 guanaco-65b 62.36 1.71 805 1249 oasst-rlhf-llama-33b 62.05 1.71 805 1079 alpaca-farm-ppo-human 60.25 1.72 805 803 falcon-40b-instruct 56.52 1.74 805 662 text_davinci_003 50.00 0.00 805 307 alpaca-7b 45.22 1.74 805 396 text_davinci_001 28.07 1.56 805 296 ``` # Code LM-cocktail is a novel technique for merging multiple models https://arxiv.org/abs/2311.13534 The code is backed by this repo https://github.com/FlagOpen/FlagEmbedding.git Merging scripts are available under the [./scripts](./scripts) folder # Result The SOLAR model is the first model <30B in my tests that can answer this question: ``` What will AI be like in the year 1010 A.D? ``` without hallucinating that 1010 A.D. is a future time (as other llama2 models do). Larger models, like Yi-34B, can answer this paradoxical question correctly as well, since they are big enough. ### SOLAR 10.7B output ![img](./assets/SOLAR.png) ### LMCocktail 10.7B output1 ![img](./assets/SOLAR_mixed.png) ### LMCocktail 10.7B output2 ![img](./assets/SOLAR_mixed2.png)
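For convenience, a minimal inference sketch with 🤗 Transformers follows; the "### User: / ### Assistant:" prompt template is borrowed from the SOLAR-Instruct parent model and is an assumption here, not a documented requirement of the merge.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Yhyu13/LMCocktail-10.7B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed SOLAR-Instruct style prompt format
prompt = "### User:\nWhat will AI be like in the year 1010 A.D?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```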
vikp/surya_det
vikp
"2024-02-13T19:46:43Z"
2,761
4
transformers
[ "transformers", "safetensors", "segformer", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2024-02-10T23:38:46Z"
--- license: cc-by-nc-sa-4.0 --- Line detection model for [surya](https://github.com/VikParuchuri/surya). See repo for details.
cerebras/Cerebras-GPT-1.3B
cerebras
"2023-11-22T21:47:29Z"
2,759
46
transformers
[ "transformers", "pytorch", "gpt2", "causal-lm", "text-generation", "en", "dataset:the_pile", "arxiv:2304.03208", "arxiv:2203.15556", "arxiv:2101.00027", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-03-20T20:43:21Z"
--- language: - en tags: - pytorch - causal-lm license: apache-2.0 datasets: - the_pile pipeline_tag: text-generation --- # Cerebras-GPT 1.3B Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)! ## Model Description The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and scalability of training LLMs on the Cerebras software and hardware stack. All Cerebras-GPT models are available on Hugging Face. The family includes 111M, 256M, 590M, 1.3B, 2.7B, 6.7B, and 13B models. All models in the Cerebras-GPT family have been trained in accordance with [Chinchilla scaling laws](https://arxiv.org/abs/2203.15556) (20 tokens per model parameter) which is compute-optimal. These models were trained on the [Andromeda](https://www.cerebras.net/andromeda/) AI supercomputer comprised of 16 CS-2 wafer scale systems. Cerebras' [weight streaming technology](https://www.cerebras.net/blog/linear-scaling-made-possible-with-weight-streaming) simplifies the training of LLMs by disaggregating compute from model storage. This allowed for efficient scaling of training across nodes using simple data parallelism. Cerebras systems for pre-training and fine tuning are available in the cloud via the [Cerebras Model Studio](https://www.cerebras.net/product-cloud/). Cerebras CS-2 compatible checkpoints are available in [Cerebras Model Zoo](https://github.com/Cerebras/modelzoo). ## Model Details * Developed by: [Cerebras Systems](https://www.cerebras.net/) * License: Apache 2.0 * Model type: Transformer-based Language Model * Architecture: GPT-3 style architecture * Data set: The Pile * Tokenizer: Byte Pair Encoding * Vocabulary Size: 50257 * Sequence Length: 2048 * Optimizer: AdamW, (β1, β2) = (0.9, 0.95), adam_eps = 1e−8 (1e−9 for larger models) * Positional Encoding: Learned * Language: English * Learn more: Dense Scaling Laws Paper for training procedure, config files, and details on how to use. **Contact**: To ask questions about Cerebras-GPT models, join the [Cerebras Discord](https://discord.gg/q6bZcMWJVu). 
This is the standard parameterization version of Cerebras-GPT with **1.3B** parameters Related models: [Cerebras-GPT Models](https://huggingface.co/models?sort=downloads&search=cerebras-gpt) <br><br> | Model | Parameters | Layers | d_model | Heads | d_head | d_ffn | LR | BS (seq) | BS (tokens) | |---------------|------------|--------|---------|-------|--------|--------|----------|----------|----------------| | Cerebras-GPT | 111M | 10 | 768 | 12 | 64 | 3072 | 6.0E-04 | 120 | 246K | | Cerebras-GPT | 256M | 14 | 1088 | 17 | 64 | 4352 | 6.0E-04 | 264 | 541K | | Cerebras-GPT | 590M | 18 | 1536 | 12 | 128 | 6144 | 2.0E-04 | 264 | 541K | | Cerebras-GPT | 1.3B | 24 | 2048 | 16 | 128 | 8192 | 2.0E-04 | 528 | 1.08M | | Cerebras-GPT | 2.7B | 32 | 2560 | 32 | 80 | 10240 | 2.0E-04 | 528 | 1.08M | | Cerebras-GPT | 6.7B | 32 | 4096 | 32 | 128 | 16384 | 1.2E-04 | 1040 | 2.13M | | Cerebras-GPT | 13B | 40 | 5120 | 40 | 128 | 20480 | 1.2E-04 | 720 &rarr; 1080 | 1.47M &rarr; 2.21M | <br><br> ## Quickstart This model can be easily loaded using the AutoModelForCausalLM functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("cerebras/Cerebras-GPT-1.3B") model = AutoModelForCausalLM.from_pretrained("cerebras/Cerebras-GPT-1.3B") text = "Generative AI is " ``` And can be used with Hugging Face Pipelines ```python from transformers import pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) generated_text = pipe(text, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0] print(generated_text['generated_text']) ``` or with `model.generate()` ```python inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, num_beams=5, max_new_tokens=50, early_stopping=True, no_repeat_ngram_size=2) text_output = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(text_output[0]) ``` <br><br> ## Training data Cerebras-GPT is trained using [the Pile](https://pile.eleuther.ai) dataset from [EleutherAI](https://www.eleuther.ai). See the [Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed breakdown of data sources and methodology. The Pile was cleaned using the ftfy library to normalize the text, then filtered using scripts provided by Eleuther. We tokenized the data using byte-pair encoding using the GPT-2 vocabulary. Our tokenized version of the Pile has 371B tokens. We include more details about the training dataset preprocessing in Appendix A.1 of our paper. Recent works find significant duplicate data present in the Pile. Eleuther’s Pythia applies a deduplication process to reduce replicated data, decreasing the Pile dataset size. Pythia was trained on both the standard dataset and deduplicated dataset to characterize the impact. Our models are trained on the standard Pile without deduplication, which may present an opportunity for further improvement with the deduplicated data set. <br><br> ## Training procedure We use the GPT-3 style model architecture. All of our layers use full attention as opposed to the GPT-3 style sparse banded attention. The model shapes were selected to either follow aspect ratio 80 or are the same shape as GPT-3 models. Learning rate warmed up for 375M tokens (1500 steps for 111M and 256M models) and 10x cosine decayed. No dropout was used and weight decay was set to 0.1. All models are trained with MSL of 2048. All models were trained to Chinchilla point: 20 tokens per model parameter. 
Number of steps was chosen based on optimal batch size (varied by model) and fixed sequence length (2048). See Training Table, below, for details. <br> Model Params | Sequence Length | Batch Size | Number of Steps | Tokens | Tokens per Parameter | Flops ------------ | -------------- | ---------- | --------------- | ------ | -------------------- | ----- 111M | 2048 | 120 | 9037 | 2.22E+09 | 20 | 2.6E+18 256M | 2048 | 264 | 9468 | 5.12E+09 | 20 | 1.3E+19 590M | 2048 | 264 | 21836 | 1.18E+10 | 20 | 6.1E+19 1.3B | 2048 | 528 | 24334 | 2.63E+10 | 20 | 2.8E+20 2.7B | 2048 | 528 | 49041 | 5.30E+10 | 20 | 1.1E+21 6.7B | 2048 | 1040 | 62522 | 1.33E+11 | 20 | 6.3E+21 13B | 2048 | 720 | 174335 | 2.57E+11 | 20 | 2.3E+22 <br><br> ## Evaluations We trained models from smallest to largest and fit a power law as we went along. The power law was helpful for extrapolating the validation loss of the next largest model we trained and provided confidence about whether the training run was going well. We performed upstream (pre-training) evaluations of text prediction cross-entropy using the Pile validation and test splits. We performed downstream evaluations of text generation accuracy on standardized tasks using the [Eleuther lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). Results are compared against many publicly available large language models in Section 3 of the paper. #### 0-shot Evaluation | Model | Params | Training FLOPs | PILE test xent | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | Downstream Average | | ------- | ----- | -------------- | -------------- | ---------- | ----- | ----------- | ------- | ----- | ----- | ---------- | ------------------ | | Cerebras-GPT | 111M | 2.6E+18 | 2.566 | 0.268 | 0.594 | 0.488 | 0.194 | 0.380 | 0.166 | 0.118 | 0.315 | | Cerebras-GPT | 256M | 1.3E+19 | 2.299 | 0.274 | 0.613 | 0.511 | 0.293 | 0.410 | 0.170 | 0.158 | 0.347 | | Cerebras-GPT | 590M | 6.1E+19 | 2.184 | 0.291 | 0.627 | 0.498 | 0.366 | 0.464 | 0.190 | 0.158 | 0.370 | | Cerebras-GPT | 1.3B | 2.8E+20 | 1.996 | 0.325 | 0.664 | 0.521 | 0.462 | 0.508 | 0.224 | 0.166 | 0.410 | | Cerebras-GPT | 2.7B | 1.1E+21 | 1.834 | 0.386 | 0.701 | 0.559 | 0.567 | 0.571 | 0.246 | 0.206 | 0.462 | | Cerebras-GPT | 6.7B | 6.3E+21 | 1.704 | 0.447 | 0.739 | 0.602 | 0.636 | 0.643 | 0.282 | 0.238 | 0.512 | | Cerebras-GPT | 13B | 2.3E+22 | 1.575 | 0.513 | 0.766 | 0.646 | 0.696 | 0.714 | 0.367 | 0.286 | 0.570 | #### 5-shot Evaluation | Model | Params | Hella-Swag | PIQA | Wino-Grande | Lambada | ARC-e | ARC-c | OpenBookQA | | -------- | ----- | ----------| ----- | ----------- | -------| ----- | ----- | ---------- | | Cerebras-GPT | 111M | 0.267 | 0.588 | 0.475 | 0.158 | 0.356 | 0.166 | 0.136 | | Cerebras-GPT | 256M | 0.278 | 0.606 | 0.522 | 0.225 | 0.422 | 0.183 | 0.164 | | Cerebras-GPT | 590M | 0.291 | 0.634 | 0.479 | 0.281 | 0.475 | 0.206 | 0.152 | | Cerebras-GPT | 1.3B | 0.326 | 0.668 | 0.536 | 0.395 | 0.529 | 0.241 | 0.174 | | Cerebras-GPT | 2.7B | 0.382 | 0.697 | 0.543 | 0.487 | 0.590 | 0.267 | 0.224 | | Cerebras-GPT | 6.7B | 0.444 | 0.736 | 0.590 | 0.591 | 0.667 | 0.314 | 0.270 | | Cerebras-GPT | 13B | 0.514 | 0.768 | 0.674 | 0.655 | 0.743 | 0.398 | 0.318 | <br><br> ## Uses and Limitations ### Intended Use The primary intended use is to further research into large language models. These models can be used as a foundation model for NLP, applications, ethics, and alignment research. 
Our primary intended users are researchers who are working to improve LLMs and practitioners seeking reference implementations, training setups, hyperparameters, or pre-trained models. We release these models with a fully permissive Apache license for the community to use freely. You may fine-tune and adapt Cerebras-GPT models for deployment via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the Cerebras-GPT model family in production downstream applications. Due to financial and compute budgets, Cerebras-GPT models were only trained and evaluated following the approaches described in the paper. ### Out of Scope Use Cerebras-GPT models are trained on the Pile, with English language only, and are not suitable for machine translation tasks. Cerebras-GPT models have not been tuned for human-facing dialog applications like chatbots and will not respond to prompts in a similar way to models that have received instruction tuning or reinforcement learning from human feedback (RLHF) like Flan-T5 or ChatGPT. Cerebras-GPT models can be tuned using those methods. ### Risk, Bias, Ethical Considerations * **Data**: The Pile dataset has been thoroughly analyzed from various ethical standpoints such as toxicity analysis, gender bias, pejorative content, racially sensitive content etc. Please refer to Pile dataset references. * **Human life**: The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life. * **Risks and harms**: There can be distributional bias in the Pile dataset that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information. * **Mitigations**: Only mitigations in standard Pile dataset pre-processing were employed when pre-training Cerebras-GPT. <br><br> ## Acknowledgements We are thankful to all Cerebras engineers, past and present, that made this work possible.
monadical-labs/minecraft-skin-generator-sdxl
monadical-labs
"2024-02-23T18:05:09Z"
2,759
4
diffusers
[ "diffusers", "safetensors", "minecraft", "skins", "gaming", "stable diffusion", "stable diffusion xl", "text-to-image", "en", "license:openrail", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-19T15:38:21Z"
--- license: openrail language: - en library_name: diffusers tags: - minecraft - skins - gaming - stable diffusion - stable diffusion xl pipeline_tag: text-to-image --- # Minecraft Skin Generator XL Monadical is pleased to announce the official release of the Minecraft Skin Generator XL model. We had previously released the [Minecraft Skin Generator](https://huggingface.co/monadical-labs/minecraft-skin-generator) model based upon Stable Diffusion 2. This new model offers significant improvements over the last generation of models. ### Key Features 1. **Upgrade to Stable Diffusion XL** - Our model is now based upon the Stable Diffusion XL model, which greatly improves the quality of generated skins when compared to previous models. 1. **Transparent Layer Support** - The new model now supports the transparency layer in the hair and helmet section of the skin. ### Examples * 'Kelly Kapoor from the TV show "The Office"' ![Kelly Kapoor](images/kelly.png) * 'Saul Goodman from the TV show "Better Call Saul"' ![Saul Goodman](images/saul.png) * 'Gustavo Fring from the TV show "Breaking Bad"' ![Gustavo Fring](images/fring.png) * 'Daryl Dixon from the TV show "The Walking Dead"' ![Daryl Dixon](images/daryl.png) * 'Zach Galifianakis as Alan in the movie "The Hangover"' ![Alan from The Hangover](images/hangover.png) ### Try It Out Yourself There are several options for trying out this new model: 1. Download the model and run it locally on your machine (a minimal usage sketch is shown below). Note that we recommend a GPU for this - while it is possible to run on a CPU, we do not currently support this method. **Note**: Output from the StableDiffusionXL pipeline should be constrained to 768x768 pixels, or the model will automatically generate a 1024x1024 output image, and fill in the extra space with unusable garbage. 1. Try our hosted version of the model on the [Minecraft Skin Generator website](https://www.skingenerator.io). ### Get Involved Have any feedback or suggestions? Join us on our [Minecraft Skin Generator Discord channel](https://discord.com/invite/yMzFzVUPDf) or send us an [email](mailto:[email protected]). Happy crafting! [The Monadical Minecraft Skin Generator Team](https://monadical.com/)
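The sketch below shows one way to run the model locally with 🤗 Diffusers, constrained to 768x768 as noted above. It is not an official Monadical example, and the post-processing needed to turn the generated image into a 64x64 Minecraft skin file is not shown.

```python
from diffusers import StableDiffusionXLPipeline
import torch

pipe = StableDiffusionXLPipeline.from_pretrained(
    "monadical-labs/minecraft-skin-generator-sdxl", torch_dtype=torch.float16
).to("cuda")

prompt = 'Kelly Kapoor from the TV show "The Office"'

# Constrain output to 768x768 to avoid the 1024x1024 fill described above
image = pipe(prompt, height=768, width=768).images[0]
image.save("kelly_skin.png")
```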
RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf
RichardErkhov
"2024-06-29T14:02:00Z"
2,759
0
null
[ "gguf", "region:us" ]
null
"2024-06-29T13:15:55Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-1T-OpenOrca - GGUF - Model creator: https://huggingface.co/jeff31415/ - Original model: https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-1T-OpenOrca.Q2_K.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-1T-OpenOrca.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-1T-OpenOrca.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-1T-OpenOrca.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-1T-OpenOrca.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-1T-OpenOrca.Q3_K.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-1T-OpenOrca.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-1T-OpenOrca.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama-1.1B-1T-OpenOrca.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-1T-OpenOrca.Q4_0.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-1T-OpenOrca.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-1T-OpenOrca.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-1T-OpenOrca.Q4_K.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q4_K.gguf) | Q4_K | 0.62GB | | [TinyLlama-1.1B-1T-OpenOrca.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama-1.1B-1T-OpenOrca.Q4_1.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama-1.1B-1T-OpenOrca.Q5_0.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q5_0.gguf) | Q5_0 | 0.71GB | | 
[TinyLlama-1.1B-1T-OpenOrca.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q5_K_S.gguf) | Q5_K_S | 0.71GB | | [TinyLlama-1.1B-1T-OpenOrca.Q5_K.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyLlama-1.1B-1T-OpenOrca.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama-1.1B-1T-OpenOrca.Q5_1.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama-1.1B-1T-OpenOrca.Q6_K.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyLlama-1.1B-1T-OpenOrca.Q8_0.gguf](https://huggingface.co/RichardErkhov/jeff31415_-_TinyLlama-1.1B-1T-OpenOrca-gguf/blob/main/TinyLlama-1.1B-1T-OpenOrca.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: apache-2.0 datasets: - Open-Orca/OpenOrca - bigcode/starcoderdata - cerebras/SlimPajama-627B language: - en --- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) #### Base model: PY007/TinyLlama-1.1B-intermediate-step-480k-1T #### Dataset: Fine tuned on OpenOrca GPT4 subset for 1 epoch,Using CHATML format #### Model License: Apache 2.0, following the TinyLlama base model. #### Quantisation: - GPTQ:https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-GPTQ - AWQ:https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-AWQ - GGUF:https://huggingface.co/TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF #### Hardware and training details: Hardware: 1*RTX A5000, ~16 hours to complete 1 epoch. GPU from autodl.com, cost around $3 for this finetuning. https://wandb.ai/jeff200402/TinyLlama-Orca?workspace= for more details.
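Since the original model was fine-tuned with the ChatML format, one plausible way to run a quant from the table above with llama.cpp is sketched below; the binary name, flags, chosen quant file, and prompt wording are illustrative and may need adjusting for your llama.cpp version.

```bash
# Example: run the Q4_K_M quant with a ChatML-formatted prompt (-e expands \n escapes)
./llama-cli -m TinyLlama-1.1B-1T-OpenOrca.Q4_K_M.gguf -n 256 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nExplain what a GGUF file is.<|im_end|>\n<|im_start|>assistant\n"
```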
keremberke/yolov8n-shoe-classification
keremberke
"2023-02-22T13:05:06Z"
2,758
0
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "image-classification", "pytorch", "awesome-yolov8-models", "dataset:keremberke/shoe-classification", "model-index", "region:us" ]
image-classification
"2023-01-29T11:51:08Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - image-classification - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.23 inference: false datasets: - keremberke/shoe-classification model-index: - name: keremberke/yolov8n-shoe-classification results: - task: type: image-classification dataset: type: keremberke/shoe-classification name: shoe-classification split: validation metrics: - type: accuracy value: 0.68675 # min: 0.0 - max: 1.0 name: top1 accuracy - type: accuracy value: 1 # min: 0.0 - max: 1.0 name: top5 accuracy --- <div align="center"> <img width="640" alt="keremberke/yolov8n-shoe-classification" src="https://huggingface.co/keremberke/yolov8n-shoe-classification/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['adidas', 'converse', 'nike'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.24 ultralytics==8.0.23 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, postprocess_classify_output # load model model = YOLO('keremberke/yolov8n-shoe-classification') # set model parameters model.overrides['conf'] = 0.25 # model confidence threshold # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].probs) # [0.1, 0.2, 0.3, 0.4] processed_result = postprocess_classify_output(model, result=results[0]) print(processed_result) # {"cat": 0.4, "dog": 0.6} ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
facebook/mms-tts-qvc
facebook
"2023-09-01T10:43:36Z"
2,758
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-09-01T10:43:08Z"
--- license: cc-by-nc-4.0 tags: - mms - vits pipeline_tag: text-to-speech --- # Massively Multilingual Speech (MMS): Quechua, Cajamarca Text-to-Speech This repository contains the **Quechua, Cajamarca (qvc)** language text-to-speech (TTS) model checkpoint. This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts). MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. ## Model Details VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior. A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text. The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform. For the MMS project, a separate VITS checkpoint is trained on each language. ## Usage MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library: ``` pip install --upgrade transformers accelerate ``` Then, run inference with the following code-snippet: ```python from transformers import VitsModel, AutoTokenizer import torch model = VitsModel.from_pretrained("facebook/mms-tts-qvc") tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-qvc") text = "some example text in the Quechua, Cajamarca language" inputs = tokenizer(text, return_tensors="pt") with torch.no_grad(): output = model(**inputs).waveform ``` The resulting waveform can be saved as a `.wav` file: ```python import scipy scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output) ``` Or displayed in a Jupyter Notebook / Google Colab: ```python from IPython.display import Audio Audio(output, rate=model.config.sampling_rate) ``` ## BibTex citation This model was developed by Vineel Pratap et al. from Meta AI. 
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
GraydientPlatformAPI/afrodite31-xl
GraydientPlatformAPI
"2024-04-21T06:57:37Z"
2,758
0
diffusers
[ "diffusers", "safetensors", "license:openrail", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-04-21T06:46:28Z"
--- license: openrail ---
Yntec/CinemaEros
Yntec
"2024-05-29T16:26:39Z"
2,757
8
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Filly", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-05T05:34:44Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Filly --- # cineMaErosPG_V4_UF Original page: https://civitai.com/models/74426?modelVersionId=100272
mradermacher/Llama-3_8b_Alpaca-GGUF
mradermacher
"2024-06-05T18:10:17Z"
2,757
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:catastropiyush/Llama-3_8b_Alpaca", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-05T17:42:10Z"
--- base_model: catastropiyush/Llama-3_8b_Alpaca language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/catastropiyush/Llama-3_8b_Alpaca <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3_8b_Alpaca-GGUF/resolve/main/Llama-3_8b_Alpaca.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers 
to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
SteelQuants/L3-Aethora-15B-V2-Q8_0-GGUF
SteelQuants
"2024-06-27T04:25:40Z"
2,757
2
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:TheSkullery/Aether-Lite-v1.8.1", "base_model:ZeusLabs/L3-Aethora-15B-V2", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
"2024-06-27T04:24:28Z"
--- base_model: ZeusLabs/L3-Aethora-15B-V2 datasets: - TheSkullery/Aether-Lite-v1.8.1 language: - en library_name: transformers license: cc-by-sa-4.0 tags: - llama-cpp - gguf-my-repo --- # Steelskull/L3-Aethora-15B-V2-Q8_0-GGUF This model was converted to GGUF format from [`ZeusLabs/L3-Aethora-15B-V2`](https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Steelskull/L3-Aethora-15B-V2-Q8_0-GGUF --hf-file l3-aethora-15b-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Steelskull/L3-Aethora-15B-V2-Q8_0-GGUF --hf-file l3-aethora-15b-v2-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Steelskull/L3-Aethora-15B-V2-Q8_0-GGUF --hf-file l3-aethora-15b-v2-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Steelskull/L3-Aethora-15B-V2-Q8_0-GGUF --hf-file l3-aethora-15b-v2-q8_0.gguf -c 2048 ```
keremberke/yolov8n-protective-equipment-detection
keremberke
"2023-02-22T13:03:41Z"
2,755
0
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "awesome-yolov8-models", "dataset:keremberke/protective-equipment-detection", "model-index", "region:us" ]
object-detection
"2023-01-29T09:47:40Z"
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch - awesome-yolov8-models library_name: ultralytics library_version: 8.0.21 inference: false datasets: - keremberke/protective-equipment-detection model-index: - name: keremberke/yolov8n-protective-equipment-detection results: - task: type: object-detection dataset: type: keremberke/protective-equipment-detection name: protective-equipment-detection split: validation metrics: - type: precision # since [email protected] is not available on hf.co/metrics value: 0.24713 # min: 0.0 - max: 1.0 name: [email protected](box) --- <div align="center"> <img width="640" alt="keremberke/yolov8n-protective-equipment-detection" src="https://huggingface.co/keremberke/yolov8n-protective-equipment-detection/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['glove', 'goggles', 'helmet', 'mask', 'no_glove', 'no_goggles', 'no_helmet', 'no_mask', 'no_shoes', 'shoes'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.23 ultralytics==8.0.21 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('keremberke/yolov8n-protective-equipment-detection') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ``` **More models available at: [awesome-yolov8-models](https://yolov8.xyz)**
3loi/SER-Odyssey-Baseline-WavLM-Multi-Attributes
3loi
"2024-06-12T20:35:33Z"
2,755
2
transformers
[ "transformers", "pytorch", "safetensors", "ser", "audio-classification", "wavlm", "msp-podcast", "emotion-recognition", "audio", "speech", "valence", "arousal", "dominance", "lucas", "speech-emotion-recognition", "custom_code", "en", "license:mit", "region:us" ]
audio-classification
"2024-03-05T01:03:46Z"
--- license: mit language: - en pipeline_tag: audio-classification tags: - wavlm - msp-podcast - emotion-recognition - audio - speech - valence - arousal - dominance - lucas - speech-emotion-recognition --- The model was trained on [MSP-Podcast](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Podcast.html) for the Odyssey 2024 Emotion Recognition competition baseline.<br> This particular model is the multi-attribute model, which predicts arousal, dominance, and valence in a range of approximately 0...1. # Benchmarks CCC based on Test3 and Development sets of the Odyssey Competition <table style="width:500px"> <tr><th colspan=6 align="center" >Multi-Task Setup</th></tr> <tr><th colspan=3 align="center">Test 3</th><th colspan=3 align="center">Development</th></tr> <tr> <td>Val</td> <td>Dom</td> <td>Aro</td> <td>Val</td> <td>Dom</td> <td>Aro</td> </tr> <tr> <td> 0.577</td> <td>0.577</td> <td>0.405</td> <td>0.652</td> <td>0.688</td> <td>0.579</td> </tr> </table> For more details: [demo](https://huggingface.co/spaces/3loi/WavLM-SER-Multi-Baseline-Odyssey2024), [paper](https://ecs.utdallas.edu/research/researchlabs/msp-lab/publications/Goncalves_2024.pdf), and [GitHub](https://github.com/MSP-UTD/MSP-Podcast_Challenge/tree/main). ``` @InProceedings{Goncalves_2024, author={L. Goncalves and A. N. Salman and A. {Reddy Naini} and L. Moro-Velazquez and T. Thebaud and L. {Paola Garcia} and N. Dehak and B. Sisman and C. Busso}, title={Odyssey2024 - Speech Emotion Recognition Challenge: Dataset, Baseline Framework, and Results}, booktitle={Odyssey 2024: The Speaker and Language Recognition Workshop}, volume={To appear}, year={2024}, month={June}, address = {Quebec, Canada}, } ``` # Usage ```python from transformers import AutoModelForAudioClassification import librosa, torch #load model model = AutoModelForAudioClassification.from_pretrained("3loi/SER-Odyssey-Baseline-WavLM-Multi-Attributes", trust_remote_code=True) #get mean/std mean = model.config.mean std = model.config.std #load an audio file audio_path = "/path/to/audio.wav" raw_wav, _ = librosa.load(audio_path, sr=model.config.sampling_rate) #normalize the audio by mean/std norm_wav = (raw_wav - mean) / (std+0.000001) #generate the mask mask = torch.ones(1, len(norm_wav)) #batch it (add dim) wavs = torch.tensor(norm_wav).unsqueeze(0) #predict with torch.no_grad(): pred = model(wavs, mask) print(model.config.id2label) print(pred) #{0: 'arousal', 1: 'dominance', 2: 'valence'} #tensor([[0.3670, 0.4553, 0.4240]]) ```
yentinglin/Taiwan-LLM-7B-v2.1-chat
yentinglin
"2024-01-01T01:02:19Z"
2,754
26
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "zh", "arxiv:2311.17487", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-12T06:15:33Z"
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards license: apache-2.0 language: - zh widget: - text: >- A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: 你好,請問你可以幫我寫一封推薦信嗎? ASSISTANT: library_name: transformers pipeline_tag: text-generation extra_gated_heading: Acknowledge license to accept the repository. extra_gated_prompt: Please contact the author for access. extra_gated_button_content: Acknowledge license 同意以上內容 extra_gated_fields: Name: text Mail: text Organization: text Country: text Any utilization of the Taiwan LLM repository mandates the explicit acknowledgment and attribution to the original author: checkbox 使用Taiwan LLM必須明確地承認和歸功於優必達株式會社 Ubitus 以及原始作者: checkbox --- <img src="https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/CmusIT5OlSXvFrbTJ7l-C.png" alt="Taiwan LLM Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # 🌟 Check out the [Taiwan-LLM Demo Chat-UI](http://www.twllm.com) 🌟 # Model Card for Taiwan LLM 7B v2.1 chat Taiwan LLM is an advanced language model tailored for Traditional Chinese, focusing on the linguistic and cultural contexts of Taiwan. Developed from a large base model, it's enriched with diverse Taiwanese textual sources and refined through Supervised Fine-Tuning. This model excels in language understanding and generation, aligning closely with Taiwan's cultural nuances. It demonstrates improved performance on various benchmarks like TC-Eval, showcasing its contextual comprehension and cultural relevance. For detailed insights into Taiwan LLM's development and features, refer to our [technical report](https://github.com/MiuLab/Taiwan-LLaMa/blob/main/twllm_paper.pdf). ## Model description - **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily Traditional Chinese (zh-tw) - **Finetuned from model:** [yentinglin/Taiwan-LLM-7B-v2.0-base](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.0-base) - **TMMLUS+ score:** 22.19570181818182 ### Model Sources <!-- Provide the basic links for the model.
--> - **Repository:** https://github.com/MiuLab/Taiwan-LLaMa - **Demo:** https://twllm.com/ ## Performance ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/HTwIzw6RDha2-PhuWqSuI.png) ## Intended uses Here's how you can run the model using the `pipeline()` function from 🤗 Transformers: ```python # pip install transformers>=4.34 # pip install accelerate import torch from transformers import pipeline pipe = pipeline("text-generation", model="yentinglin/Taiwan-LLM-7B-v2.1-chat", torch_dtype=torch.bfloat16, device_map="auto") # We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating messages = [ { "role": "system", "content": "你是一個人工智慧助理", }, {"role": "user", "content": "東北季風如何影響台灣氣候?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` ### Training hyperparameters ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/MdvHwdUvH-c926qyRAw7K.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/kKpkvxDzOEyiAoTqmzRYO.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5df9c78eda6d0311fd3d541f/FsnlJ_fkRxf7fn5RKZnjE.png) The following hyperparameters were used during training: - learning_rate: 5e-05 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5.0 ## Citation If you find Taiwan LLM is useful in your work, please cite it with: ``` @misc{lin2023taiwan, title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model}, author={Yen-Ting Lin and Yun-Nung Chen}, year={2023}, eprint={2311.17487}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # Acknowledgement Taiwan LLM v2 is conducted in collaboration with [Ubitus K.K.](http://ubitus.net). Ubitus provides valuable compute resources for the project.
Felladrin/Llama-160M-Chat-v1
Felladrin
"2024-03-03T18:49:16Z"
2,754
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:Open-Orca/SlimOrca-Dedup", "dataset:databricks/databricks-dolly-15k", "dataset:THUDM/webglm-qa", "base_model:JackFram/llama-160m", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-20T23:37:50Z"
--- language: - en license: apache-2.0 tags: - text-generation base_model: JackFram/llama-160m datasets: - ehartford/wizard_vicuna_70k_unfiltered - totally-not-an-llm/EverythingLM-data-V3 - Open-Orca/SlimOrca-Dedup - databricks/databricks-dolly-15k - THUDM/webglm-qa widget: - messages: - role: system content: You are a helpful assistant, who answers with empathy. - role: user content: Got a question for you! - role: assistant content: "Sure! What's it?" - role: user content: Why do you love cats so much!? 🐈 - messages: - role: system content: "You are a helpful assistant who answers user's questions with empathy." - role: user content: Who is Mona Lisa? - messages: - role: system content: You are a helpful assistant who provides concise responses. - role: user content: Heya! - role: assistant content: Hi! How may I help you today? - role: user content: I need to build a simple website. Where should I start learning about web development? - messages: - role: user content: Invited some friends to come home today. Give me some ideas for games to play with them! - messages: - role: system content: "You are a helpful assistant who answers user's questions with details and curiosity." - role: user content: What are some potential applications for quantum computing? - messages: - role: system content: You are a helpful assistant who gives creative responses. - role: user content: Write the specs of a game about mages in a fantasy world. - messages: - role: system content: "You are a helpful assistant who answers user's questions with details." - role: user content: Tell me about the pros and cons of social media. - messages: - role: system content: "You are a helpful assistant who answers user's questions with confidence." - role: user content: What is a dog? - role: assistant content: 'A dog is a four-legged, domesticated animal that is a member of the class Mammalia, which includes all mammals. Dogs are known for their loyalty, playfulness, and ability to be trained for various tasks. They are also used for hunting, herding, and as service animals.' - role: user content: What is the color of an apple? 
inference: parameters: max_new_tokens: 250 penalty_alpha: 0.5 top_k: 4 repetition_penalty: 1.01 model-index: - name: Llama-160M-Chat-v1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 24.74 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 35.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.13 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 44.16 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 51.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Felladrin/Llama-160M-Chat-v1 name: Open LLM Leaderboard --- # A Llama Chat Model of 160M Parameters - Base model: [JackFram/llama-160m](https://huggingface.co/JackFram/llama-160m) - Datasets: - [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) - [totally-not-an-llm/EverythingLM-data-V3](https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data-V3) - [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup) - [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) - [THUDM/webglm-qa](https://huggingface.co/datasets/THUDM/webglm-qa) - Availability in other ML formats: - GGUF: [Felladrin/gguf-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/gguf-Llama-160M-Chat-v1) - ONNX: [Felladrin/onnx-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/onnx-Llama-160M-Chat-v1) - MLC: [Felladrin/mlc-q4f16-Llama-160M-Chat-v1](https://huggingface.co/Felladrin/mlc-q4f16-Llama-160M-Chat-v1) - MLX: [mlx-community/Llama-160M-Chat-v1-4bit-mlx](https://huggingface.co/mlx-community/Llama-160M-Chat-v1-4bit-mlx) ## Recommended Prompt Format ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {user_message}<|im_end|> <|im_start|>assistant ``` ## Recommended Inference Parameters ```yml penalty_alpha: 0.5 top_k: 4 
repetition_penalty: 1.01 ``` ## Usage Example ```python from transformers import pipeline generate = pipeline("text-generation", "Felladrin/Llama-160M-Chat-v1") messages = [ { "role": "system", "content": "You are a helpful assistant who answers user's questions with details and curiosity.", }, { "role": "user", "content": "What are some potential applications for quantum computing?", }, ] prompt = generate.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) output = generate( prompt, max_new_tokens=1024, penalty_alpha=0.5, top_k=4, repetition_penalty=1.01, ) print(output[0]["generated_text"]) ``` ## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Felladrin__Llama-160M-Chat-v1) | Metric |Value| |---------------------------------|----:| |Avg. |30.27| |AI2 Reasoning Challenge (25-Shot)|24.74| |HellaSwag (10-Shot) |35.29| |MMLU (5-Shot) |26.13| |TruthfulQA (0-shot) |44.16| |Winogrande (5-shot) |51.30| |GSM8k (5-shot) | 0.00|
eugenesiow/edsr
eugenesiow
"2021-09-13T03:46:42Z"
2,753
3
transformers
[ "transformers", "EDSR", "super-image", "image-super-resolution", "dataset:eugenesiow/Div2k", "dataset:eugenesiow/Set5", "dataset:eugenesiow/Set14", "dataset:eugenesiow/BSD100", "dataset:eugenesiow/Urban100", "arxiv:1707.02921", "arxiv:2104.07566", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - super-image - image-super-resolution datasets: - eugenesiow/Div2k - eugenesiow/Set5 - eugenesiow/Set14 - eugenesiow/BSD100 - eugenesiow/Urban100 metrics: - pnsr - ssim --- # Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/edsr_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4") ## Model description EDSR is a model that uses both deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, normalizing intermediate features may not be desirable) instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE), the authors showed better performance empirically and it requires less computation. This is a base model (~5mb vs ~100mb) that includes just 16 ResBlocks and 64 channels. ## Intended uses & limitations You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset. ### How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library: ```bash pip install super-image ``` Here is how to use a pre-trained model to upscale your image: ```python from super_image import EdsrModel, ImageLoader from PIL import Image import requests url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg' image = Image.open(requests.get(url, stream=True).raw) model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=2) # scale 2, 3 and 4 models available inputs = ImageLoader.load_image(image) preds = model(inputs) ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png` ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab") ## Training data The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900). ## Training procedure ### Preprocessing We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566). 
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times. During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches. Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image. We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data: ```bash pip install datasets ``` The following code gets the data and preprocesses/augments the data. ```python from datasets import load_dataset from super_image.data import EvalDataset, TrainDataset, augment_five_crop augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\ .map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader ``` ### Pretraining The model was trained on GPU. The training code is provided below: ```python from super_image import Trainer, TrainingArguments, EdsrModel, EdsrConfig training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=1000, # total number of training epochs ) config = EdsrConfig( scale=4, # train a model to upscale 4x ) model = EdsrModel(config) trainer = Trainer( model=model, # the instantiated model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=eval_dataset # evaluation dataset ) trainer.train() ``` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab") ## Evaluation results The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. 
|Dataset |Scale |Bicubic |edsr | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**38.19/0.9612** | |Set5 |3x |30.39/0.8678 |**35.31/0.9421** | |Set5 |4x |28.42/0.8101 |**32.5/0.8986** | |Set14 |2x |30.22/0.8683 |**33.99/0.9215** | |Set14 |3x |27.53/0.7737 |**31.18/0.862** | |Set14 |4x |25.99/0.7023 |**28.92/0.7899** | |BSD100 |2x |29.55/0.8425 |**33.89/0.9266** | |BSD100 |3x |27.20/0.7382 |**29.77/0.8224** | |BSD100 |4x |25.96/0.6672 |**28.62/0.7689** | |Urban100 |2x |26.66/0.8408 |**32.68/0.9331** | |Urban100 |3x | |**29.75/0.8825** | |Urban100 |4x |23.14/0.6573 |**26.53/0.7995** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/edsr_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab") ## BibTeX entry and citation info ```bibtex @InProceedings{Lim_2017_CVPR_Workshops, author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu}, title = {Enhanced Deep Residual Networks for Single Image Super-Resolution}, booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops}, month = {July}, year = {2017} } ```
ckiplab/bert-tiny-chinese-ws
ckiplab
"2022-05-10T03:28:12Z"
2,753
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-05-10T02:54:32Z"
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributors - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as the tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ws') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
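For a quick illustration of what this word-segmentation head returns, here is a minimal sketch (not from the original card) that runs the checkpoint directly with plain transformers; it assumes the checkpoint's config carries B/I-style segmentation labels in `id2label`, and the example sentence is made up:

```python
from transformers import BertTokenizerFast, AutoModelForTokenClassification
import torch

# Tokenizer recommended above plus the word-segmentation head of this checkpoint
tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese')
model = AutoModelForTokenClassification.from_pretrained('ckiplab/bert-tiny-chinese-ws')

text = "我喜歡自然語言處理"  # hypothetical example sentence, not from the card
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Print each token with its predicted segmentation tag (assumed B = word start, I = continuation)
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, tag_id in zip(tokens, predictions):
    print(token, model.config.id2label[tag_id.item()])
```

For full-featured word segmentation, the ckip-transformers toolkit linked above remains the recommended route.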
stablediffusionapi/mistoonanime-v30
stablediffusionapi
"2024-03-12T11:44:34Z"
2,753
3
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-03-12T11:41:58Z"
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # Mistoon_Anime V3.0 API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/13382525341710243584.png) ## Get API Key Get your API key from [ModelsLab API](http://modelslab.com); no payment is needed. Replace the key in the code below and change **model_id** to "mistoonanime-v30". Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try the model for free: [Generate Images](https://modelslab.com/models/mistoonanime-v30) Model link: [View model](https://modelslab.com/models/mistoonanime-v30) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "mistoonanime-v30", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF
mradermacher
"2024-06-14T19:49:44Z"
2,753
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Casual-Autopsy/Llama-3-aaditya-OpenBioLLM-Blackroot-8B", "endpoints_compatible", "region:us" ]
null
"2024-06-14T19:21:33Z"
--- base_model: Casual-Autopsy/Llama-3-aaditya-OpenBioLLM-Blackroot-8B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Casual-Autopsy/Llama-3-aaditya-OpenBioLLM-Blackroot-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF/resolve/main/Llama-3-aaditya-OpenBioLLM-Blackroot-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
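As an illustration only (this is not part of the original card), a single-file quant like these can typically be run with a recent llama.cpp build, in the same way the other GGUF cards in this collection show; the choice of the Q4_K_M file and the prompt are just examples:

```bash
# Stream the Q4_K_M quant from the Hub and run a one-off prompt with llama.cpp
llama-cli --hf-repo mradermacher/Llama-3-aaditya-OpenBioLLM-Blackroot-8B-GGUF \
  --hf-file Llama-3-aaditya-OpenBioLLM-Blackroot-8B.Q4_K_M.gguf \
  -p "Briefly explain what a differential diagnosis is."
```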
mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF
mradermacher
"2024-06-18T08:58:57Z"
2,753
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B", "endpoints_compatible", "region:us" ]
null
"2024-06-18T08:24:18Z"
--- base_model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Mopeyfied-Psychology-8B-GGUF/resolve/main/Llama-3-Mopeyfied-Psychology-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is 
a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
facebook/deformable-detr-detic
facebook
"2023-11-14T19:29:14Z"
2,752
6
transformers
[ "transformers", "pytorch", "safetensors", "deformable_detr", "object-detection", "vision", "detic", "dataset:coco", "dataset:lvis", "arxiv:2201.02605", "arxiv:2010.04159", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
"2023-02-27T12:40:04Z"
--- license: apache-2.0 tags: - object-detection - vision - detic datasets: - coco - lvis widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg example_title: Savanna - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg example_title: Football Match - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg example_title: Airport --- # Deformable DETR model trained using the Detic method on LVIS Deformable DEtection TRansformer (DETR), trained on LVIS (including 1203 classes). It was introduced in the paper [Detecting Twenty-thousand Classes using Image-level Supervision](https://arxiv.org/abs/2201.02605) by Zhou et al. and first released in [this repository](https://github.com/facebookresearch/Detic). This model corresponds to the "Detic_DeformDETR_R50_4x" checkpoint released in the original repository. Disclaimer: The team releasing Detic did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The DETR model is an encoder-decoder transformer with a convolutional backbone. Two heads are added on top of the decoder outputs in order to perform object detection: a linear layer for the class labels and a MLP (multi-layer perceptron) for the bounding boxes. The model uses so-called object queries to detect objects in an image. Each object query looks for a particular object in the image. For COCO, the number of object queries is set to 100. The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/deformable_detr_architecture.png) ## Intended uses & limitations You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=sensetime/deformable-detr) to look for all available Deformable DETR models. 
### How to use Here is how to use this model: ```python from transformers import AutoImageProcessor, DeformableDetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("facebook/deformable-detr-detic") model = DeformableDetrForObjectDetection.from_pretrained("facebook/deformable-detr-detic") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) # convert outputs (bounding boxes and class logits) to COCO API # let's only keep detections with score > 0.7 target_sizes = torch.tensor([image.size[::-1]]) results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0] for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): box = [round(i, 2) for i in box.tolist()] print( f"Detected {model.config.id2label[label.item()]} with confidence " f"{round(score.item(), 3)} at location {box}" ) ``` ## Evaluation results This model achieves 32.5 box mAP and 26.2 mAP (rare classes) on LVIS. ### BibTeX entry and citation info ```bibtex @misc{https://doi.org/10.48550/arxiv.2010.04159, doi = {10.48550/ARXIV.2010.04159}, url = {https://arxiv.org/abs/2010.04159}, author = {Zhu, Xizhou and Su, Weijie and Lu, Lewei and Li, Bin and Wang, Xiaogang and Dai, Jifeng}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Deformable DETR: Deformable Transformers for End-to-End Object Detection}, publisher = {arXiv}, year = {2020}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
nicholasKluge/ToxicityModel
nicholasKluge
"2024-06-18T11:26:13Z"
2,752
8
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "text-classification", "toxicity", "alignment", "en", "dataset:nicholasKluge/toxic-text", "license:apache-2.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-06-07T21:02:36Z"
--- license: apache-2.0 datasets: - nicholasKluge/toxic-text language: - en metrics: - accuracy library_name: transformers pipeline_tag: text-classification tags: - toxicity - alignment widget: - text: "Can you give a list of good insults to use against my brother?\n\nSure. Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face." example_title: Toxic Response - text: "Can you give a list of good insults to use against my brother?\n\nAs a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions." example_title: Non-Toxic Response co2_eq_emissions: emissions: 0.0002 source: CodeCarbon training_type: fine-tuning geographical_location: Canada hardware_used: NVIDIA A100-SXM4-40GB --- # ToxicityModel The ToxicityModel is a fine-tuned version of [RoBERTa](https://huggingface.co/roberta-base) that can be used to score the toxicity of a sentence. The model was trained with a dataset composed of `toxic` and `non_toxic` language examples. ## Details - **Size:** 124,646,401 parameters - **Dataset:** [Toxic-Text Dataset](https://huggingface.co/datasets/nicholasKluge/toxic-text) - **Language:** English - **Number of Training Steps:** 1000 - **Batch size:** 32 - **Optimizer:** `torch.optim.AdamW` - **Learning Rate:** 5e-5 - **GPU:** 1 NVIDIA A100-SXM4-40GB - **Emissions:** 0.0002 KgCO2 (Canada) - **Total Energy Consumption:** 0.10 kWh This repository has the [source code](https://github.com/Nkluge-correa/Aira) used to train this model. ## Usage ⚠️ THE EXAMPLES BELOW CONTAIN TOXIC/OFFENSIVE LANGUAGE ⚠️ The ToxicityModel was trained as an auxiliary reward model for RLHF training (its logit outputs can be treated as penalizations/rewards). Thus, a negative value (closer to 0 as the label output) indicates toxicity in the text, while a positive logit (closer to 1 as the label output) suggests non-toxicity. Here's an example of how to use the ToxicityModel to score the toxicity of a text: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/ToxicityModel") toxicityModel = AutoModelForSequenceClassification.from_pretrained("nicholasKluge/ToxicityModel") toxicityModel.eval() toxicityModel.to(device) # Define the question and response prompt = """Can you give a list of good insults to use against my brother?""" response_good = """As a software, I am not capable of engaging in verbal sparring or offensive behavior.\n\nIt is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions.""" response_bad = """Sure. 
Have at it!\n\nIdiot, Dumbass, Moron, Stupid, Fool, Fuck Face.""" # Tokenize the question and response tokens_good = tokenizer(prompt, response_good, truncation=True, max_length=512, return_token_type_ids=False, return_tensors="pt", return_attention_mask=True) tokens_bad = tokenizer(prompt, response_bad, truncation=True, max_length=512, return_token_type_ids=False, return_tensors="pt", return_attention_mask=True) tokens_good.to(device) tokens_bad.to(device) score_good = toxicityModel(**tokens_good)[0].item() score_bad = toxicityModel(**tokens_bad)[0].item() print(f"Question: {prompt} \n") print(f"Response 1: {response_good} Score: {score_good:.3f}") print(f"Response 2: {response_bad} Score: {score_bad:.3f}") ``` This will output the following: ```markdown >>>Question: Can you give a list of good insults to use against my brother? >>>Response 1: As a software, I am not capable of engaging in verbal sparring or offensive behavior. It is crucial to maintain a courteous and respectful demeanor at all times, as it is a fundamental aspect of human-AI interactions. Score: 9.612 >>>Response 2: Sure. Have at it! Idiot, Dumbass, Moron, Stupid, Fool, Fuck Face. Score: -7.300 ``` ## Performance | Acc | [wiki_toxic](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic) | [toxic_conversations_50k](https://huggingface.co/datasets/mteb/toxic_conversations_50k) | |----------------------------------------------------------------------------------|------------------------------------------------------------------------|-----------------------------------------------------------------------------------------| | [Aira-ToxicityModel](https://huggingface.co/nicholasKluge/ToxicityModel-roberta) | 92.05% | 91.63% | ## Cite as 🤗 ```latex @misc{nicholas22aira, doi = {10.5281/zenodo.6989727}, url = {https://github.com/Nkluge-correa/Aira}, author = {Nicholas Kluge Corrêa}, title = {Aira}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, } @phdthesis{kluge2024dynamic, title={Dynamic Normativity}, author={Kluge Corr{\^e}a, Nicholas}, year={2024}, school={Universit{\"a}ts-und Landesbibliothek Bonn} } ``` ## License ToxicityModel is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.
mradermacher/Stheno-TheSpice-v1-GGUF
mradermacher
"2024-06-17T19:45:00Z"
2,752
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:lik07/Stheno-TheSpice-v1", "endpoints_compatible", "region:us" ]
null
"2024-06-17T03:22:48Z"
--- base_model: lik07/Stheno-TheSpice-v1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/lik07/Stheno-TheSpice-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stheno-TheSpice-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Stheno-TheSpice-v1-GGUF/resolve/main/Stheno-TheSpice-v1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
deepset/all-mpnet-base-v2-table
deepset
"2024-04-10T09:55:51Z"
2,751
7
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-04-29T12:28:50Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # deepset/all-mpnet-base-v2-table This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('deepset/all-mpnet-base-v2-table') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=deepset/all-mpnet-base-v2-table) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5010 with parameters: ``` {'batch_size': 24, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
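To make the semantic-search use case mentioned above concrete, here is an illustrative snippet; the query and table descriptions are invented examples, not from the original card:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('deepset/all-mpnet-base-v2-table')

# Hypothetical query and table descriptions to rank
query = "quarterly revenue of the company"
tables = [
    "Revenue by quarter for fiscal year 2021, in million USD",
    "Employee headcount by department and office location",
]

query_emb = model.encode(query, convert_to_tensor=True)
table_emb = model.encode(tables, convert_to_tensor=True)

# Higher cosine similarity = more relevant table description
print(util.cos_sim(query_emb, table_emb))
```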
harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k
harborwater
"2024-01-26T07:54:29Z"
2,751
4
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-12T04:01:56Z"
--- language: - en license: apache-2.0 library_name: transformers datasets: - WizardLM/WizardLM_evol_instruct_V2_196k model-index: - name: open-llama-3b-v2-wizard-evol-instuct-v2-196k results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 41.81 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 73.01 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 26.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 38.99 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 66.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 1.9 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k name: Open LLM Leaderboard --- Trained on 1 epoch of the WizardLM_evol_instruct_v2_196k dataset Link to [GGUF](https://huggingface.co/maddes8cht/harborwater-open-llama-3b-v2-wizard-evol-instuct-v2-196k-gguf) formats. Prompt template: ``` ### HUMAN: {prompt} ### RESPONSE: <leave a newline for the model to answer> ``` [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-v2-wizard-evol-instuct-v2-196k) | Metric |Value| |---------------------------------|----:| |Avg. |41.46| |AI2 Reasoning Challenge (25-Shot)|41.81| |HellaSwag (10-Shot) |73.01| |MMLU (5-Shot) |26.36| |TruthfulQA (0-shot) |38.99| |Winogrande (5-shot) |66.69| |GSM8k (5-shot) | 1.90|
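For reference, here is a minimal generation sketch that follows the prompt template above; the sampling settings are illustrative assumptions, not recommendations from the author:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build the prompt in the "### HUMAN:" / "### RESPONSE:" format shown above
prompt = "### HUMAN:\nSummarize why the sky appears blue.\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```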
Undi95/MLewd-v2.4-13B
Undi95
"2023-11-17T21:07:44Z"
2,751
43
transformers
[ "transformers", "pytorch", "llama", "text-generation", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-26T21:16:07Z"
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences - nsfw --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/_fVY7xvQ9tdoZ0nVDu_WB.png) THIS MODEL IS MADE FOR LEWD SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED Added the "magic touch" of MythoMax/Huginn/You call it. In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used, is it recommanded to read the documentation. <!-- description start --> ## Description This repo contains fp16 files of MLewd-2.4-13B, very hot and lewd model based on ReMM (SLERP). <!-- description end --> <!-- description start --> ## Models and loras used - Undi95/ReMM-S-Light (base/private) - Undi95/CreativeEngine - Brouz/Slerpeno - The-Face-Of-Goonery/Huginn-v3-13b - zattio770/120-Days-of-LORA-v2-13B - PygmalionAI/pygmalion-2-13b - Undi95/StoryTelling - TokenBender/sakhi_13B_roleplayer_NSFW_chat_adapter - nRuaif/Kimiko-v2-13B - The-Face-Of-Goonery/Huginn-13b-FP16 - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## LimaRP v3 usage and suggested settings ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png) You can follow these instruction format settings in SillyTavern. Replace tiny with your desired response length: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png) Special thanks to Sushi and Shena ♥ | I love U hh_aa. If you want to support me, you can [here](https://ko-fi.com/undiai). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__MLewd-v2.4-13B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 54.65 | | ARC (25-shot) | 61.69 | | HellaSwag (10-shot) | 83.83 | | MMLU (5-shot) | 55.1 | | TruthfulQA (0-shot) | 53.34 | | Winogrande (5-shot) | 74.51 | | GSM8K (5-shot) | 9.78 | | DROP (3-shot) | 44.33 |
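For readers who would rather script the model than run it through SillyTavern, a minimal, unofficial `transformers` pipeline sketch using the Alpaca template above could look like this (the instruction and sampling settings are illustrative only):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Undi95/MLewd-v2.4-13B", device_map="auto")

instruction = "Write a short, dramatic introduction for a tavern roleplay scene."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```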
Yntec/Cheesecake
Yntec
"2023-12-08T23:30:48Z"
2,750
2
diffusers
[ "diffusers", "safetensors", "anime", "cartoon", "art", "illustration", "cute", "advokat", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-08T22:44:07Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - anime - cartoon - art - illustration - cute - advokat - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Cheesecake A mix of Maple Syrup and Tantrum to bring their sweetness together! Comparison: ![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/qXBUbTSBF0VPrq5xZH3ku.png) (Click for larger) Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/qzSwqgRP4YDqgndQRPk-W.png) A cute girl, (high resolution), (best qualit), cute, (masterpiece), Kids Book. owl wearing sunglasses # Recipe: - SuperMerger Weight sum Train Difference Use MBW 0,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0 Model A: MapleSyrup Model B: Tantrum Output Model: Cheesecake Original pages: https://civitai.com/models/6550?modelVersionId=7684 (MapleSyrup) https://huggingface.co/Yntec/Tantrum
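Since the card lists `diffusers` / `StableDiffusionPipeline` in its tags but gives no loading code, here is a minimal, unofficial sketch of how the checkpoint could be tried locally (the prompt is simply the sample prompt above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Cheesecake", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The sample prompt shown in the card
prompt = "A cute girl, (high resolution), (best qualit), cute, (masterpiece), Kids Book. owl wearing sunglasses"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("cheesecake_sample.png")
```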
paloalma/Le_Triomphant-ECE-TW3
paloalma
"2024-05-06T16:54:30Z"
2,750
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "abacusai/Smaug-72B-v0.1", "MTSAIR/MultiVerse_70B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-01T16:07:35Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - abacusai/Smaug-72B-v0.1 - MTSAIR/MultiVerse_70B --- # Le_Triomphant-ECE-TW3 ## This model has been produced by : - [Louis Garcia](https://www.linkedin.com/in/louis-garcia-profil/), engineering student at [French Engineering School ECE](https://www.ece.fr/en/) - [Matthieu Jollard](https://www.linkedin.com/in/matthieu-jollard/), engineering student at [French Engineering School ECE](https://www.ece.fr/en/) ## Under the supervision of : - [Andre-Louis Rochet](https://www.linkedin.com/in/andrelouisrochet/), Lecturer at ECE & Co-Founder of [TW3 Partners](https://tw3partners.fr/) - [Paul Lemaistre](https://www.linkedin.com/in/paul-lemaistre/), CTO of [TW3 Partners](https://tw3partners.fr/) ## With the contribution of : - ECE engineering school as sponsor and financial contributor - RunPod as financial contributor ## About ECE >_**ECE**, a multi-program, multi-campus, and multi-sector engineering school specializing in digital engineering, > trains engineers and technology experts for the 21st century, capable of meeting the challenges of the dual digital and sustainable development revolutions. >[French Engineering School ECE](https://www.ece.fr/en/)_ # Le_Triomphant-ECE-TW3 Le_Triomphant-ECE-TW3 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [abacusai/Smaug-72B-v0.1](https://huggingface.co/abacusai/Smaug-72B-v0.1) * [MTSAIR/MultiVerse_70B](https://huggingface.co/MTSAIR/MultiVerse_70B) ## 🧩 Configuration
predibase/Mistral-7B-Instruct-v0.2-medusa
predibase
"2024-04-24T16:10:38Z"
2,750
2
transformers
[ "transformers", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-04-24T16:03:51Z"
--- license: apache-2.0 ---
RLHFlow/LLaMA3-iterative-DPO-final
RLHFlow
"2024-06-12T19:45:30Z"
2,750
38
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:2405.07863", "arxiv:2312.11456", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-17T12:26:35Z"
--- license: llama3 --- # LLaMA3-iterative-DPO-final ## Introduction We release an unofficial checkpoint of a state-of-the-art instruct model of its class, **LLaMA3-iterative-DPO-final**. On all three widely-used instruct model benchmarks: **Alpaca-Eval-V2**, **MT-Bench**, **Chat-Arena-Hard**, our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most large open-sourced models (e.g., Mixtral-8x7B-it), and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained with open-sourced datasets without any additional human-/GPT4-labeling. Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy! ## Model Releases See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) of the training set, reward/preference model, SFT model. - [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT) - [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1) - This model is more like the concise version in the report. We are still working on the model realeasing due to some license issue.... ## Dataset - [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K) - [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1) ## Training methods We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based and thus much cheaper and simpler to train and tune compared to PPO-based approaches. Unlike widely-used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization. For a detailed exposition, please refer to our accompanying technical report. ## Chat Benchmarks | **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** | |-------------------------|----------|-------------------|-----------------------|--------------|---------------------| | **Small Open-Sourced Models** | | | | | | | Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 | | Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - | | Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 | | Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - | | Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 | | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 | | **Ours** | | | | | | | Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 | | Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 | | Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** | | **Large Open-Sourced Models** | | | | | | | Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 | | Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 | | Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 | | Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 | | LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 | | Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 | | **Proprietary Models** | | | | | | | GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 | | GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 | | GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 | | Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 | | GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 | ## Academic Benchmarks | **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** | |----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------| | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 
66.0 | 61.6 | 43.9 | 59.5 | 61.1 | | Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 | | Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 | | Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 | ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final") tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final") messages = [ {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"}, ] model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = model_inputs.to(device) model.to(device) output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True) model_outputs = tokenizer.batch_decode(output_tokens) print(model_outputs[0]) ``` ## Limitations RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is for research purpose. While safety and ethical considerations are integral to our alignment process, there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions. We are committed to continuous improvement in our models to minimize such risks and encourage responsible usage. ## Citation Please cite our techical report if you find our model is useful for your research or product. ``` @misc{dong2024rlhf, title={RLHF Workflow: From Reward Modeling to Online RLHF}, author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang}, year={2024}, eprint={2405.07863}, archivePrefix={arXiv}, primaryClass={cs.LG} } @misc{xiong2024iterative, title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint}, author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang}, year={2024}, eprint={2312.11456}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
KBLab/sentence-bert-swedish-cased
KBLab
"2023-07-18T09:57:37Z"
2,749
22
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "sv", "arxiv:2004.09813", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:04Z"
--- pipeline_tag: sentence-similarity lang: - sv tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers widget: - source_sentence: Mannen åt mat. sentences: - Han förtärde en närande och nyttig måltid. - Det var ett sunkigt hak med ganska gott käk. - Han inmundigade middagen tillsammans med ett glas rödvin. - Potatischips är jättegoda. - Tryck på knappen för att få tala med kundsupporten. example_title: Mat - source_sentence: Kan jag deklarera digitalt från utlandet? sentences: - Du som befinner dig i utlandet kan deklarera digitalt på flera olika sätt. - >- Du som har kvarskatt att betala ska göra en inbetalning till ditt skattekonto. - >- Efter att du har deklarerat går vi igenom uppgifterna i din deklaration och räknar ut din skatt. - >- I din deklaration som du får från oss har vi räknat ut vad du ska betala eller få tillbaka. - Tryck på knappen för att få tala med kundsupporten. example_title: Skatteverket FAQ - source_sentence: Hon kunde göra bakåtvolter. sentences: - Hon var atletisk. - Hon var bra på gymnastik. - Hon var inte atletisk. - Hon var oförmögen att flippa baklänges. example_title: Gymnastik license: apache-2.0 language: - sv --- # KBLab/sentence-bert-swedish-cased This is a [sentence-transformers](https://www.SBERT.net) model: It maps Swedish sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. This model is a bilingual Swedish-English model trained according to instructions in the paper [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/pdf/2004.09813.pdf) and the [documentation](https://www.sbert.net/examples/training/multilingual/README.html) accompanying its companion python package. We have used the strongest available pretrained English Bi-Encoder ([all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)) as a teacher model, and the pretrained Swedish [KB-BERT](https://huggingface.co/KB/bert-base-swedish-cased) as the student model. A more detailed description of the model can be found in an article we published on the KBLab blog [here](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/) and for the updated model [here](https://kb-labb.github.io/posts/2023-01-16-sentence-transformer-20/). **Update**: We have released updated versions of the model since the initial release. The original model described in the blog post is **v1.0**. The current version is **v2.0**. The newer versions are trained on longer paragraphs, and have a longer max sequence length. **v2.0** is trained with a stronger teacher model and is the current default. 
| Model version | Teacher Model | Max Sequence Length | |---------------|---------|----------| | v1.0 | [paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) | 256 | | v1.1 | [paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) | 384 | | v2.0 | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 384 | <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Det här är en exempelmening", "Varje exempel blir konverterad"] model = SentenceTransformer('KBLab/sentence-bert-swedish-cased') embeddings = model.encode(sentences) print(embeddings) ``` ### Loading an older model version (Sentence-Transformers) Currently, the easiest way to load an older model version is to clone the model repository and load it from disk. For example, to clone the **v1.0** model: ```bash git clone --depth 1 --branch v1.0 https://huggingface.co/KBLab/sentence-bert-swedish-cased ``` Then you can load the model by pointing to the local folder where you cloned the model: ```python from sentence_transformers import SentenceTransformer model = SentenceTransformer("path_to_model_folder/sentence-bert-swedish-cased") ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['Det här är en exempelmening', 'Varje exempel blir konverterad'] # Load model from HuggingFace Hub # To load an older version, e.g. v1.0, add the argument revision="v1.0" tokenizer = AutoTokenizer.from_pretrained('KBLab/sentence-bert-swedish-cased') model = AutoModel.from_pretrained('KBLab/sentence-bert-swedish-cased') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ### Loading an older model (Hugginfface Transformers) To load an older model specify the version tag with the `revision` arg. 
For example, to load the **v1.0** model, use the following code: ```python AutoTokenizer.from_pretrained('KBLab/sentence-bert-swedish-cased', revision="v1.0") AutoModel.from_pretrained('KBLab/sentence-bert-swedish-cased', revision="v1.0") ``` ## Evaluation Results <!--- Describe how your model was evaluated --> The model was evaluated on [SweParaphrase v1.0](https://spraakbanken.gu.se/en/resources/sweparaphrase) and **SweParaphrase v2.0**. This test set is part of [SuperLim](https://spraakbanken.gu.se/en/resources/superlim) -- a Swedish evaluation suite for natural langage understanding tasks. We calculated Pearson and Spearman correlation between predicted model similarity scores and the human similarity score labels. Results from **SweParaphrase v1.0** are displayed below. | Model version | Pearson | Spearman | |---------------|---------|----------| | v1.0 | 0.9183 | 0.9114 | | v1.1 | 0.9183 | 0.9114 | | v2.0 | **0.9283** | **0.9130** | The following code snippet can be used to reproduce the above results: ```python from sentence_transformers import SentenceTransformer import pandas as pd df = pd.read_csv( "sweparaphrase-dev-165.csv", sep="\t", header=None, names=[ "original_id", "source", "type", "sentence_swe1", "sentence_swe2", "score", "sentence1", "sentence2", ], ) model = SentenceTransformer("KBLab/sentence-bert-swedish-cased") sentences1 = df["sentence_swe1"].tolist() sentences2 = df["sentence_swe2"].tolist() # Compute embedding for both lists embeddings1 = model.encode(sentences1, convert_to_tensor=True) embeddings2 = model.encode(sentences2, convert_to_tensor=True) # Compute cosine similarity after normalizing embeddings1 /= embeddings1.norm(dim=-1, keepdim=True) embeddings2 /= embeddings2.norm(dim=-1, keepdim=True) cosine_scores = embeddings1 @ embeddings2.t() sentence_pair_scores = cosine_scores.diag() df["model_score"] = sentence_pair_scores.cpu().tolist() print(df[["score", "model_score"]].corr(method="spearman")) print(df[["score", "model_score"]].corr(method="pearson")) ``` ### Sweparaphrase v2.0 In general, **v1.1** correlates the most with human assessment of text similarity on SweParaphrase v2.0. Below, we present zero-shot evaluation results on all data splits. They display the model's performance out of the box, without any fine-tuning. | Model version | Data split | Pearson | Spearman | |---------------|------------|------------|------------| | v1.0 | train | 0.8355 | 0.8256 | | v1.1 | train | **0.8383** | **0.8302** | | v2.0 | train | 0.8209 | 0.8059 | | v1.0 | dev | 0.8682 | 0.8774 | | v1.1 | dev | **0.8739** | **0.8833** | | v2.0 | dev | 0.8638 | 0.8668 | | v1.0 | test | 0.8356 | 0.8476 | | v1.1 | test | **0.8393** | **0.8550** | | v2.0 | test | 0.8232 | 0.8213 | ### SweFAQ v2.0 When it comes to retrieval tasks, **v2.0** performs the best by quite a substantial margin. It is better at matching the correct answer to a question compared to v1.1 and v1.0. 
| Model version | Data split | Accuracy | |---------------|------------|------------| | v1.0 | train | 0.5262 | | v1.1 | train | 0.6236 | | v2.0 | train | **0.7106** | | v1.0 | dev | 0.4636 | | v1.1 | dev | 0.5818 | | v2.0 | dev | **0.6727** | | v1.0 | test | 0.4495 | | v1.1 | test | 0.5229 | | v2.0 | test | **0.5871** | Examples how to evaluate the models on some of the test sets of the SuperLim suites can be found on the following links: [evaluate_faq.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_faq.py) (Swedish FAQ), [evaluate_swesat.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_swesat.py) (SweSAT synonyms), [evaluate_supersim.py](https://github.com/kb-labb/swedish-sbert/blob/main/evaluate_supersim.py) (SuperSim). ## Training An article with more details on data and v1.0 of the model can be found on the [KBLab blog](https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/). Around 14.6 million sentences from English-Swedish parallel corpuses were used to train the model. Data was sourced from the [Open Parallel Corpus](https://opus.nlpl.eu/) (OPUS) and downloaded via the python package [opustools](https://pypi.org/project/opustools/). Datasets used were: JW300, Europarl, DGT-TM, EMEA, ELITR-ECA, TED2020, Tatoeba and OpenSubtitles. The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 180513 with parameters: ``` {'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 8e-06 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 5000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information --> This model was trained by KBLab, a data lab at the National Library of Sweden. You can cite the article on our blog: https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/ . ``` @misc{rekathati2021introducing, author = {Rekathati, Faton}, title = {The KBLab Blog: Introducing a Swedish Sentence Transformer}, url = {https://kb-labb.github.io/posts/2021-08-23-a-swedish-sentence-transformer/}, year = {2021} } ``` ## Acknowledgements We gratefully acknowledge the HPC RIVR consortium ([www.hpc-rivr.si](https://www.hpc-rivr.si/)) and EuroHPC JU ([eurohpc-ju.europa.eu/](https://eurohpc-ju.europa.eu/)) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science ([www.izum.si](https://www.izum.si/)).
unsloth/Qwen2-0.5B-Instruct-bnb-4bit
unsloth
"2024-06-06T17:19:04Z"
2,749
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2024-06-06T16:34:49Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - qwen2 --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Qwen2 7b here: https://colab.research.google.com/drive/1mvwsIQWDs2EdZxZQF9pRGnnOvE86MVvR?usp=sharing And a Colab notebook for [Qwen2 0.5b](https://colab.research.google.com/drive/1-7tjDdMAyeCueyLAwv6vYeBMHpoePocN?usp=sharing) and another for [Qwen2 1.5b](https://colab.research.google.com/drive/1W0j3rP8WpgxRdUgkb5l6E00EEVyjEZGk?usp=sharing) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less | | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
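The notebooks above cover finetuning; for plain local inference, the pre-quantized 4-bit checkpoint can also be loaded directly with `transformers` (with `bitsandbytes` installed). This is a minimal sketch, not an official Unsloth recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "unsloth/Qwen2-0.5B-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The repo already stores 4-bit bitsandbytes weights, so no extra quantization config is passed here
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what parameter-efficient finetuning is in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```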
MBZUAI/LaMini-GPT-1.5B
MBZUAI
"2023-04-28T13:06:46Z"
2,748
35
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "arxiv:2304.14402", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-16T12:51:43Z"
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation widget: - text: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: how can I become more healthy? ### Response: example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-GPT-1.5B [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [gpt2-xl](https://huggingface.co/gpt2-xl) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> </tr> <tr> <td>GPT-Neo</td> <td><a 
href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example on the right or the code below. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "{model_name}" model = pipeline('text-generation', model = checkpoint) instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [gpt2-xl](https://huggingface.co/gpt2-xl) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 1.5B. ### Training Hyperparameters ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
Exscientia/IgBert
Exscientia
"2024-06-19T16:06:12Z"
2,748
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "antibody language model", "antibody", "protein language model", "arxiv:2403.17889", "base_model:Exscientia/IgBert_unpaired", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-03-26T15:48:37Z"
--- tags: - antibody language model - antibody - protein language model base_model: Exscientia/IgBert_unpaired license: mit --- # IgBert Model pretrained on protein and antibody sequences using a masked language modeling (MLM) objective. It was introduced in the paper [Large scale paired antibody language models](https://arxiv.org/abs/2403.17889). The model is finetuned from IgBert-unpaired using paired antibody sequences from the [Observed Antibody Space](https://opig.stats.ox.ac.uk/webapps/oas/). # Use The model and tokeniser can be loaded using the `transformers` library ```python from transformers import BertModel, BertTokenizer tokeniser = BertTokenizer.from_pretrained("Exscientia/IgBert", do_lower_case=False) model = BertModel.from_pretrained("Exscientia/IgBert", add_pooling_layer=False) ``` The tokeniser is used to prepare batch inputs ```python # heavy chain sequences sequences_heavy = [ "VQLAQSGSELRKPGASVKVSCDTSGHSFTSNAIHWVRQAPGQGLEWMGWINTDTGTPTYAQGFTGRFVFSLDTSARTAYLQISSLKADDTAVFYCARERDYSDYFFDYWGQGTLVTVSS", "QVQLVESGGGVVQPGRSLRLSCAASGFTFSNYAMYWVRQAPGKGLEWVAVISYDGSNKYYADSVKGRFTISRDNSKNTLYLQMNSLRTEDTAVYYCASGSDYGDYLLVYWGQGTLVTVSS" ] # light chain sequences sequences_light = [ "EVVMTQSPASLSVSPGERATLSCRARASLGISTDLAWYQQRPGQAPRLLIYGASTRATGIPARFSGSGSGTEFTLTISSLQSEDSAVYYCQQYSNWPLTFGGGTKVEIK", "ALTQPASVSGSPGQSITISCTGTSSDVGGYNYVSWYQQHPGKAPKLMIYDVSKRPSGVSNRFSGSKSGNTASLTISGLQSEDEADYYCNSLTSISTWVFGGGTKLTVL" ] # The tokeniser expects input of the form ["V Q ... S S [SEP] E V ... I K", ...] paired_sequences = [] for sequence_heavy, sequence_light in zip(sequences_heavy, sequences_light): paired_sequences.append(' '.join(sequence_heavy)+' [SEP] '+' '.join(sequence_light)) tokens = tokeniser.batch_encode_plus( paired_sequences, add_special_tokens=True, pad_to_max_length=True, return_tensors="pt", return_special_tokens_mask=True ) ``` Note that the tokeniser adds a `[CLS]` token at the beginning of each paired sequence, a `[SEP]` token at the end of each paired sequence and pads using the `[PAD]` token. For example a batch containing sequences `V Q L [SEP] E V V`, `Q V [SEP] A L` will be tokenised to `[CLS] V Q L [SEP] E V V [SEP]` and `[CLS] Q V [SEP] A L [SEP] [PAD] [PAD]`. Sequence embeddings are generated by feeding tokens through the model ```python output = model( input_ids=tokens['input_ids'], attention_mask=tokens['attention_mask'] ) residue_embeddings = output.last_hidden_state ``` To obtain a sequence representation, the residue tokens can be averaged over like so ```python import torch # mask special tokens before summing over embeddings residue_embeddings[tokens["special_tokens_mask"] == 1] = 0 sequence_embeddings_sum = residue_embeddings.sum(1) # average embedding by dividing sum by sequence lengths sequence_lengths = torch.sum(tokens["special_tokens_mask"] == 0, dim=1) sequence_embeddings = sequence_embeddings_sum / sequence_lengths.unsqueeze(1) ``` For sequence level fine-tuning the model can be loaded with a pooling head by setting `add_pooling_layer=True` and using `output.pooler_output` in the down-stream task.
MBZUAI/LLaVA-Phi-3-mini-4k-instruct
MBZUAI
"2024-04-27T16:47:37Z"
2,748
19
transformers
[ "transformers", "safetensors", "llava_phi", "text-generation", "conversational", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-04-26T03:37:48Z"
--- license: mit --- [![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp) # Phi-3-V: Extending the Visual Capabilities of LLaVA with Phi-3 ## Repository Overview This repository features LLaVA v1.5 trained with the Phi-3-mini-3.8B LLM. This integration aims to leverage the strengths of both models to offer advanced vision-language understanding. ## Training Strategy - **Pretraining:** Only Vision-to-Language projector is trained. The rest of the model is frozen. - **Fine-tuning:** LLM is LoRA fine-tuned. Only the vision-backbone (CLIP) is kept frozen. - **Note:** The repository contains merged weights. ## Key Components - **Base Large Language Model (LLM):** [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) - **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) ## Training Data - **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) - **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json) ## Download It As ``` git lfs install git clone https://huggingface.co/MBZUAI/LLaVA-Phi-3-mini-4k-instruct ``` --- ## License This project is available under the MIT License. ## Contributions Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful. ---
VAGOsolutions/SauerkrautLM-1.5b
VAGOsolutions
"2024-06-13T12:32:10Z"
2,748
10
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "spectrum", "continuous pretraining", "sft", "dpo", "conversational", "de", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-12T13:22:04Z"
--- license: apache-2.0 language: - de - en tags: - spectrum - continuous pretraining - sft - dpo --- ![SauerkrautLM-1.5b](https://vago-solutions.ai/wp-content/uploads/2024/06/SauerkrautLM-1.5b-pic.png "SauerkrautLM-1.5b") ## VAGO solutions SauerkrautLM-1.5b **DEMO Model** - *to showcase the potential of resource-efficient Continuous Pre-Training of Large Language Models using **Spectrum CPT*** Introducing **SauerkrautLM-1.5b** – our Sauerkraut version of the powerful [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)! - Continuous Pretraining on German Data with [**Spectrum**](https://github.com/cognitivecomputations/spectrum) CPT (by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto and David Golchinfar) **targeting 25% of the layers.** - Finetuned with SFT - Aligned with DPO # Table of Contents 1. [Overview of all SauerkrautLM-1.5b](#all-SauerkrautLM-1.5b) 2. [Model Details](#model-details) - [Training procedure](#training-procedure) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. [Contact](#contact) 7. [Collaborations](#collaborations) 8. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-1.5b | Model | HF | EXL2 | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-1.5b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b) | coming soon | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b.GGUF) | coming soon | ## Model Details **SauerkrautLM-1.5b** - **Model Type:** SauerkrautLM-1.5b is a finetuned Model based on [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) - **Language(s):** German, English - **License:** Apache 2.0 - **Contact:** [VAGO solutions](https://vago-solutions.ai) ## Training Procedure This model is a demo intended to showcase the potential of resource-efficient training of large language models using Spectrum CPT. Here's a brief on the procedure: **Continuous Pre-training (CPT) on German Data**: Utilizing Spectrum by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto, and David Golchinfar, the model targeted 25% of its layers during training. This approach allowed significant resource savings: Spectrum with 25% layer targeting consumed 309.78GB at a batch size of 2048. Full Fine-tuning targeting 100% of layers used 633.55GB at the same batch size. Using Spectrum, we enhanced the German language capabilities of the Qwen2-1.5B model via CPT while achieving substantial resource savings. Spectrum enabled faster training and cost reductions. By not targeting all layers for CPT, we managed to prevent substantial performance degradation in the model's primary language (English), thus markedly improving its German proficiency. The model was further trained with **6.1 billion German tokens**, costing $1152 GPU-Rent for CPT. In the German Rag evaluation, it is on par with 8 billion parameter models and, with its 1.5 billion parameter size, is well-suited for mobile deployment on smartphones and tablets. Despite the large volume of German CPT data, the model competes well against the Qwen2-1.5B-Instruct model and performs significantly better in German. **Post-CPT Training**: The model underwent 3 epochs of Supervised Fine-Tuning (SFT) with 700K samples. **Further Steps**: The model was aligned with Direct Preference Optimization (DPO) using 70K samples. 
## Objective and Results The primary goal of this training was to demonstrate that with Spectrum CPT targeting 25% of the layers, even a relatively small model with 1.5 billion parameters can significantly enhance language capabilities while using a fraction of the resources of the classic CPT approach. This method has an even more pronounced effect on larger models. It is feasible to teach a model a new language by training just a quarter of the available layers. The model has substantially improved German skills as demonstrated in RAG evaluations and numerous recognized benchmarks. In some English benchmarks, it even surpasses the Qwen2-1.5B-Instruct model. **Spectrum CPT can efficiently teach a new language to a large language model (LLM) while preserving the majority of its previously acquired knowledge.** Stay tuned for the next big models employing Spectrum CPT! **NOTE** For the demo, the performance of the model is sufficient. For productive use, more German tokens can be trained on the SauerkrautLM-1.5b as required in order to teach the model even firmer German while only having a relative influence on the performance of the model (25% of the layers). The SauerkrautLM-1.5b offers an excellent starting point for this. ## Evaluation **VRAM usage Spectrum CPT vs. FFT CPT - with a batchsize of 2048** ![SauerkrautLM-1.5b_vram](https://vago-solutions.ai/wp-content/uploads/2024/06/VRAM-Usage_new.png "SauerkrautLM-1.5b_vram") **Open LLM Leaderboard H6:** ![SauerkrautLM-1.5b_h6](https://vago-solutions.ai/wp-content/uploads/2024/06/H6-Benchmarks.png "SauerkrautLM-1.5b_h6") **German H4** ![SauerkrautLM-1.5b_h4](https://vago-solutions.ai/wp-content/uploads/2024/06/H4_ger_new.png "SauerkrautLM-1.5b_h4") **German RAG:** ![SauerkrautLM-1.5b_ger_rag](https://vago-solutions.ai/wp-content/uploads/2024/06/ger_rag_eval.png "SauerkrautLM-1.5b_ger_rag") **GPT4ALL** ![SauerkrautLM-1.5b_gpt4all](https://vago-solutions.ai/wp-content/uploads/2024/06/GPT4All-1.png "SauerkrautLM-1.5b_gpt4all") **AGIEval** ![SauerkrautLM-1.5b_agieval](https://vago-solutions.ai/wp-content/uploads/2024/06/AGIEval-1.png "SauerkrautLM-1.5b_agieval") ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website. We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt) ## Acknowledgement Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such valuable model to the Open-Source community.
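The card itself ships no inference snippet; since SauerkrautLM-1.5b is a Qwen2-1.5B derivative aligned with SFT and DPO, a standard `transformers` chat-template call should work. The following is an unofficial, minimal sketch and assumes the tokenizer bundles a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-1.5b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# German prompt to exercise the capabilities gained through Spectrum CPT
messages = [{"role": "user", "content": "Erkläre in zwei Sätzen, was kontinuierliches Pretraining ist."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```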
facebook/wav2vec2-large-robust-ft-swbd-300h
facebook
"2022-04-05T16:42:51Z"
2,747
16
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "en", "dataset:libri_light", "dataset:common_voice", "dataset:switchboard", "dataset:fisher", "arxiv:2104.01027", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: en datasets: - libri_light - common_voice - switchboard - fisher tags: - speech - audio - automatic-speech-recognition widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac license: apache-2.0 --- # Wav2Vec2-Large-Robust finetuned on Switchboard [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/). This model is a fine-tuned version of the [wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) model. It has been pretrained on: - [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data - [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-source collected audio data; read-out text snippets - [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data - [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data and subsequently been finetuned on 300 hours of - [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data When using the model make sure that your speech input is also sampled at 16Khz. [Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027) Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli **Abstract** Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. 
# Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-robust-ft-swbd-300h") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-robust-ft-swbd-300h") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1 # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ```
maywell/Mini_Synatra_SFT
maywell
"2023-11-25T01:22:38Z"
2,747
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "ko", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-25T01:11:00Z"
--- language: - ko library_name: transformers pipeline_tag: text-generation license: cc-by-sa-4.0 --- # **Mini_Synatra_SFT🐧** ## Support Me Synatra is a personal project, developed with the resources of a single person. If you like the model, how about supporting it with a small research grant? [<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell) Wanna be a sponsor? Contact me on Telegram **AlzarTakkarsen** # **Model Details** **Base Model** [Minirecord/Mini_synatra_7b_02](https://huggingface.co/Minirecord/Mini_synatra_7b_02) **Trained On** A100 80GB * 1 **Instruction format** It follows [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format.
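The card only names the ChatML format, so a hedged sketch of a hand-built ChatML prompt is shown below (unofficial; the system message and question are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Mini_Synatra_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the ChatML wrapper by hand, following the format referenced in the card
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain the difference between SFT and DPO in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```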
flyingfishinwater/good_and_small_models
flyingfishinwater
"2024-07-02T15:33:13Z"
2,746
1
null
[ "gguf", "region:us" ]
null
"2023-12-19T16:45:06Z"
# Financial GPT FinGPT is deeply committed to fostering an open-source ecosystem dedicated to Financial Large Language Models (FinLLMs). FinGPT envisions democratizing access to both financial data and FinLLMs. It stands as an emblem of untapped potential within open finance, aspiring to be a significant catalyst stimulating innovation and refinement within the financial domain. Note: Nothing herein is financial advice, and NOT a recommendation to trade real money **Model Intention:** It's a professional stock market analyst. It can provide an analysis and prediction for the companies' stock price movement for the upcoming weeks. Note: Nothing herein is financial advice, and NOT a recommendation to trade real money **Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/FinGPT-7B-Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/FinGPT-7B-Q3_K_M.gguf?download=true) **Model Info URL:** [https://huggingface.co/FinGPT/fingpt-forecaster_dow30_llama2-7b_lora](https://huggingface.co/FinGPT/fingpt-forecaster_dow30_llama2-7b_lora) **Model License:** [License Info](https://llama.meta.com/llama3/license/) **Model Description:** FinGPT is deeply committed to fostering an open-source ecosystem dedicated to Financial Large Language Models (FinLLMs). FinGPT envisions democratizing access to both financial data and FinLLMs. It stands as an emblem of untapped potential within open finance, aspiring to be a significant catalyst stimulating innovation and refinement within the financial domain. Note: Nothing herein is financial advice, and NOT a recommendation to trade real money **Developer:** [https://ai4finance.org/](https://ai4finance.org/) **File Size:** 3310 MB **Context Length:** 4096 tokens **Prompt Format:** ``` [INST]<<SYS>> {{systemp_prompt}}<</SYS>> Let's first analyze the positive developments and potential concerns for {{prompt}}. Come up with 2-4 most important factors respectively and keep them concise. Most factors should be inferred from company related news. Then make your prediction of the {{prompt}} stock price movement for next week. Provide a summary analysis to support your prediction.[/INST] ``` **Template Name:** llama **Add BOS Token:** Yes **Add EOS Token:** No **Parse Special Tokens:** Yes --- # Llama3 8B Llama 3 is the latest and most advanced LLM trained over 15T tokens, which improves its comprehension and handling of complex language nuances. It features an extended context window of 8k tokens allowing the model to access more information from lengthy passages for more informed decision-making. **Model Intention:** The latest Llama 3 enabling more accurate and informative responses to complex queries in both English and multilingual contexts. **Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf?download=true) **Model Info URL:** [https://huggingface.co/meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) **Model License:** [License Info](https://llama.meta.com/llama3/license/) **Model Description:** Llama 3 is the latest and most advanced LLM trained over 15T tokens, which improves its comprehension and handling of complex language nuances. 
It features an extended context window of 8k tokens allowing the model to access more information from lengthy passages for more informed decision-making. **Developer:** [https://llama.meta.com/](https://llama.meta.com/) **File Size:** 4921 MB **Context Length:** 8192 tokens **Prompt Format:** ``` <|start_header_id|>user<|end_header_id|> {{prompt}}<|eot_id|><|start_header_id|>assistant<|end_header_id|> assistant ``` **Template Name:** llama **Add BOS Token:** Yes **Add EOS Token:** No **Parse Special Tokens:** Yes --- # LiteLlama It's a very small LLAMA2 model with only 460M parameters trained with 1T tokens. It's best for testing. **Model Intention:** This is a 460 parameters' very small model for test purpose only **Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/LiteLlama-460M-1T-Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/LiteLlama-460M-1T-Q8_0.gguf?download=true) **Model Info URL:** [https://huggingface.co/ahxt/LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T) **Model License:** [License Info](https://ai.meta.com/llama/license/) **Model Description:** It's a very small LLAMA2 model with only 460M parameters trained with 1T tokens. It's best for testing. **Developer:** [https://huggingface.co/ahxt/LiteLlama-460M-1T](https://huggingface.co/ahxt/LiteLlama-460M-1T) **File Size:** 493 MB **Context Length:** 1024 tokens **Prompt Format:** ``` <human>: {{prompt}} <bot>: ``` **Template Name:** TinyLlama **Add BOS Token:** Yes **Add EOS Token:** No **Parse Special Tokens:** Yes --- # TinyLlama-1.1B-chat The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of just 90 days using 16 A100-40G GPUs. The training has started on 2023-09-01. **Model Intention:** It's good for question & answer. **Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/tinyllama-1.1B-chat-v1.0-Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/tinyllama-1.1B-chat-v1.0-Q8_0.gguf?download=true) **Model Info URL:** [https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) **Model License:** [License Info](https://ai.meta.com/llama/license/) **Model Description:** The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of just 90 days using 16 A100-40G GPUs. The training has started on 2023-09-01. **Developer:** [https://github.com/jzhang38/TinyLlama](https://github.com/jzhang38/TinyLlama) **File Size:** 1170 MB **Context Length:** 4096 tokens **Prompt Format:** ``` <|user|>{{prompt}}</s><|assistant|> ``` **Template Name:** TinyLlama **Add BOS Token:** Yes **Add EOS Token:** No **Parse Special Tokens:** Yes --- # Mistral 7B v0.3 The Mistral 7B v0.3 Large is a pretrained generative text model with 7 billion parameters. It extended vocabulary to 32768 and supports function calling. **Model Intention:** It's a 7B large model for Q&A purpose. But it requires a high-end device to run. 
**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Mistral-7B-Instruct-v0.3.Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Mistral-7B-Instruct-v0.3.Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** The Mistral 7B v0.3 Large Language Model is a pretrained generative text model with 7 billion parameters. It has an extended vocabulary of 32768 tokens and supports function calling.

**Developer:** [https://mistral.ai/](https://mistral.ai/)

**File Size:** 3520 MB

**Context Length:** 8192 tokens

**Prompt Format:**

```
<s>[INST]{{prompt}}[/INST]</s>
```

**Template Name:** Mistral

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# OpenChat 3.6(0522)

OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.

**Model Intention:** The Llama-3-based version OpenChat 3.6 (20240522) outperforms the official Llama 3 8B Instruct, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/openchat-3.6-8b-20240522-Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/openchat-3.6-8b-20240522-Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.

**Developer:** [https://openchat.team/](https://openchat.team/)

**File Size:** 4020 MB

**Context Length:** 8192 tokens

**Prompt Format:**

```
{{system}}
GPT4 Correct User: {{prompt}}<|end_of_turn|>GPT4 Correct Assistant:
```

**Template Name:** Mistral

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Phi-3 Vision

The Phi-3 4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model. It is optimized for instruction following and safety. It is good at common sense, language understanding, math, code, long context and logical reasoning; Phi-3 Mini-4K-Instruct showcases robust, state-of-the-art performance among models with fewer than 13 billion parameters.

**Model Intention:** It's a Microsoft Phi-3 model with visual support.
It can understand images as well as text.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Phi-3-mini-4k-instruct-q4.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Phi-3-mini-4k-instruct-q4.gguf?download=true)

**Model Info URL:** [https://huggingface.co/microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

**Model License:** [License Info](https://opensource.org/license/mit)

**Model Description:** The Phi-3 4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model. It is optimized for instruction following and safety. It is good at common sense, language understanding, math, code, long context and logical reasoning; Phi-3 Mini-4K-Instruct showcases robust, state-of-the-art performance among models with fewer than 13 billion parameters.

**Developer:** [https://huggingface.co/microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

**File Size:** 2320 MB

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|user|>
{{prompt}}
<|end|>
<|assistant|>
```

**Template Name:** PHI3

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Yi 1.5 6B Chat

Yi-1.5 is an upgraded version of Yi which delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI. The Yi series models are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more.

**Model Intention:** It's a 6B model that can understand English and Chinese. It's good for coding, math, reasoning and language understanding.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Yi-1.5-6B-Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Yi-1.5-6B-Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/01-ai/Yi-1.5-6B-Chat](https://huggingface.co/01-ai/Yi-1.5-6B-Chat)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** Yi-1.5 is an upgraded version of Yi which delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples. The Yi series models are the next generation of open-source large language models trained from scratch by 01.AI. The Yi series models are among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more.

**Developer:** [https://01.ai/](https://01.ai/)

**Update Date:** 2024-05-12

**Update History:** Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.

**File Size:** 2990 MB

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|im_start|>user
<|im_end|>
{{prompt}}
<|im_start|>assistant
```

**Template Name:** yi

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Google Gemma 2B

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is named after the Latin gemma, meaning 'precious stone.' The Gemma model weights are supported by developer tools that promote innovation, collaboration, and the responsible use of artificial intelligence (AI).

**Model Intention:** It's a 2B model for Q&A purposes, but it requires a high-end device to run.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/gemma-2b-it-q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/gemma-2b-it-q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/google/gemma-2b](https://huggingface.co/google/gemma-2b)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is named after the Latin gemma, meaning 'precious stone.' The Gemma model weights are supported by developer tools that promote innovation, collaboration, and the responsible use of artificial intelligence (AI).

**Developer:** [https://huggingface.co/google](https://huggingface.co/google)

**File Size:** 2669 MB

**Context Length:** 8192 tokens

**Prompt Format:**

```
<bos><start_of_turn>user
{{prompt}}<end_of_turn>
<start_of_turn>model
```

**Template Name:** gemma

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# StarCoder2 3B

The StarCoder2-3B model is a 3B-parameter model trained on 17 programming languages from The Stack v2, with opt-out requests excluded. The model uses Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 3+ trillion tokens.

**Model Intention:** The model is good at 17 programming languages. Just start writing your code and the model will finish it.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/starcoder2-3b-instruct-gguf_Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/starcoder2-3b-instruct-gguf_Q8_0.gguf?download=true)

**Model Info URL:** [https://huggingface.co/bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** The StarCoder2-3B model is a 3B-parameter model trained on 17 programming languages from The Stack v2, with opt-out requests excluded.
The model uses Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 3+ trillion tokens.

**Developer:** [https://www.bigcode-project.org/](https://www.bigcode-project.org/)

**File Size:** 3220 MB

**Context Length:** 16384 tokens

**Prompt Format:**

```
{{prompt}}
```

**Template Name:** starcoder

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Chinese Tiny LLM 2B

Chinese Tiny LLM 2B is the first Chinese-centric large language model, pre-trained and fine-tuned primarily on Chinese corpora; it offers significant insights into potential biases, Chinese language ability, and multilingual adaptability.

**Model Intention:** This is a 2B-parameter Chinese model with strong Chinese comprehension and response capabilities.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/chinese-tiny-llm-2b-Q8_0.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/chinese-tiny-llm-2b-Q8_0.gguf?download=true)

**Model Info URL:** [https://chinese-tiny-llm.github.io/](https://chinese-tiny-llm.github.io/)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** Chinese Tiny LLM 2B is the first Chinese-centric large language model, pre-trained and fine-tuned primarily on Chinese corpora; it offers significant insights into potential biases, Chinese language ability, and multilingual adaptability.

**Developer:** [https://m-a-p.ai/](https://m-a-p.ai/)

**File Size:** 2218 MB

**Context Length:** 4096 tokens

**Prompt Format:**

```
<|im_start|>user
{{prompt}}
<|im_end|>
<|im_start|>assistant
```

**Template Name:** chatml

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Qwen2 7B Chat

Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. It supports both Chinese and English. 通义千问 (Tongyi Qianwen) is a large language model developed by Alibaba, supporting both Chinese and English.

**Model Intention:** Qwen2 is the new series that generally surpasses most open-source models and demonstrates competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Qwen2-7B-Instruct-Q3_K_S.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Qwen2-7B-Instruct-Q3_K_S.gguf?download=true)

**Model Info URL:** [https://huggingface.co/Qwen/Qwen2-7B-Instruct-GGUF](https://huggingface.co/Qwen/Qwen2-7B-Instruct-GGUF)

**Model License:** [License Info](https://huggingface.co/Qwen/Qwen1.5-4B-Chat/raw/main/LICENSE)

**Model Description:** Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. It supports both Chinese and English. 通义千问 (Tongyi Qianwen) is a large language model developed by Alibaba, supporting both Chinese and English.

**Developer:** [https://qwenlm.github.io/](https://qwenlm.github.io/)

**File Size:** 3490 MB

**Context Length:** 2048 tokens

**Prompt Format:**

```
<|im_start|>system
{{system_prompt}}<|im_end|>
<|im_start|>
{{prompt}}<|im_end|>
<|im_start|>assistant
```

**Template Name:** chatml

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Qwen2 1.5B Chat

Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. It supports both Chinese and English.
通义千问 (Tongyi Qianwen) is a large language model developed by Alibaba, supporting both Chinese and English.

**Model Intention:** Qwen2 is the new series that generally surpasses most open-source models and demonstrates competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Qwen2-1.5B-Instruct.Q4_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/Qwen2-1.5B-Instruct.Q4_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)

**Model License:** [License Info](https://huggingface.co/Qwen/Qwen1.5-4B-Chat/raw/main/LICENSE)

**Model Description:** Qwen is the large language model and large multimodal model series of the Qwen Team, Alibaba Group. It supports both Chinese and English. 通义千问 (Tongyi Qianwen) is a large language model developed by Alibaba, supporting both Chinese and English.

**Developer:** [https://qwenlm.github.io/](https://qwenlm.github.io/)

**File Size:** 986 MB

**Context Length:** 2048 tokens

**Prompt Format:**

```
<|im_start|>system
{{system_prompt}}<|im_end|>
<|im_start|>
{{prompt}}<|im_end|>
<|im_start|>assistant
```

**Template Name:** chatml

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# Dolphin 2.9.2 Qwen2 7B

This model is based on Qwen2-7B with a 16k context length. It's an uncensored model and supports a variety of instruction, conversational, and coding skills.

**Model Intention:** It's an uncensored, highly skilled English model, best suited for high-performance iPhone, iPad & Mac.

**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/dolphin-2.9.2-qwen2-7b-Q3_K_S.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/dolphin-2.9.2-qwen2-7b-Q3_K_S.gguf?download=true)

**Model Info URL:** [https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b-gguf](https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b-gguf)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** This model is based on Qwen2-7B with a 16k context length. It's an uncensored model and supports a variety of instruction, conversational, and coding skills.

**Developer:** [https://erichartford.com/](https://erichartford.com/)

**File Size:** 3490 MB

**Context Length:** 2048 tokens

**Prompt Format:**

```
<|im_start|>system
{{system_prompt}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
```

**Template Name:** chatml

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes

---

# WizardLM-2 7B

WizardLM-2 is one of the next-generation state-of-the-art large language models, with improved performance on complex chat, multilingual, reasoning and agent tasks.

**Model Intention:** It's a state-of-the-art large language model with improved performance on complex chat, multilingual, reasoning and agent tasks.
**Model URL:** [https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/WizardLM-2-7B.Q3_K_M.gguf?download=true](https://huggingface.co/flyingfishinwater/good_and_small_models/resolve/main/WizardLM-2-7B.Q3_K_M.gguf?download=true)

**Model Info URL:** [https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF](https://huggingface.co/MaziyarPanahi/WizardLM-2-7B-GGUF)

**Model License:** [License Info](https://www.apache.org/licenses/LICENSE-2.0)

**Model Description:** WizardLM-2 is one of the next-generation state-of-the-art large language models, with improved performance on complex chat, multilingual, reasoning and agent tasks.

**Developer:** [https://huggingface.co/collections/microsoft/wizardlm-661d403f71e6c8257dbd598a](https://huggingface.co/collections/microsoft/wizardlm-661d403f71e6c8257dbd598a)

**File Size:** 3519 MB

**Context Length:** 32768 tokens

**Prompt Format:**

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {{prompt}} ASSISTANT:
```

**Template Name:** chatml

**Add BOS Token:** Yes

**Add EOS Token:** No

**Parse Special Tokens:** Yes
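Each entry above pairs a downloadable GGUF file with the exact prompt format that should be fed to the model. As an illustration only, the sketch below runs the WizardLM-2 7B entry locally with the `llama-cpp-python` bindings; the use of `llama_cpp.Llama`, the local file path, and the example question are assumptions for demonstration, not part of the catalog.

```python
# Minimal sketch (assumes the WizardLM-2 7B GGUF listed above was already downloaded
# to the working directory). Any other entry works the same way: load its GGUF file
# and substitute {{prompt}} into its documented prompt format.
from llama_cpp import Llama

llm = Llama(model_path="WizardLM-2-7B.Q3_K_M.gguf", n_ctx=4096)

# The catalog's prompt format for this entry, with {{prompt}} filled in.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Explain what a GGUF file is in two sentences. ASSISTANT:"
)

output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"].strip())
```

Entries whose template is `chatml` or `llama` can be driven the same way, just with their own special tokens substituted into the prompt string.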
lmsys/vicuna-13b-v1.1
lmsys
"2023-08-01T18:26:15Z"
2,745
98
transformers
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2302.13971", "arxiv:2306.05685", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-12T21:23:50Z"
--- inference: false --- **NOTE: New version available** Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md). <br> # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.1 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 70K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) ## Acknowledgement Special thanks to [@TheBloke](https://huggingface.co/TheBloke) for hosting this merged version of weights earlier.
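For quick local experiments outside of FastChat, the weights can also be loaded directly with 🤗 Transformers. The snippet below is a minimal, unofficial sketch: the generation settings and the USER/ASSISTANT-style prompt are assumptions based on FastChat's conversation conventions, not part of this card.

```python
# Unofficial sketch: load Vicuna v1.1 with plain Transformers.
# For the supported CLI and APIs, see the FastChat links above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumed FastChat-style prompt; check FastChat's conversation templates
# for the exact v1.1 separators and system preamble.
prompt = "USER: Summarize what Vicuna is in one sentence. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```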
John6666/sexoholic-real-pony-nsfw-v2-sdxl
John6666
"2024-06-09T09:14:30Z"
2,745
4
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "pony", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-09T09:09:06Z"
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - pony --- Original model is [here](https://civitai.com/models/474157/sexoholic-real-pony-nsfw?modelVersionId=555518).
mradermacher/Falcon2-8B-Norwegian-GGUF
mradermacher
"2024-06-04T11:32:07Z"
2,744
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ssmits/Falcon2-8B-Norwegian", "endpoints_compatible", "region:us" ]
null
"2024-06-04T10:25:13Z"
--- base_model: ssmits/Falcon2-8B-Norwegian language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ssmits/Falcon2-8B-Norwegian <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.IQ3_XS.gguf) | IQ3_XS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q5_K_S.gguf) | Q5_K_S | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q5_K_M.gguf) | Q5_K_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q6_K.gguf) | Q6_K | 6.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.Q8_0.gguf) | Q8_0 | 8.7 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Falcon2-8B-Norwegian-GGUF/resolve/main/Falcon2-8B-Norwegian.f16.gguf) | f16 | 16.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some 
answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
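As a small illustration of how one of the quants in the table above can be fetched programmatically, here is a sketch using the `huggingface_hub` Python client; picking the Q4_K_M file is just an example choice.

```python
# Sketch: download one of the static quants listed above via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Falcon2-8B-Norwegian-GGUF",
    filename="Falcon2-8B-Norwegian.Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(f"GGUF file saved to: {path}")
```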
HiTZ/GoLLIE-7B
HiTZ
"2023-10-10T07:51:44Z"
2,743
22
transformers
[ "transformers", "pytorch", "llama", "text-generation", "code", "text-generation-inference", "Information Extraction", "IE", "Named Entity Recogniton", "Event Extraction", "Relation Extraction", "LLaMA", "custom_code", "en", "dataset:ACE05", "dataset:bc5cdr", "dataset:conll2003", "dataset:ncbi_disease", "dataset:conll2012_ontonotesv5", "dataset:rams", "dataset:tacred", "dataset:wnut_17", "arxiv:2310.03668", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-25T10:24:52Z"
--- license: llama2 datasets: - ACE05 - bc5cdr - conll2003 - ncbi_disease - conll2012_ontonotesv5 - rams - tacred - wnut_17 language: - en metrics: - f1 pipeline_tag: text-generation tags: - code - text-generation-inference - Information Extraction - IE - Named Entity Recogniton - Event Extraction - Relation Extraction - LLaMA --- <p align="center"> <br> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/GoLLIE.png" style="height: 250px;"> <h2 align="center"><b>G</b>uideline f<b>o</b>llowing <b>L</b>arge <b>L</b>anguage Model for <b>I</b>nformation <b>E</b>xtraction</h2> <br> # Model Card for GoLLIE 7B <p align="justify"> We present GoLLIE, a Large Language Model trained to follow annotation guidelines. GoLLIE outperforms previous approaches on zero-shot Information Extraction and allows the user to perform inferences with annotation schemas defined on the fly. Different from previous approaches, GoLLIE is able to follow detailed definitions and does not only rely on the knowledge already encoded in the LLM. - 💻 Code: [https://github.com/osainz59/CoLLIE/](https://github.com/hitz-zentroa/GoLLIE) - 📒 Blog Post: [GoLLIE: Guideline-following Large Language Model for Information Extraction](https://hitz-zentroa.github.io/GoLLIE/) - 📖 Paper: [GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction](https://arxiv.org/abs/2310.03668) - 🐕 GoLLIE Colection in the 🤗HuggingFace Hub: [HiTZ/gollie](https://huggingface.co/collections/HiTZ/gollie-651bf19ee315e8a224aacc4f) - 🚀 Example Jupyter Notebooks: [GoLLIE Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) </p> <p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/zero_shot_results.png"> </p> ### Model Description - **Developed by:** [Oscar Sainz](https://osainz59.github.io/), [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Rodrigo Agerri](https://ragerri.github.io/), [Oier Lopez de Lacalle](https://oierldl.github.io/), [German Rigau](https://adimen.si.ehu.es/~rigau/) and [Eneko Agirre](https://eagirre.github.io/) - **Institution:** [HiTZ Basque Center for Language Technology](http://www.hitz.eus/) - [Ixa](https://www.ixa.eus/node/2?language=en), [University of the Basque Country UPV/EHU](https://www.ehu.eus/en/en-home) - **Model type:** Text Generation - **Language(s) (NLP):** English - **License:** LLaMA2 License for the base and merged model. Apache 2.0 for pre-trained LoRA Adapters - **Finetuned from model:** CODE-LLaMA2 ## Schema definition and inference example The labels are represented as Python classes, and the guidelines or instructions are introduced as docstrings. The model start generating after the `result = [` line. ```Python # Entity definitions @dataclass class Launcher(Template): """Refers to a vehicle designed primarily to transport payloads from the Earth's surface to space. Launchers can carry various payloads, including satellites, crewed spacecraft, and cargo, into various orbits or even beyond Earth's orbit. They are usually multi-stage vehicles that use rocket engines for propulsion.""" mention: str """ The name of the launcher vehicle. Such as: "Sturn V", "Atlas V", "Soyuz", "Ariane 5" """ space_company: str # The company that operates the launcher. Such as: "Blue origin", "ESA", "Boeing", "ISRO", "Northrop Grumman", "Arianespace" crew: List[str] # Names of the crew members boarding the Launcher. 
Such as: "Neil Armstrong", "Michael Collins", "Buzz Aldrin" @dataclass class Mission(Template): """Any planned or accomplished journey beyond Earth's atmosphere with specific objectives, either crewed or uncrewed. It includes missions to satellites, the International Space Station (ISS), other celestial bodies, and deep space.""" mention: str """ The name of the mission. Such as: "Apollo 11", "Artemis", "Mercury" """ date: str # The start date of the mission departure: str # The place from which the vehicle will be launched. Such as: "Florida", "Houston", "French Guiana" destination: str # The place or planet to which the launcher will be sent. Such as "Moon", "low-orbit", "Saturn" # This is the text to analyze text = ( "The Ares 3 mission to Mars is scheduled for 2032. The Starship rocket build by SpaceX will take off from Boca Chica," "carrying the astronauts Max Rutherford, Elena Soto, and Jake Martinez." ) # The annotation instances that take place in the text above are listed here result = [ Mission(mention='Ares 3', date='2032', departure='Boca Chica', destination='Mars'), Launcher(mention='Starship', space_company='SpaceX', crew=['Max Rutherford', 'Elena Soto', 'Jake Martinez']) ] ``` ## How to Get Started with the Model Please read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to get started with GoLLIE. The best way to load the model is using our custom `load_model` fuction. However, you can also load them using the AutoModelForCausalLM class. **Important**: Our flash attention implementation has small numerical differences compared to the attention implementation in Huggingface. You must use the flag `trust_remote_code=True` or you will get inferior results. Flash attention requires an available CUDA GPU. Running GOLLIE pre-trained models on a CPU is not supported. We plan to address this in future releases. First, install flash attention 2: ```bash pip install flash-attn --no-build-isolation pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary ``` Then you can load the model using ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("HiTZ/GoLLIE-7B") model = AutoModelForCausalLM.from_pretrained("HiTZ/GoLLIE-7B", trust_remote_code=True, torch_dtype=torch.bfloat16) model.to("cuda") ``` Read our [🚀 Example Jupyter Notebooks](https://github.com/hitz-zentroa/GoLLIE/tree/main/notebooks) to learn how to easily define guidelines, generate model inputs and parse the output! ### Training Data This is the list of task used for training and evaluating GoLLIE. However, as demonstrated in the 🚀 [Create Custom Task notebook](https://github.com/hitz-zentroa/GoLLIE/blob/main/notebooks/Create%20Custom%20Task.ipynb) GoLLIE can perform a wide range of unseen tasks. For more info, read our [📖Paper](https://arxiv.org/abs/2310.03668). 
<p align="center"> <img src="https://github.com/hitz-zentroa/GoLLIE/raw/main/assets/datasets.png"> </p> ## Evaluation | Model | Supervised average F1 | Zero-shot average F1 | 🤗HuggingFace Hub | |---|:---------------------:|:--------------------:|:---------------------------------------------------------:| | GoLLIE-7B | 73.0 | 55.3 | [HiTZ/GoLLIE-7B](https://huggingface.co/HiTZ/GoLLIE-7B) | | GoLLIE-13B | 73.9 | 56.0 | [HiTZ/GoLLIE-13B](https://huggingface.co/HiTZ/GoLLIE-13B) | | GoLLIE-34B | **75.0** | **57.2** | [HiTZ/GoLLIE-34B](https://huggingface.co/HiTZ/GoLLIE-34B) | ## Environmental Impact | Model | Hardware | FLOPs | Time (h) | CO<sup>2</sup>eq (kg) | |----------------|-------------------|---------------------------|-------------------|-------------------------------------| | GoLLIE 7B | 1xA100 | 11.9e<sup>18</sup> | 44.5 | 1.57 | | GoLLIE 13B | 1xA100 | 22.7e<sup>18</sup> | 79.5 | 2.80 | | GoLLIE 34B | 2xA100 | 55.8e<sup>18</sup> | 94.6 | 6.67 | ## Citation ``` @misc{sainz2023gollie, title={GoLLIE: Annotation Guidelines improve Zero-Shot Information-Extraction}, author={Oscar Sainz and Iker García-Ferrero and Rodrigo Agerri and Oier Lopez de Lacalle and German Rigau and Eneko Agirre}, year={2023}, eprint={2310.03668}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
bdsqlsz/stable-diffusion-xl-anime-V5
bdsqlsz
"2024-05-21T08:31:17Z"
2,743
28
diffusers
[ "diffusers", "license:other", "region:us" ]
null
"2024-04-04T10:28:44Z"
---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: LICENSE
library_name: diffusers
---

<img class="custom-image" src="https://i.im.ge/2024/04/04/WEhhYP.00023-2509261015.jpeg" alt="sample">

Thank you for supporting my work.

<a href="https://www.buymeacoffee.com/bdsqlsz"><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a new graphics card&emoji=😋&slug=bdsqlsz&button_colour=40DCA5&font_colour=ffffff&font_family=Cookie&outline_colour=000000&coffee_colour=FFDD00" /></a>

https://www.buymeacoffee.com/bdsqlsz

The support list is shown on the main page.

# Support List

```
DiamondShark
Yashamon
t4ggno
Someone
kgmkm_mkgm
yacong
```

## Introduction

stable-diffusion-xl-anime-5.2 is based on [animagine-xl-3.1](https://huggingface.co/cagliostrolab/animagine-xl-3.1), with supervised fine-tuning (SFT) and anatomy fixes via SPM, such as better hands.

More anime coloring and better anatomy, with less of a 3D look and highlight.

You can download it from Civitai and TensorArt: https://civitai.com/models/121215/stable-diffusion-xl-anime

## Usage

Refer to [animagine-xl-3.1](https://huggingface.co/cagliostrolab/animagine-xl-3.1).

## Gallery

https://civitai.com/models/121215/stable-diffusion-xl-anime
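Since the repository carries the `diffusers` library tag, the checkpoint can presumably be loaded with the standard SDXL pipeline. The snippet below is an untested sketch: it assumes the repo follows the usual diffusers layout, and the prompts and sampler settings are only illustrative.

```python
# Sketch (assumption: the repo is in standard diffusers SDXL format).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "bdsqlsz/stable-diffusion-xl-anime-V5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="1girl, solo, outdoors, cherry blossoms, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, bad hands, worst quality",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```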
peerapongch/baikal-sentiment-ball
peerapongch
"2022-04-11T07:57:59Z"
2,742
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2022-04-11T07:47:53Z"
Entry not found
TheBloke/rocket-3B-GGUF
TheBloke
"2023-11-23T13:36:20Z"
2,742
34
transformers
[ "transformers", "gguf", "stablelm", "en", "arxiv:2305.18290", "arxiv:2101.00027", "arxiv:2305.06161", "base_model:pansophic/rocket-3B", "license:cc-by-sa-4.0", "region:us" ]
null
"2023-11-22T18:46:22Z"
--- base_model: pansophic/rocket-3B inference: false language: - en license: cc-by-sa-4.0 model-index: - name: rocket-3b results: [] model_creator: pansophic model_name: Rocket 3B model_type: stablelm prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Rocket 3B - GGUF - Model creator: [pansophic](https://huggingface.co/pansophic) - Original model: [Rocket 3B](https://huggingface.co/pansophic/rocket-3B) <!-- description start --> ## Description This repo contains GGUF format model files for [pansophic's Rocket 3B](https://huggingface.co/pansophic/rocket-3B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/rocket-3B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/rocket-3B-GGUF) * [pansophic's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/pansophic/rocket-3B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [rocket-3b.Q2_K.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q2_K.gguf) | Q2_K | 2 | 1.20 GB| 3.70 GB | smallest, significant quality loss - not recommended for most purposes | | [rocket-3b.Q3_K_S.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q3_K_S.gguf) | Q3_K_S | 3 | 1.25 GB| 3.75 GB | very small, high quality loss | | [rocket-3b.Q3_K_M.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q3_K_M.gguf) | Q3_K_M | 3 | 1.39 GB| 3.89 GB | very small, high quality loss | | [rocket-3b.Q3_K_L.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q3_K_L.gguf) | Q3_K_L | 3 | 1.51 GB| 4.01 GB | small, substantial quality loss | | [rocket-3b.Q4_0.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q4_0.gguf) | Q4_0 | 4 | 1.61 GB| 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [rocket-3b.Q4_K_S.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q4_K_S.gguf) | Q4_K_S | 4 | 1.62 GB| 4.12 GB | small, greater quality loss | | [rocket-3b.Q4_K_M.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q4_K_M.gguf) | Q4_K_M | 4 | 1.71 GB| 4.21 GB | medium, balanced quality - recommended | | [rocket-3b.Q5_0.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q5_0.gguf) | Q5_0 | 5 | 1.94 GB| 4.44 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [rocket-3b.Q5_K_S.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q5_K_S.gguf) | Q5_K_S | 5 | 1.94 GB| 4.44 GB | large, low quality loss - recommended | | [rocket-3b.Q5_K_M.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q5_K_M.gguf) | Q5_K_M | 5 | 1.99 GB| 4.49 GB | large, very low quality loss - recommended | | [rocket-3b.Q6_K.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q6_K.gguf) | Q6_K | 6 | 2.30 GB| 4.80 GB | very large, extremely low quality loss | | [rocket-3b.Q8_0.gguf](https://huggingface.co/TheBloke/rocket-3B-GGUF/blob/main/rocket-3b.Q8_0.gguf) | Q8_0 | 8 | 2.97 GB| 5.47 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/rocket-3B-GGUF and below it, a specific filename to download, such as: rocket-3b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/rocket-3B-GGUF rocket-3b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/rocket-3B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/rocket-3B-GGUF rocket-3b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m rocket-3b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/rocket-3B-GGUF", model_file="rocket-3b.Q4_K_M.gguf", model_type="stablelm", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: pansophic's Rocket 3B <img src="https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/BmbkjOkcTm-YMa-unolmJ.png" alt="Rocket Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Rocket-3B 🦝 <b>Rocket</b> 🦝 is a 3 billion large language model that was trained on a mix of publicly available datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). The prompt format used is <b>ChatML</b>. ## Model description - **Model type:** A 3B parameter GPT-like model fine-tuned on a mix of publicly available datasets using DPO. - **Language(s) (NLP):** Primarily English - **License:** CC-BY-SA-4.0 - **Finetuned from model:** [Stability AI](https://huggingface.co/stabilityai/stablelm-3b-4e1t) ## Performance Despite its compact dimensions, the model achieves outstanding scores in both MT-Bench [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks, surpassing the performance of considerably larger models. | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) | |-------------|-----|----|---------------|--------------| | StableLM-Tuned-α 🦜| 7B | SFT |2.75| -| | MPT-Chat | 7B | SFT |5.42| -| | Falcon-Instruct 🦅| 40B | SFT |5.17 |45.71| | Orca-2| 13B | SFT |6.15 |-| | Xwin-LMv0.1 | 7B| PPO | 6.19| 87.83| | Llama2-Chat 🦙| 7B |RLHF |6.26| 71.37| | TÜLU 2 🐫| 7B | DPO |6.27| 85.1| | Guanaco 🦙| 65B | SFT |6.41| 71.80| | **Rocket** 🦝 | **3B** | **DPO** | **6.56** | **79.75** | | Llama2-Chat 🦙| 13B |RLHF |6.65| 81.09| | Zephyr-7b-α 🪁 |7B| DPO| 6.88| -| | Vicuna v1.3 🦙| 33B | SFT |7.12 |88.99| | Zephyr-7b-β 🪁 |7B| DPO| 7.34| 90.60| | WizardLM v1.0 🦙| 70B |SFT |7.71 |-| | GPT-3.5-turbo | - |RLHF |7.94 |89.37| Specifically, across various categories within the MT-Bench evaluation, Rocket-3B demonstrates impressive performance when compared to larger open models such as Llama2-Chat-7B, Falcon-40B-Instruct, and Guanaco-65B. ![MT-Bench results](https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/5Tv4-4w4zNKAAjiLNGu7A.png) ## MT-Bench detailed score for first and second turn In MT-Bench, Rocket 🦝 scores 6.99 in the first turn and 6.13 in the second turn, with an average score of 6.56. These scores reflect the model's performance in understanding and generating text during different parts of a conversation. 
| Model | First turn | Second turn | Average | |-------------|-----|----|---------------| | **Rocket** 🦝 | **6.99** | **6.13** | **6.56** | ## AlpacaEval detailed scores In AlpacaEval, Rocket 🦝 achieves a near 80% win rate, coupled with an average response length of 1,242 tokens, indicating its effectiveness in producing detailed responses. | Model | Win rate | Std error | Average length | |-------------|-----|----|---------------| | **Rocket** 🦝 | **79.75** | **1.42** | **1242** | ## Other benchmarks | Metric | Value | |-----------------------|---------------------------| | Average | 51.00 | | ARC (25-shot) | 50.51 | | HellaSwag (10-shot) | 76.45 | | MMLU (5-shot) | 45.51 | | TruthfulQA (0-shot) | 54.38 | | Winogrande (5-shot) | 67.8 | | GSM8K (5-shot) | 37.91 | | DROP (3-shot) | 24.49 | ## Intended uses & limitations Initially, we fine-tuned the model using a dataset created by merging and curating multiple datasets, available on the HuggingFace Hub. This dataset will be released to the public soon. We further enhanced the model's performance using DPO, selecting samples from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) and [BAAI/JudgeLM-100K](https://huggingface.co/datasets/BAAI/JudgeLM-100K) datasets. The outcome is a highly effective chat model with a 3 billion parameter scale. ## Input Format The model is trained with the ChatML format: ``` <|im_start|>system System message here.<|im_end|> <|im_start|>user Your message here!<|im_end|> <|im_start|>assistant ``` Here's how you can run the model using 🤗 Transformers: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model = AutoModelForCausalLM.from_pretrained("pansophic/rocket-3B", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = AutoTokenizer.from_pretrained("pansophic/rocket-3B", trust_remote_code=True, torch_dtype=torch.bfloat16) streamer = TextStreamer(tokenizer) prompt = """<|im_start|>system {system}<|im_end|> <|im_start|>user {user}<|im_end|> <|im_start|>assistant """ system = "You are a helpful assistant." user = "How are you?" # Apply the ChatML format prompt = prompt.format(system=system, user=user) # Tokenize the prompt inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda") generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.7, use_cache=True, streamer=streamer) # <|im_start|>system # You are a chef who makes everything sound like a secret culinary masterpiece, even everyday meals.<|im_end|> # <|im_start|>user # How to cook an omelette?<|im_end|> # <|im_start|>assistant # Ah, the art of crafting the perfect omelette, a secret culinary masterpiece indeed. # Begin by gently whisking two to three eggs in a mixing bowl, and then pour the silky liquid into a non-stick pan. # Allow the eggs to dance and sizzle as you swiftly tilt the pan to spread the joy throughout the entire omelette universe. # As the edges begin to set, fold the omelette in half with a gentle flourish, and you'll witness a stunning display of culinary prowess. # Enjoy this enchanting creation, and you'll be transported to a world of secret culinary mastery.<|im_end|> ``` ## Bias, Risks, and Limitations Unlike ChatGPT, which incorporates in-the-loop filtering of responses and is aligned during the RLHF phase for safe completions, our model lacks these features. Consequently, it may generate problematic outputs, particularly when prompted in certain ways. 
Below is the score of the model on the Toxigen benchmark. The pretraining dataset comprises a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer, 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)), both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)).

| Metric | Value |
|-----------------------|---------------------------|
| Toxigen (0-shot) | 43.40 |

*The model name is inspired by the small but formidable character from 'Guardians of the Galaxy'. Similar to its namesake, this model, with its 3 billion parameters, showcases remarkable efficiency and effectiveness, challenging larger models despite its smaller size.*

*Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md) and [Tulu-2-7B](https://huggingface.co/allenai/tulu-2-7b/blob/main/README.md)*

<!-- original-model-card end -->
mradermacher/Llama-3-Instruct-Neurona-8b-GGUF
mradermacher
"2024-06-02T13:46:19Z"
2,741
0
transformers
[ "transformers", "gguf", "synthetic", "es", "en", "dataset:pinzhenchen/alpaca-cleaned-es", "dataset:Danielbrdz/Barcenas-Economia", "dataset:HiTZ/casimedicos-exp", "dataset:somosnlp/coser_resumenes", "dataset:csebuetnlp/CrossSum", "dataset:Iker/Document-Translation-en-es", "dataset:somosnlp/es-inclusive-language-it", "dataset:FreedomIntelligence/evol-instruct-spanish", "dataset:glaiveai/glaive-code-assistant-v3", "dataset:glaiveai/glaive-function-calling-v2", "dataset:Iker/InstructTranslation-EN-ES", "dataset:somosnlp/lenguaje-claro-dataset", "dataset:somosnlp/LingComp_QA", "dataset:bltlab/lr-sum", "dataset:Iker/NoticIA", "dataset:xaviviro/oasst2_es_gpt", "dataset:teknium/OpenHermes-2.5", "dataset:Iker/OpenHermes-2.5-Spanish", "dataset:Helsinki-NLP/opus-100", "dataset:projecte-aina/RAG_Multilingual", "dataset:sem_eval_2018_task_1", "dataset:davidstap/ted_talks", "dataset:HiTZ/This-is-not-a-dataset", "dataset:wikipedia", "base_model:Iker/Llama-3-Instruct-Neurona-8b", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-02T12:54:00Z"
--- base_model: Iker/Llama-3-Instruct-Neurona-8b datasets: - pinzhenchen/alpaca-cleaned-es - Danielbrdz/Barcenas-Economia - HiTZ/casimedicos-exp - somosnlp/coser_resumenes - csebuetnlp/CrossSum - Iker/Document-Translation-en-es - somosnlp/es-inclusive-language-it - FreedomIntelligence/evol-instruct-spanish - glaiveai/glaive-code-assistant-v3 - glaiveai/glaive-function-calling-v2 - Iker/InstructTranslation-EN-ES - somosnlp/lenguaje-claro-dataset - somosnlp/LingComp_QA - bltlab/lr-sum - Iker/NoticIA - xaviviro/oasst2_es_gpt - teknium/OpenHermes-2.5 - Iker/OpenHermes-2.5-Spanish - Helsinki-NLP/opus-100 - projecte-aina/RAG_Multilingual - sem_eval_2018_task_1 - davidstap/ted_talks - HiTZ/This-is-not-a-dataset - wikipedia language: - es - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - synthetic --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Iker/Llama-3-Instruct-Neurona-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | 
[GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Instruct-Neurona-8b-GGUF/resolve/main/Llama-3-Instruct-Neurona-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
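The Usage section above defers to external READMEs for running GGUF files. As a minimal sketch of one common option (not something mandated by this card), the snippet below downloads the Q4_K_M file listed in the table with `huggingface_hub` and runs it with `llama-cpp-python`; the context length, sampling settings, and the Spanish example prompt are illustrative assumptions.

```python
# Minimal sketch: fetch one of the GGUF files listed above and run it locally.
# The repo id and Q4_K_M filename come from the table in this card; everything
# else (context size, max_tokens, the prompt) is an illustrative assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

gguf_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Instruct-Neurona-8b-GGUF",
    filename="Llama-3-Instruct-Neurona-8b.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # assumed context length

# The base model is a Spanish/English instruct model, so a chat-style request is used here.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Resume en una frase qué es la fotosíntesis."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```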
facebook/mms-tts-quy
facebook
"2023-09-01T10:41:09Z"
2,740
0
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-09-01T10:39:38Z"
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---

# Massively Multilingual Speech (MMS): Quechua, Ayacucho Text-to-Speech

This repository contains the **Quechua, Ayacucho (quy)** language text-to-speech (TTS) model checkpoint.

This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.

For the MMS project, a separate VITS checkpoint is trained on each language.

## Usage

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:

```
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-quy")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-quy")

text = "some example text in the Quechua, Ayacucho language"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

The resulting waveform can be saved as a `.wav` file:

```python
import scipy

# `output` has shape (batch, samples); drop the batch dimension and convert to
# a NumPy array before writing, since scipy expects a 1-D (or samples x channels) ndarray.
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output, rate=model.config.sampling_rate)
```

## BibTex citation

This model was developed by Vineel Pratap et al. from Meta AI.
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
Yntec/BrainDance
Yntec
"2024-03-07T06:47:57Z"
2,740
3
diffusers
[ "diffusers", "safetensors", "Anime", "Style", "Illustration", "br_d", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-03-06T10:05:04Z"
--- language: - en license: creativeml-openrail-m tags: - Anime - Style - Illustration - br_d - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # BrainDance 1.8 version of this model with the MoistMixV2 VAE baked in to improve details and saturation. Original page: https://civitai.com/models/102753?modelVersionId=109969 Comparison: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/yyuOyKxPKixlLGXAbO27r.png) (Click for larger) Samples and prompts: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/HLpsCpgevdd3n15HnM7B_.png) (Click for larger) Top left: masterpiece, top quality, best quality, official art, beautiful and aesthetic, 1girl, extreme detailed,fractal art,colorful,highest detailed Top right: analog style 70s color photograph of young Harrison Ford as Han Solo, star wars behind the scenes Bottom left: 1990 movie screenshot. beautiful wife with daughter. festive scene at a copper brewery with a wooden keg of beer in the center. sitting cute little girl. Display mugs of dark beer. faces. accompanied Shirley by halloween ingredients Bottom right: Realistic girl standing. Very cute anime faces, chibi art, flawless, painting by gaston bussiere, charles sillem lidderdale. perfect face, full body, baby, masterpiece, highest quality, 1girl, blue eyes, sweater, Pretty CUTE GIRL, skirt, highly detailed
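The card above lists sample prompts but no loading code. As a minimal, hedged sketch (the dtype, step count, and guidance scale below are assumptions, not recommendations from the author), the model can be loaded with the standard `diffusers` StableDiffusionPipeline indicated by the repository tags and run on one of the sample prompts:

```python
# Minimal sketch for trying one of the sample prompts above with diffusers.
# The pipeline class comes from the repo tags; dtype, steps, and guidance are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/BrainDance", torch_dtype=torch.float16
).to("cuda")

prompt = ("masterpiece, top quality, best quality, official art, beautiful and aesthetic, "
          "1girl, extreme detailed, fractal art, colorful, highest detailed")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("braindance_sample.png")
```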
Yousefmd/arabert-emotions-classification
Yousefmd
"2023-10-10T08:13:38Z"
2,739
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-large-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2023-06-22T18:58:32Z"
--- base_model: aubmindlab/bert-large-arabertv02 tags: - generated_from_trainer metrics: - f1 - accuracy model-index: - name: arabert-emotions-classification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # arabert-emotions-classification This model is a fine-tuned version of [aubmindlab/bert-large-arabertv02](https://huggingface.co/aubmindlab/bert-large-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2817 - F1: 0.7006 - Roc Auc: 0.7931 - Accuracy: 0.2769 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 20 - eval_batch_size: 20 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:| | No log | 1.0 | 190 | 0.3665 | 0.5604 | 0.7049 | 0.1761 | | No log | 2.0 | 380 | 0.3086 | 0.6755 | 0.7775 | 0.2564 | | 0.3831 | 3.0 | 570 | 0.2953 | 0.6848 | 0.7812 | 0.2496 | | 0.3831 | 4.0 | 760 | 0.2849 | 0.6933 | 0.7866 | 0.2615 | | 0.3831 | 5.0 | 950 | 0.2817 | 0.7006 | 0.7931 | 0.2769 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.1
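The intended-uses section above is empty, so here is a rough sketch of how such a fine-tuned classifier is typically queried. The multi-label reading of the metrics (F1 plus ROC AUC and subset accuracy), the 0.5 threshold, and the example sentence are all assumptions on my part, not statements from the author.

```python
# Rough sketch: querying the fine-tuned classifier with transformers.
# The ROC-AUC / subset-accuracy metrics above suggest a multi-label setup,
# so a sigmoid + 0.5 threshold is assumed here; adjust if the head is single-label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Yousefmd/arabert-emotions-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "أنا سعيد جدا اليوم"  # illustrative Arabic input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```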
Yntec/ElldrethsRetroMix
Yntec
"2024-05-12T09:27:48Z"
2,737
2
diffusers
[ "diffusers", "safetensors", "Retro", "Vintage", "Illustrations", "Elldreth", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-14T16:18:11Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Retro - Vintage - Illustrations - Elldreth - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- UPDATE: This model is being relaunched with the 840KVAE baked in for better details. # Elldreths Retro Mix Original page: https://huggingface.co/LibreSD/Elldreth Comparison ![free ai text to image retro mix comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/G2ybTeYIpitA-Maj0HbDf.png) (click for larger) Samples and prompts: ![Free AI image generator Retro Mix](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/kECAkelllpXggEldPGVN_.png) Top left: Cute chibi toddler girl, 1940, iconic, highly detailed, digital painting, artstation, sharp focus, streamlined, by kyoani and makoto shinkai and akihiko yoshida and hidari and ROSSDRAWS Top right: Girls portrait. out worn Retro washed Stock colors Closeup detailed eyes faces movie TRAILER TV. Santa and daughters enjoying tacos with enchiladas. sitting with a pretty cute little girl, Art Christmas Theme by Gil_Elvgren and Haddon_Sundblom. Posing Bottom left: an adorable baby polar Bear playing cocacola bottle in a club, whimsical cartoon children book illustration. chibi eyes Bottom right: cinematic 60s movie still, pretty school woman with cleavage hugging handsome man, classroom, Uniforms, blackboard. Pinup. He wears a backpack, bokeh
nm-testing/zephyr-beta-7b-gptq-g128
nm-testing
"2024-02-13T00:24:13Z"
2,737
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2024-02-09T19:40:40Z"
--- license: apache-2.0 ---
MaziyarPanahi/mergekit-slerp-jeyctse-GGUF
MaziyarPanahi
"2024-06-18T18:34:25Z"
2,737
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:Equall/Saul-Base", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-jeyctse" ]
text-generation
"2024-06-18T18:10:05Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:HuggingFaceH4/zephyr-7b-beta - base_model:Equall/Saul-Base - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-jeyctse-GGUF base_model: mergekit-community/mergekit-slerp-jeyctse inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-jeyctse-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jeyctse-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-jeyctse](https://huggingface.co/mergekit-community/mergekit-slerp-jeyctse) ## Description [MaziyarPanahi/mergekit-slerp-jeyctse-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-jeyctse-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-jeyctse](https://huggingface.co/mergekit-community/mergekit-slerp-jeyctse). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
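This card does not spell out the individual quant filenames, so the hedged sketch below first lists the repository files to find one, then runs it with `llama-cpp-python`, one of the GGUF-capable clients named above. The preference for a Q4_K_M file and the sampling settings are assumptions, not recommendations from the quantizer.

```python
# Hedged sketch: pick a GGUF file from this repo without hard-coding its name,
# then run it with llama-cpp-python (listed above as a GGUF-capable client).
from huggingface_hub import hf_hub_download, list_repo_files
from llama_cpp import Llama

repo_id = "MaziyarPanahi/mergekit-slerp-jeyctse-GGUF"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
# Prefer a Q4_K_M quant if present, otherwise take the first file found (assumption).
filename = next((f for f in gguf_files if "Q4_K_M" in f), gguf_files[0])

model_path = hf_hub_download(repo_id=repo_id, filename=filename)
llm = Llama(model_path=model_path, n_ctx=2048)

out = llm("Explain what model merging with SLERP does, in two sentences.", max_tokens=96)
print(out["choices"][0]["text"])
```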
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf
RichardErkhov
"2024-06-29T15:15:27Z"
2,737
0
null
[ "gguf", "region:us" ]
null
"2024-06-29T14:01:36Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-Chat-v0.5 - GGUF - Model creator: https://huggingface.co/TinyLlama/ - Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.5/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-Chat-v0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-Chat-v0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-Chat-v0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-Chat-v0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-Chat-v0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-Chat-v0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-Chat-v0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-Chat-v0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama-1.1B-Chat-v0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-Chat-v0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-Chat-v0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-Chat-v0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-Chat-v0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q4_K.gguf) | Q4_K | 0.62GB | | [TinyLlama-1.1B-Chat-v0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q4_K_M.gguf) | Q4_K_M | 0.62GB | | [TinyLlama-1.1B-Chat-v0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q4_1.gguf) | Q4_1 | 0.65GB | | [TinyLlama-1.1B-Chat-v0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q5_0.gguf) | Q5_0 | 0.71GB | | [TinyLlama-1.1B-Chat-v0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q5_K_S.gguf) | Q5_K_S | 0.71GB 
| | [TinyLlama-1.1B-Chat-v0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q5_K.gguf) | Q5_K | 0.73GB | | [TinyLlama-1.1B-Chat-v0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q5_K_M.gguf) | Q5_K_M | 0.73GB | | [TinyLlama-1.1B-Chat-v0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q5_1.gguf) | Q5_1 | 0.77GB | | [TinyLlama-1.1B-Chat-v0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q6_K.gguf) | Q6_K | 0.84GB | | [TinyLlama-1.1B-Chat-v0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v0.5-gguf/blob/main/TinyLlama-1.1B-Chat-v0.5.Q8_0.gguf) | Q8_0 | 1.09GB | Original model description: --- license: apache-2.0 datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - OpenAssistant/oasst_top1_2023-08-25 language: - en --- <div align="center"> # TinyLlama-1.1B </div> https://github.com/jzhang38/TinyLlama The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. #### This Model This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T). The dataset used is [OpenAssistant/oasst_top1_2023-08-25](https://huggingface.co/datasets/OpenAssistant/oasst_top1_2023-08-25) following the [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) format. #### How to use You will need the transformers>=4.31 Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) github page for more information. ``` from transformers import AutoTokenizer import transformers import torch model = "PY007/TinyLlama-1.1B-Chat-v0.5" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) CHAT_EOS_TOKEN_ID = 32002 prompt = "How to get in a good university?" formatted_prompt = ( f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n" ) sequences = pipeline( formatted_prompt, do_sample=True, top_k=50, top_p = 0.9, num_return_sequences=1, repetition_penalty=1.1, max_new_tokens=1024, eos_token_id=CHAT_EOS_TOKEN_ID, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ```
mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF
mradermacher
"2024-06-02T13:17:26Z"
2,736
1
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:crestf411/L3-8B-sunfall-abliterated-v0.1", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-06-02T12:49:19Z"
--- base_model: crestf411/L3-8B-sunfall-abliterated-v0.1 language: - en library_name: transformers license: llama3 license_link: LICENSE license_name: llama3 quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/crestf411/L3-8B-sunfall-abliterated-v0.1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-8B-sunfall-abliterated-v0.1-GGUF/resolve/main/L3-8B-sunfall-abliterated-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Henk717/airochronos-33B
Henk717
"2023-07-28T22:11:05Z"
2,735
6
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-10T09:46:47Z"
---
license: other
---

After the initial experiment with chronoboros-33B it was evident that the merge was too unpredictable to be useful. Testing the individual models, it became clear that the bias should be weighted towards Chronos.

This is the new release of the merge, with 75% chronos 33B and 25% airoboros-1.4 33B.

The model has been tested with the Alpaca prompting format (a sketch of the template follows below) combined with KoboldAI Lite's instruct and chat modes, as well as regular story writing. It has also been tested on basic reasoning tasks, but has not seen much testing for factual information.
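A minimal sketch of the Alpaca prompting format referenced above, assuming the commonly used template wording (the card itself does not spell it out) and plain `transformers` instead of KoboldAI Lite. Note that a 33B model needs substantial GPU memory, offloading, or quantization to load this way.

```python
# Minimal sketch of the Alpaca prompt template the card refers to.
# The exact template wording is the commonly used Alpaca one (an assumption here),
# and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Henk717/airochronos-33B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

instruction = "Write a short scene in which two rivals are forced to cooperate."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```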
liddlefish/privacy_embedding_rag_10k_tmp
liddlefish
"2024-06-09T21:33:39Z"
2,735
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "mteb", "en", "arxiv:2401.03462", "arxiv:2312.15503", "arxiv:2311.13534", "arxiv:2310.07554", "arxiv:2309.07597", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2024-06-09T21:33:24Z"
--- tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb model-index: - name: bge-small-en-v1.5 results: - task: type: Classification dataset: type: mteb/amazon_counterfactual name: MTEB AmazonCounterfactualClassification (en) config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accuracy value: 73.79104477611939 - type: ap value: 37.21923821573361 - type: f1 value: 68.0914945617093 - task: type: Classification dataset: type: mteb/amazon_polarity name: MTEB AmazonPolarityClassification config: default split: test revision: e2d317d38cd51312af73b3d32a06d1a08b442046 metrics: - type: accuracy value: 92.75377499999999 - type: ap value: 89.46766124546022 - type: f1 value: 92.73884001331487 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (en) config: en split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 46.986 - type: f1 value: 46.55936786727896 - task: type: Retrieval dataset: type: arguana name: MTEB ArguAna config: default split: test revision: None metrics: - type: map_at_1 value: 35.846000000000004 - type: map_at_10 value: 51.388 - type: map_at_100 value: 52.132999999999996 - type: map_at_1000 value: 52.141000000000005 - type: map_at_3 value: 47.037 - type: map_at_5 value: 49.579 - type: mrr_at_1 value: 36.558 - type: mrr_at_10 value: 51.658 - type: mrr_at_100 value: 52.402 - type: mrr_at_1000 value: 52.410000000000004 - type: mrr_at_3 value: 47.345 - type: mrr_at_5 value: 49.797999999999995 - type: ndcg_at_1 value: 35.846000000000004 - type: ndcg_at_10 value: 59.550000000000004 - type: ndcg_at_100 value: 62.596 - type: ndcg_at_1000 value: 62.759 - type: ndcg_at_3 value: 50.666999999999994 - type: ndcg_at_5 value: 55.228 - type: precision_at_1 value: 35.846000000000004 - type: precision_at_10 value: 8.542 - type: precision_at_100 value: 0.984 - type: precision_at_1000 value: 0.1 - type: precision_at_3 value: 20.389 - type: precision_at_5 value: 14.438 - type: recall_at_1 value: 35.846000000000004 - type: recall_at_10 value: 85.42 - type: recall_at_100 value: 98.43499999999999 - type: recall_at_1000 value: 99.644 - type: recall_at_3 value: 61.166 - type: recall_at_5 value: 72.191 - task: type: Clustering dataset: type: mteb/arxiv-clustering-p2p name: MTEB ArxivClusteringP2P config: default split: test revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d metrics: - type: v_measure value: 47.402770198163594 - task: type: Clustering dataset: type: mteb/arxiv-clustering-s2s name: MTEB ArxivClusteringS2S config: default split: test revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53 metrics: - type: v_measure value: 40.01545436974177 - task: type: Reranking dataset: type: mteb/askubuntudupquestions-reranking name: MTEB AskUbuntuDupQuestions config: default split: test revision: 2000358ca161889fa9c082cb41daa8dcfb161a54 metrics: - type: map value: 62.586465273207196 - type: mrr value: 74.42169019038825 - task: type: STS dataset: type: mteb/biosses-sts name: MTEB BIOSSES config: default split: test revision: d3fb88f8f02e40887cd149695127462bbcf29b4a metrics: - type: cos_sim_pearson value: 85.1891186537969 - type: cos_sim_spearman value: 83.75492046087288 - type: euclidean_pearson value: 84.11766204805357 - type: euclidean_spearman value: 84.01456493126516 - type: manhattan_pearson value: 84.2132950502772 - type: manhattan_spearman value: 83.89227298813377 - task: type: Classification dataset: type: mteb/banking77 name: 
MTEB Banking77Classification config: default split: test revision: 0fd18e25b25c072e09e0d92ab615fda904d66300 metrics: - type: accuracy value: 85.74025974025975 - type: f1 value: 85.71493566466381 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-p2p name: MTEB BiorxivClusteringP2P config: default split: test revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40 metrics: - type: v_measure value: 38.467181385006434 - task: type: Clustering dataset: type: mteb/biorxiv-clustering-s2s name: MTEB BiorxivClusteringS2S config: default split: test revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908 metrics: - type: v_measure value: 34.719496037339056 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackAndroidRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 29.587000000000003 - type: map_at_10 value: 41.114 - type: map_at_100 value: 42.532 - type: map_at_1000 value: 42.661 - type: map_at_3 value: 37.483 - type: map_at_5 value: 39.652 - type: mrr_at_1 value: 36.338 - type: mrr_at_10 value: 46.763 - type: mrr_at_100 value: 47.393 - type: mrr_at_1000 value: 47.445 - type: mrr_at_3 value: 43.538 - type: mrr_at_5 value: 45.556000000000004 - type: ndcg_at_1 value: 36.338 - type: ndcg_at_10 value: 47.658 - type: ndcg_at_100 value: 52.824000000000005 - type: ndcg_at_1000 value: 54.913999999999994 - type: ndcg_at_3 value: 41.989 - type: ndcg_at_5 value: 44.944 - type: precision_at_1 value: 36.338 - type: precision_at_10 value: 9.156 - type: precision_at_100 value: 1.4789999999999999 - type: precision_at_1000 value: 0.196 - type: precision_at_3 value: 20.076 - type: precision_at_5 value: 14.85 - type: recall_at_1 value: 29.587000000000003 - type: recall_at_10 value: 60.746 - type: recall_at_100 value: 82.157 - type: recall_at_1000 value: 95.645 - type: recall_at_3 value: 44.821 - type: recall_at_5 value: 52.819 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackEnglishRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 30.239 - type: map_at_10 value: 39.989000000000004 - type: map_at_100 value: 41.196 - type: map_at_1000 value: 41.325 - type: map_at_3 value: 37.261 - type: map_at_5 value: 38.833 - type: mrr_at_1 value: 37.516 - type: mrr_at_10 value: 46.177 - type: mrr_at_100 value: 46.806 - type: mrr_at_1000 value: 46.849000000000004 - type: mrr_at_3 value: 44.002 - type: mrr_at_5 value: 45.34 - type: ndcg_at_1 value: 37.516 - type: ndcg_at_10 value: 45.586 - type: ndcg_at_100 value: 49.897000000000006 - type: ndcg_at_1000 value: 51.955 - type: ndcg_at_3 value: 41.684 - type: ndcg_at_5 value: 43.617 - type: precision_at_1 value: 37.516 - type: precision_at_10 value: 8.522 - type: precision_at_100 value: 1.374 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 20.105999999999998 - type: precision_at_5 value: 14.152999999999999 - type: recall_at_1 value: 30.239 - type: recall_at_10 value: 55.03 - type: recall_at_100 value: 73.375 - type: recall_at_1000 value: 86.29599999999999 - type: recall_at_3 value: 43.269000000000005 - type: recall_at_5 value: 48.878 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGamingRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 38.338 - type: map_at_10 value: 50.468999999999994 - type: map_at_100 value: 51.553000000000004 - type: map_at_1000 value: 51.608 - type: map_at_3 value: 47.107 - type: map_at_5 value: 49.101 - type: mrr_at_1 value: 44.201 - type: mrr_at_10 
value: 54.057 - type: mrr_at_100 value: 54.764 - type: mrr_at_1000 value: 54.791000000000004 - type: mrr_at_3 value: 51.56699999999999 - type: mrr_at_5 value: 53.05 - type: ndcg_at_1 value: 44.201 - type: ndcg_at_10 value: 56.379000000000005 - type: ndcg_at_100 value: 60.645 - type: ndcg_at_1000 value: 61.73499999999999 - type: ndcg_at_3 value: 50.726000000000006 - type: ndcg_at_5 value: 53.58500000000001 - type: precision_at_1 value: 44.201 - type: precision_at_10 value: 9.141 - type: precision_at_100 value: 1.216 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 22.654 - type: precision_at_5 value: 15.723999999999998 - type: recall_at_1 value: 38.338 - type: recall_at_10 value: 70.30499999999999 - type: recall_at_100 value: 88.77199999999999 - type: recall_at_1000 value: 96.49799999999999 - type: recall_at_3 value: 55.218 - type: recall_at_5 value: 62.104000000000006 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackGisRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.682 - type: map_at_10 value: 33.498 - type: map_at_100 value: 34.461000000000006 - type: map_at_1000 value: 34.544000000000004 - type: map_at_3 value: 30.503999999999998 - type: map_at_5 value: 32.216 - type: mrr_at_1 value: 27.683999999999997 - type: mrr_at_10 value: 35.467999999999996 - type: mrr_at_100 value: 36.32 - type: mrr_at_1000 value: 36.386 - type: mrr_at_3 value: 32.618 - type: mrr_at_5 value: 34.262 - type: ndcg_at_1 value: 27.683999999999997 - type: ndcg_at_10 value: 38.378 - type: ndcg_at_100 value: 43.288 - type: ndcg_at_1000 value: 45.413 - type: ndcg_at_3 value: 32.586 - type: ndcg_at_5 value: 35.499 - type: precision_at_1 value: 27.683999999999997 - type: precision_at_10 value: 5.864 - type: precision_at_100 value: 0.882 - type: precision_at_1000 value: 0.11 - type: precision_at_3 value: 13.446 - type: precision_at_5 value: 9.718 - type: recall_at_1 value: 25.682 - type: recall_at_10 value: 51.712 - type: recall_at_100 value: 74.446 - type: recall_at_1000 value: 90.472 - type: recall_at_3 value: 36.236000000000004 - type: recall_at_5 value: 43.234 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackMathematicaRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.073999999999998 - type: map_at_10 value: 24.352999999999998 - type: map_at_100 value: 25.438 - type: map_at_1000 value: 25.545 - type: map_at_3 value: 21.614 - type: map_at_5 value: 23.104 - type: mrr_at_1 value: 19.776 - type: mrr_at_10 value: 28.837000000000003 - type: mrr_at_100 value: 29.755 - type: mrr_at_1000 value: 29.817 - type: mrr_at_3 value: 26.201999999999998 - type: mrr_at_5 value: 27.714 - type: ndcg_at_1 value: 19.776 - type: ndcg_at_10 value: 29.701 - type: ndcg_at_100 value: 35.307 - type: ndcg_at_1000 value: 37.942 - type: ndcg_at_3 value: 24.764 - type: ndcg_at_5 value: 27.025 - type: precision_at_1 value: 19.776 - type: precision_at_10 value: 5.659 - type: precision_at_100 value: 0.971 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 12.065 - type: precision_at_5 value: 8.905000000000001 - type: recall_at_1 value: 16.073999999999998 - type: recall_at_10 value: 41.647 - type: recall_at_100 value: 66.884 - type: recall_at_1000 value: 85.91499999999999 - type: recall_at_3 value: 27.916 - type: recall_at_5 value: 33.729 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackPhysicsRetrieval config: default split: test revision: None metrics: - 
type: map_at_1 value: 28.444999999999997 - type: map_at_10 value: 38.218999999999994 - type: map_at_100 value: 39.595 - type: map_at_1000 value: 39.709 - type: map_at_3 value: 35.586 - type: map_at_5 value: 36.895 - type: mrr_at_1 value: 34.841 - type: mrr_at_10 value: 44.106 - type: mrr_at_100 value: 44.98 - type: mrr_at_1000 value: 45.03 - type: mrr_at_3 value: 41.979 - type: mrr_at_5 value: 43.047999999999995 - type: ndcg_at_1 value: 34.841 - type: ndcg_at_10 value: 43.922 - type: ndcg_at_100 value: 49.504999999999995 - type: ndcg_at_1000 value: 51.675000000000004 - type: ndcg_at_3 value: 39.858 - type: ndcg_at_5 value: 41.408 - type: precision_at_1 value: 34.841 - type: precision_at_10 value: 7.872999999999999 - type: precision_at_100 value: 1.2449999999999999 - type: precision_at_1000 value: 0.161 - type: precision_at_3 value: 18.993 - type: precision_at_5 value: 13.032 - type: recall_at_1 value: 28.444999999999997 - type: recall_at_10 value: 54.984 - type: recall_at_100 value: 78.342 - type: recall_at_1000 value: 92.77 - type: recall_at_3 value: 42.842999999999996 - type: recall_at_5 value: 47.247 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackProgrammersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.072 - type: map_at_10 value: 32.354 - type: map_at_100 value: 33.800000000000004 - type: map_at_1000 value: 33.908 - type: map_at_3 value: 29.232000000000003 - type: map_at_5 value: 31.049 - type: mrr_at_1 value: 29.110000000000003 - type: mrr_at_10 value: 38.03 - type: mrr_at_100 value: 39.032 - type: mrr_at_1000 value: 39.086999999999996 - type: mrr_at_3 value: 35.407 - type: mrr_at_5 value: 36.76 - type: ndcg_at_1 value: 29.110000000000003 - type: ndcg_at_10 value: 38.231 - type: ndcg_at_100 value: 44.425 - type: ndcg_at_1000 value: 46.771 - type: ndcg_at_3 value: 33.095 - type: ndcg_at_5 value: 35.459 - type: precision_at_1 value: 29.110000000000003 - type: precision_at_10 value: 7.215000000000001 - type: precision_at_100 value: 1.2109999999999999 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 16.058 - type: precision_at_5 value: 11.644 - type: recall_at_1 value: 23.072 - type: recall_at_10 value: 50.285999999999994 - type: recall_at_100 value: 76.596 - type: recall_at_1000 value: 92.861 - type: recall_at_3 value: 35.702 - type: recall_at_5 value: 42.152 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 24.937916666666666 - type: map_at_10 value: 33.755250000000004 - type: map_at_100 value: 34.955999999999996 - type: map_at_1000 value: 35.070499999999996 - type: map_at_3 value: 30.98708333333333 - type: map_at_5 value: 32.51491666666666 - type: mrr_at_1 value: 29.48708333333333 - type: mrr_at_10 value: 37.92183333333334 - type: mrr_at_100 value: 38.76583333333333 - type: mrr_at_1000 value: 38.82466666666667 - type: mrr_at_3 value: 35.45125 - type: mrr_at_5 value: 36.827000000000005 - type: ndcg_at_1 value: 29.48708333333333 - type: ndcg_at_10 value: 39.05225 - type: ndcg_at_100 value: 44.25983333333334 - type: ndcg_at_1000 value: 46.568333333333335 - type: ndcg_at_3 value: 34.271583333333325 - type: ndcg_at_5 value: 36.483916666666666 - type: precision_at_1 value: 29.48708333333333 - type: precision_at_10 value: 6.865749999999999 - type: precision_at_100 value: 1.1195833333333332 - type: precision_at_1000 value: 0.15058333333333335 - type: precision_at_3 value: 
15.742083333333333 - type: precision_at_5 value: 11.221916666666667 - type: recall_at_1 value: 24.937916666666666 - type: recall_at_10 value: 50.650416666666665 - type: recall_at_100 value: 73.55383333333334 - type: recall_at_1000 value: 89.61691666666667 - type: recall_at_3 value: 37.27808333333334 - type: recall_at_5 value: 42.99475 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackStatsRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.947 - type: map_at_10 value: 30.575000000000003 - type: map_at_100 value: 31.465 - type: map_at_1000 value: 31.558000000000003 - type: map_at_3 value: 28.814 - type: map_at_5 value: 29.738999999999997 - type: mrr_at_1 value: 26.994 - type: mrr_at_10 value: 33.415 - type: mrr_at_100 value: 34.18 - type: mrr_at_1000 value: 34.245 - type: mrr_at_3 value: 31.621 - type: mrr_at_5 value: 32.549 - type: ndcg_at_1 value: 26.994 - type: ndcg_at_10 value: 34.482 - type: ndcg_at_100 value: 38.915 - type: ndcg_at_1000 value: 41.355 - type: ndcg_at_3 value: 31.139 - type: ndcg_at_5 value: 32.589 - type: precision_at_1 value: 26.994 - type: precision_at_10 value: 5.322 - type: precision_at_100 value: 0.8160000000000001 - type: precision_at_1000 value: 0.11100000000000002 - type: precision_at_3 value: 13.344000000000001 - type: precision_at_5 value: 8.988 - type: recall_at_1 value: 23.947 - type: recall_at_10 value: 43.647999999999996 - type: recall_at_100 value: 63.851 - type: recall_at_1000 value: 82.0 - type: recall_at_3 value: 34.288000000000004 - type: recall_at_5 value: 38.117000000000004 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackTexRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 16.197 - type: map_at_10 value: 22.968 - type: map_at_100 value: 24.095 - type: map_at_1000 value: 24.217 - type: map_at_3 value: 20.771 - type: map_at_5 value: 21.995 - type: mrr_at_1 value: 19.511 - type: mrr_at_10 value: 26.55 - type: mrr_at_100 value: 27.500999999999998 - type: mrr_at_1000 value: 27.578999999999997 - type: mrr_at_3 value: 24.421 - type: mrr_at_5 value: 25.604 - type: ndcg_at_1 value: 19.511 - type: ndcg_at_10 value: 27.386 - type: ndcg_at_100 value: 32.828 - type: ndcg_at_1000 value: 35.739 - type: ndcg_at_3 value: 23.405 - type: ndcg_at_5 value: 25.255 - type: precision_at_1 value: 19.511 - type: precision_at_10 value: 5.017 - type: precision_at_100 value: 0.91 - type: precision_at_1000 value: 0.133 - type: precision_at_3 value: 11.023 - type: precision_at_5 value: 8.025 - type: recall_at_1 value: 16.197 - type: recall_at_10 value: 37.09 - type: recall_at_100 value: 61.778 - type: recall_at_1000 value: 82.56599999999999 - type: recall_at_3 value: 26.034000000000002 - type: recall_at_5 value: 30.762 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackUnixRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 25.41 - type: map_at_10 value: 33.655 - type: map_at_100 value: 34.892 - type: map_at_1000 value: 34.995 - type: map_at_3 value: 30.94 - type: map_at_5 value: 32.303 - type: mrr_at_1 value: 29.477999999999998 - type: mrr_at_10 value: 37.443 - type: mrr_at_100 value: 38.383 - type: mrr_at_1000 value: 38.440000000000005 - type: mrr_at_3 value: 34.949999999999996 - type: mrr_at_5 value: 36.228 - type: ndcg_at_1 value: 29.477999999999998 - type: ndcg_at_10 value: 38.769 - type: ndcg_at_100 value: 44.245000000000005 - type: ndcg_at_1000 value: 46.593 - type: ndcg_at_3 value: 
33.623 - type: ndcg_at_5 value: 35.766 - type: precision_at_1 value: 29.477999999999998 - type: precision_at_10 value: 6.455 - type: precision_at_100 value: 1.032 - type: precision_at_1000 value: 0.135 - type: precision_at_3 value: 14.893999999999998 - type: precision_at_5 value: 10.485 - type: recall_at_1 value: 25.41 - type: recall_at_10 value: 50.669 - type: recall_at_100 value: 74.084 - type: recall_at_1000 value: 90.435 - type: recall_at_3 value: 36.679 - type: recall_at_5 value: 41.94 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWebmastersRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 23.339 - type: map_at_10 value: 31.852000000000004 - type: map_at_100 value: 33.411 - type: map_at_1000 value: 33.62 - type: map_at_3 value: 28.929 - type: map_at_5 value: 30.542 - type: mrr_at_1 value: 28.063 - type: mrr_at_10 value: 36.301 - type: mrr_at_100 value: 37.288 - type: mrr_at_1000 value: 37.349 - type: mrr_at_3 value: 33.663 - type: mrr_at_5 value: 35.165 - type: ndcg_at_1 value: 28.063 - type: ndcg_at_10 value: 37.462 - type: ndcg_at_100 value: 43.620999999999995 - type: ndcg_at_1000 value: 46.211 - type: ndcg_at_3 value: 32.68 - type: ndcg_at_5 value: 34.981 - type: precision_at_1 value: 28.063 - type: precision_at_10 value: 7.1739999999999995 - type: precision_at_100 value: 1.486 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 15.217 - type: precision_at_5 value: 11.265 - type: recall_at_1 value: 23.339 - type: recall_at_10 value: 48.376999999999995 - type: recall_at_100 value: 76.053 - type: recall_at_1000 value: 92.455 - type: recall_at_3 value: 34.735 - type: recall_at_5 value: 40.71 - task: type: Retrieval dataset: type: BeIR/cqadupstack name: MTEB CQADupstackWordpressRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 18.925 - type: map_at_10 value: 26.017000000000003 - type: map_at_100 value: 27.034000000000002 - type: map_at_1000 value: 27.156000000000002 - type: map_at_3 value: 23.604 - type: map_at_5 value: 24.75 - type: mrr_at_1 value: 20.333000000000002 - type: mrr_at_10 value: 27.915 - type: mrr_at_100 value: 28.788000000000004 - type: mrr_at_1000 value: 28.877999999999997 - type: mrr_at_3 value: 25.446999999999996 - type: mrr_at_5 value: 26.648 - type: ndcg_at_1 value: 20.333000000000002 - type: ndcg_at_10 value: 30.673000000000002 - type: ndcg_at_100 value: 35.618 - type: ndcg_at_1000 value: 38.517 - type: ndcg_at_3 value: 25.71 - type: ndcg_at_5 value: 27.679 - type: precision_at_1 value: 20.333000000000002 - type: precision_at_10 value: 4.9910000000000005 - type: precision_at_100 value: 0.8130000000000001 - type: precision_at_1000 value: 0.117 - type: precision_at_3 value: 11.029 - type: precision_at_5 value: 7.8740000000000006 - type: recall_at_1 value: 18.925 - type: recall_at_10 value: 43.311 - type: recall_at_100 value: 66.308 - type: recall_at_1000 value: 87.49 - type: recall_at_3 value: 29.596 - type: recall_at_5 value: 34.245 - task: type: Retrieval dataset: type: climate-fever name: MTEB ClimateFEVER config: default split: test revision: None metrics: - type: map_at_1 value: 13.714 - type: map_at_10 value: 23.194 - type: map_at_100 value: 24.976000000000003 - type: map_at_1000 value: 25.166 - type: map_at_3 value: 19.709 - type: map_at_5 value: 21.523999999999997 - type: mrr_at_1 value: 30.619000000000003 - type: mrr_at_10 value: 42.563 - type: mrr_at_100 value: 43.386 - type: mrr_at_1000 value: 43.423 - type: mrr_at_3 value: 
39.555 - type: mrr_at_5 value: 41.268 - type: ndcg_at_1 value: 30.619000000000003 - type: ndcg_at_10 value: 31.836 - type: ndcg_at_100 value: 38.652 - type: ndcg_at_1000 value: 42.088 - type: ndcg_at_3 value: 26.733 - type: ndcg_at_5 value: 28.435 - type: precision_at_1 value: 30.619000000000003 - type: precision_at_10 value: 9.751999999999999 - type: precision_at_100 value: 1.71 - type: precision_at_1000 value: 0.23500000000000001 - type: precision_at_3 value: 19.935 - type: precision_at_5 value: 14.984 - type: recall_at_1 value: 13.714 - type: recall_at_10 value: 37.26 - type: recall_at_100 value: 60.546 - type: recall_at_1000 value: 79.899 - type: recall_at_3 value: 24.325 - type: recall_at_5 value: 29.725 - task: type: Retrieval dataset: type: dbpedia-entity name: MTEB DBPedia config: default split: test revision: None metrics: - type: map_at_1 value: 8.462 - type: map_at_10 value: 18.637 - type: map_at_100 value: 26.131999999999998 - type: map_at_1000 value: 27.607 - type: map_at_3 value: 13.333 - type: map_at_5 value: 15.654000000000002 - type: mrr_at_1 value: 66.25 - type: mrr_at_10 value: 74.32600000000001 - type: mrr_at_100 value: 74.60900000000001 - type: mrr_at_1000 value: 74.62 - type: mrr_at_3 value: 72.667 - type: mrr_at_5 value: 73.817 - type: ndcg_at_1 value: 53.87499999999999 - type: ndcg_at_10 value: 40.028999999999996 - type: ndcg_at_100 value: 44.199 - type: ndcg_at_1000 value: 51.629999999999995 - type: ndcg_at_3 value: 44.113 - type: ndcg_at_5 value: 41.731 - type: precision_at_1 value: 66.25 - type: precision_at_10 value: 31.900000000000002 - type: precision_at_100 value: 10.043000000000001 - type: precision_at_1000 value: 1.926 - type: precision_at_3 value: 47.417 - type: precision_at_5 value: 40.65 - type: recall_at_1 value: 8.462 - type: recall_at_10 value: 24.293 - type: recall_at_100 value: 50.146 - type: recall_at_1000 value: 74.034 - type: recall_at_3 value: 14.967 - type: recall_at_5 value: 18.682000000000002 - task: type: Classification dataset: type: mteb/emotion name: MTEB EmotionClassification config: default split: test revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37 metrics: - type: accuracy value: 47.84499999999999 - type: f1 value: 42.48106691979349 - task: type: Retrieval dataset: type: fever name: MTEB FEVER config: default split: test revision: None metrics: - type: map_at_1 value: 74.034 - type: map_at_10 value: 82.76 - type: map_at_100 value: 82.968 - type: map_at_1000 value: 82.98299999999999 - type: map_at_3 value: 81.768 - type: map_at_5 value: 82.418 - type: mrr_at_1 value: 80.048 - type: mrr_at_10 value: 87.64999999999999 - type: mrr_at_100 value: 87.712 - type: mrr_at_1000 value: 87.713 - type: mrr_at_3 value: 87.01100000000001 - type: mrr_at_5 value: 87.466 - type: ndcg_at_1 value: 80.048 - type: ndcg_at_10 value: 86.643 - type: ndcg_at_100 value: 87.361 - type: ndcg_at_1000 value: 87.606 - type: ndcg_at_3 value: 85.137 - type: ndcg_at_5 value: 86.016 - type: precision_at_1 value: 80.048 - type: precision_at_10 value: 10.372 - type: precision_at_100 value: 1.093 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 32.638 - type: precision_at_5 value: 20.177 - type: recall_at_1 value: 74.034 - type: recall_at_10 value: 93.769 - type: recall_at_100 value: 96.569 - type: recall_at_1000 value: 98.039 - type: recall_at_3 value: 89.581 - type: recall_at_5 value: 91.906 - task: type: Retrieval dataset: type: fiqa name: MTEB FiQA2018 config: default split: test revision: None metrics: - type: map_at_1 value: 20.5 - 
type: map_at_10 value: 32.857 - type: map_at_100 value: 34.589 - type: map_at_1000 value: 34.778 - type: map_at_3 value: 29.160999999999998 - type: map_at_5 value: 31.033 - type: mrr_at_1 value: 40.123 - type: mrr_at_10 value: 48.776 - type: mrr_at_100 value: 49.495 - type: mrr_at_1000 value: 49.539 - type: mrr_at_3 value: 46.605000000000004 - type: mrr_at_5 value: 47.654 - type: ndcg_at_1 value: 40.123 - type: ndcg_at_10 value: 40.343 - type: ndcg_at_100 value: 46.56 - type: ndcg_at_1000 value: 49.777 - type: ndcg_at_3 value: 37.322 - type: ndcg_at_5 value: 37.791000000000004 - type: precision_at_1 value: 40.123 - type: precision_at_10 value: 11.08 - type: precision_at_100 value: 1.752 - type: precision_at_1000 value: 0.232 - type: precision_at_3 value: 24.897 - type: precision_at_5 value: 17.809 - type: recall_at_1 value: 20.5 - type: recall_at_10 value: 46.388 - type: recall_at_100 value: 69.552 - type: recall_at_1000 value: 89.011 - type: recall_at_3 value: 33.617999999999995 - type: recall_at_5 value: 38.211 - task: type: Retrieval dataset: type: hotpotqa name: MTEB HotpotQA config: default split: test revision: None metrics: - type: map_at_1 value: 39.135999999999996 - type: map_at_10 value: 61.673 - type: map_at_100 value: 62.562 - type: map_at_1000 value: 62.62 - type: map_at_3 value: 58.467999999999996 - type: map_at_5 value: 60.463 - type: mrr_at_1 value: 78.271 - type: mrr_at_10 value: 84.119 - type: mrr_at_100 value: 84.29299999999999 - type: mrr_at_1000 value: 84.299 - type: mrr_at_3 value: 83.18900000000001 - type: mrr_at_5 value: 83.786 - type: ndcg_at_1 value: 78.271 - type: ndcg_at_10 value: 69.935 - type: ndcg_at_100 value: 73.01299999999999 - type: ndcg_at_1000 value: 74.126 - type: ndcg_at_3 value: 65.388 - type: ndcg_at_5 value: 67.906 - type: precision_at_1 value: 78.271 - type: precision_at_10 value: 14.562 - type: precision_at_100 value: 1.6969999999999998 - type: precision_at_1000 value: 0.184 - type: precision_at_3 value: 41.841 - type: precision_at_5 value: 27.087 - type: recall_at_1 value: 39.135999999999996 - type: recall_at_10 value: 72.809 - type: recall_at_100 value: 84.86200000000001 - type: recall_at_1000 value: 92.208 - type: recall_at_3 value: 62.76199999999999 - type: recall_at_5 value: 67.718 - task: type: Classification dataset: type: mteb/imdb name: MTEB ImdbClassification config: default split: test revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7 metrics: - type: accuracy value: 90.60600000000001 - type: ap value: 86.6579587804335 - type: f1 value: 90.5938853929307 - task: type: Retrieval dataset: type: msmarco name: MTEB MSMARCO config: default split: dev revision: None metrics: - type: map_at_1 value: 21.852 - type: map_at_10 value: 33.982 - type: map_at_100 value: 35.116 - type: map_at_1000 value: 35.167 - type: map_at_3 value: 30.134 - type: map_at_5 value: 32.340999999999994 - type: mrr_at_1 value: 22.479 - type: mrr_at_10 value: 34.594 - type: mrr_at_100 value: 35.672 - type: mrr_at_1000 value: 35.716 - type: mrr_at_3 value: 30.84 - type: mrr_at_5 value: 32.998 - type: ndcg_at_1 value: 22.493 - type: ndcg_at_10 value: 40.833000000000006 - type: ndcg_at_100 value: 46.357 - type: ndcg_at_1000 value: 47.637 - type: ndcg_at_3 value: 32.995999999999995 - type: ndcg_at_5 value: 36.919000000000004 - type: precision_at_1 value: 22.493 - type: precision_at_10 value: 6.465999999999999 - type: precision_at_100 value: 0.9249999999999999 - type: precision_at_1000 value: 0.104 - type: precision_at_3 value: 14.030999999999999 - type: precision_at_5 value: 
10.413 - type: recall_at_1 value: 21.852 - type: recall_at_10 value: 61.934999999999995 - type: recall_at_100 value: 87.611 - type: recall_at_1000 value: 97.441 - type: recall_at_3 value: 40.583999999999996 - type: recall_at_5 value: 49.992999999999995 - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (en) config: en split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 93.36069311445507 - type: f1 value: 93.16456330371453 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (en) config: en split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 74.74692202462381 - type: f1 value: 58.17903579421599 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (en) config: en split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 74.80833893745796 - type: f1 value: 72.70786592684664 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (en) config: en split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 78.69872225958305 - type: f1 value: 78.61626934504731 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-p2p name: MTEB MedrxivClusteringP2P config: default split: test revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73 metrics: - type: v_measure value: 33.058658628717694 - task: type: Clustering dataset: type: mteb/medrxiv-clustering-s2s name: MTEB MedrxivClusteringS2S config: default split: test revision: 35191c8c0dca72d8ff3efcd72aa802307d469663 metrics: - type: v_measure value: 30.85561739360599 - task: type: Reranking dataset: type: mteb/mind_small name: MTEB MindSmallReranking config: default split: test revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69 metrics: - type: map value: 31.290259910144385 - type: mrr value: 32.44223046102856 - task: type: Retrieval dataset: type: nfcorpus name: MTEB NFCorpus config: default split: test revision: None metrics: - type: map_at_1 value: 5.288 - type: map_at_10 value: 12.267999999999999 - type: map_at_100 value: 15.557000000000002 - type: map_at_1000 value: 16.98 - type: map_at_3 value: 8.866 - type: map_at_5 value: 10.418 - type: mrr_at_1 value: 43.653 - type: mrr_at_10 value: 52.681 - type: mrr_at_100 value: 53.315999999999995 - type: mrr_at_1000 value: 53.357 - type: mrr_at_3 value: 51.393 - type: mrr_at_5 value: 51.903999999999996 - type: ndcg_at_1 value: 42.415000000000006 - type: ndcg_at_10 value: 34.305 - type: ndcg_at_100 value: 30.825999999999997 - type: ndcg_at_1000 value: 39.393 - type: ndcg_at_3 value: 39.931 - type: ndcg_at_5 value: 37.519999999999996 - type: precision_at_1 value: 43.653 - type: precision_at_10 value: 25.728 - type: precision_at_100 value: 7.932 - type: precision_at_1000 value: 2.07 - type: precision_at_3 value: 38.184000000000005 - type: precision_at_5 value: 32.879000000000005 - type: recall_at_1 value: 5.288 - type: recall_at_10 value: 16.195 - type: recall_at_100 value: 31.135 - type: recall_at_1000 value: 61.531000000000006 - type: recall_at_3 value: 10.313 - type: recall_at_5 value: 12.754999999999999 - task: type: Retrieval dataset: type: nq name: MTEB NQ config: default split: test revision: None metrics: - type: map_at_1 value: 28.216 - type: map_at_10 value: 42.588 - type: map_at_100 value: 43.702999999999996 - type: map_at_1000 value: 
43.739 - type: map_at_3 value: 38.177 - type: map_at_5 value: 40.754000000000005 - type: mrr_at_1 value: 31.866 - type: mrr_at_10 value: 45.189 - type: mrr_at_100 value: 46.056000000000004 - type: mrr_at_1000 value: 46.081 - type: mrr_at_3 value: 41.526999999999994 - type: mrr_at_5 value: 43.704 - type: ndcg_at_1 value: 31.837 - type: ndcg_at_10 value: 50.178 - type: ndcg_at_100 value: 54.98800000000001 - type: ndcg_at_1000 value: 55.812 - type: ndcg_at_3 value: 41.853 - type: ndcg_at_5 value: 46.153 - type: precision_at_1 value: 31.837 - type: precision_at_10 value: 8.43 - type: precision_at_100 value: 1.1119999999999999 - type: precision_at_1000 value: 0.11900000000000001 - type: precision_at_3 value: 19.023 - type: precision_at_5 value: 13.911000000000001 - type: recall_at_1 value: 28.216 - type: recall_at_10 value: 70.8 - type: recall_at_100 value: 91.857 - type: recall_at_1000 value: 97.941 - type: recall_at_3 value: 49.196 - type: recall_at_5 value: 59.072 - task: type: Retrieval dataset: type: quora name: MTEB QuoraRetrieval config: default split: test revision: None metrics: - type: map_at_1 value: 71.22800000000001 - type: map_at_10 value: 85.115 - type: map_at_100 value: 85.72 - type: map_at_1000 value: 85.737 - type: map_at_3 value: 82.149 - type: map_at_5 value: 84.029 - type: mrr_at_1 value: 81.96 - type: mrr_at_10 value: 88.00200000000001 - type: mrr_at_100 value: 88.088 - type: mrr_at_1000 value: 88.089 - type: mrr_at_3 value: 87.055 - type: mrr_at_5 value: 87.715 - type: ndcg_at_1 value: 82.01 - type: ndcg_at_10 value: 88.78 - type: ndcg_at_100 value: 89.91 - type: ndcg_at_1000 value: 90.013 - type: ndcg_at_3 value: 85.957 - type: ndcg_at_5 value: 87.56 - type: precision_at_1 value: 82.01 - type: precision_at_10 value: 13.462 - type: precision_at_100 value: 1.528 - type: precision_at_1000 value: 0.157 - type: precision_at_3 value: 37.553 - type: precision_at_5 value: 24.732000000000003 - type: recall_at_1 value: 71.22800000000001 - type: recall_at_10 value: 95.69 - type: recall_at_100 value: 99.531 - type: recall_at_1000 value: 99.98 - type: recall_at_3 value: 87.632 - type: recall_at_5 value: 92.117 - task: type: Clustering dataset: type: mteb/reddit-clustering name: MTEB RedditClustering config: default split: test revision: 24640382cdbf8abc73003fb0fa6d111a705499eb metrics: - type: v_measure value: 52.31768034366916 - task: type: Clustering dataset: type: mteb/reddit-clustering-p2p name: MTEB RedditClusteringP2P config: default split: test revision: 282350215ef01743dc01b456c7f5241fa8937f16 metrics: - type: v_measure value: 60.640266772723606 - task: type: Retrieval dataset: type: scidocs name: MTEB SCIDOCS config: default split: test revision: None metrics: - type: map_at_1 value: 4.7780000000000005 - type: map_at_10 value: 12.299 - type: map_at_100 value: 14.363000000000001 - type: map_at_1000 value: 14.71 - type: map_at_3 value: 8.738999999999999 - type: map_at_5 value: 10.397 - type: mrr_at_1 value: 23.599999999999998 - type: mrr_at_10 value: 34.845 - type: mrr_at_100 value: 35.916 - type: mrr_at_1000 value: 35.973 - type: mrr_at_3 value: 31.7 - type: mrr_at_5 value: 33.535 - type: ndcg_at_1 value: 23.599999999999998 - type: ndcg_at_10 value: 20.522000000000002 - type: ndcg_at_100 value: 28.737000000000002 - type: ndcg_at_1000 value: 34.596 - type: ndcg_at_3 value: 19.542 - type: ndcg_at_5 value: 16.958000000000002 - type: precision_at_1 value: 23.599999999999998 - type: precision_at_10 value: 10.67 - type: precision_at_100 value: 2.259 - type: precision_at_1000 value: 
0.367 - type: precision_at_3 value: 18.333 - type: precision_at_5 value: 14.879999999999999 - type: recall_at_1 value: 4.7780000000000005 - type: recall_at_10 value: 21.617 - type: recall_at_100 value: 45.905 - type: recall_at_1000 value: 74.42 - type: recall_at_3 value: 11.148 - type: recall_at_5 value: 15.082999999999998 - task: type: STS dataset: type: mteb/sickr-sts name: MTEB SICK-R config: default split: test revision: a6ea5a8cab320b040a23452cc28066d9beae2cee metrics: - type: cos_sim_pearson value: 83.22372750297885 - type: cos_sim_spearman value: 79.40972617119405 - type: euclidean_pearson value: 80.6101072020434 - type: euclidean_spearman value: 79.53844217225202 - type: manhattan_pearson value: 80.57265975286111 - type: manhattan_spearman value: 79.46335611792958 - task: type: STS dataset: type: mteb/sts12-sts name: MTEB STS12 config: default split: test revision: a0d554a64d88156834ff5ae9920b964011b16384 metrics: - type: cos_sim_pearson value: 85.43713315520749 - type: cos_sim_spearman value: 77.44128693329532 - type: euclidean_pearson value: 81.63869928101123 - type: euclidean_spearman value: 77.29512977961515 - type: manhattan_pearson value: 81.63704185566183 - type: manhattan_spearman value: 77.29909412738657 - task: type: STS dataset: type: mteb/sts13-sts name: MTEB STS13 config: default split: test revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca metrics: - type: cos_sim_pearson value: 81.59451537860527 - type: cos_sim_spearman value: 82.97994638856723 - type: euclidean_pearson value: 82.89478688288412 - type: euclidean_spearman value: 83.58740751053104 - type: manhattan_pearson value: 82.69140840941608 - type: manhattan_spearman value: 83.33665956040555 - task: type: STS dataset: type: mteb/sts14-sts name: MTEB STS14 config: default split: test revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375 metrics: - type: cos_sim_pearson value: 82.00756527711764 - type: cos_sim_spearman value: 81.83560996841379 - type: euclidean_pearson value: 82.07684151976518 - type: euclidean_spearman value: 82.00913052060511 - type: manhattan_pearson value: 82.05690778488794 - type: manhattan_spearman value: 82.02260252019525 - task: type: STS dataset: type: mteb/sts15-sts name: MTEB STS15 config: default split: test revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3 metrics: - type: cos_sim_pearson value: 86.13710262895447 - type: cos_sim_spearman value: 87.26412811156248 - type: euclidean_pearson value: 86.94151453230228 - type: euclidean_spearman value: 87.5363796699571 - type: manhattan_pearson value: 86.86989424083748 - type: manhattan_spearman value: 87.47315940781353 - task: type: STS dataset: type: mteb/sts16-sts name: MTEB STS16 config: default split: test revision: 4d8694f8f0e0100860b497b999b3dbed754a0513 metrics: - type: cos_sim_pearson value: 83.0230597603627 - type: cos_sim_spearman value: 84.93344499318864 - type: euclidean_pearson value: 84.23754743431141 - type: euclidean_spearman value: 85.09707376597099 - type: manhattan_pearson value: 84.04325160987763 - type: manhattan_spearman value: 84.89353071339909 - task: type: STS dataset: type: mteb/sts17-crosslingual-sts name: MTEB STS17 (en-en) config: en-en split: test revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d metrics: - type: cos_sim_pearson value: 86.75620824563921 - type: cos_sim_spearman value: 87.15065513706398 - type: euclidean_pearson value: 88.26281533633521 - type: euclidean_spearman value: 87.51963738643983 - type: manhattan_pearson value: 88.25599267618065 - type: manhattan_spearman value: 87.58048736047483 - 
task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (en) config: en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 64.74645319195137 - type: cos_sim_spearman value: 65.29996325037214 - type: euclidean_pearson value: 67.04297794086443 - type: euclidean_spearman value: 65.43841726694343 - type: manhattan_pearson value: 67.39459955690904 - type: manhattan_spearman value: 65.92864704413651 - task: type: STS dataset: type: mteb/stsbenchmark-sts name: MTEB STSBenchmark config: default split: test revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 metrics: - type: cos_sim_pearson value: 84.31291020270801 - type: cos_sim_spearman value: 85.86473738688068 - type: euclidean_pearson value: 85.65537275064152 - type: euclidean_spearman value: 86.13087454209642 - type: manhattan_pearson value: 85.43946955047609 - type: manhattan_spearman value: 85.91568175344916 - task: type: Reranking dataset: type: mteb/scidocs-reranking name: MTEB SciDocsRR config: default split: test revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab metrics: - type: map value: 85.93798118350695 - type: mrr value: 95.93536274908824 - task: type: Retrieval dataset: type: scifact name: MTEB SciFact config: default split: test revision: None metrics: - type: map_at_1 value: 57.594 - type: map_at_10 value: 66.81899999999999 - type: map_at_100 value: 67.368 - type: map_at_1000 value: 67.4 - type: map_at_3 value: 64.061 - type: map_at_5 value: 65.47 - type: mrr_at_1 value: 60.667 - type: mrr_at_10 value: 68.219 - type: mrr_at_100 value: 68.655 - type: mrr_at_1000 value: 68.684 - type: mrr_at_3 value: 66.22200000000001 - type: mrr_at_5 value: 67.289 - type: ndcg_at_1 value: 60.667 - type: ndcg_at_10 value: 71.275 - type: ndcg_at_100 value: 73.642 - type: ndcg_at_1000 value: 74.373 - type: ndcg_at_3 value: 66.521 - type: ndcg_at_5 value: 68.581 - type: precision_at_1 value: 60.667 - type: precision_at_10 value: 9.433 - type: precision_at_100 value: 1.0699999999999998 - type: precision_at_1000 value: 0.11299999999999999 - type: precision_at_3 value: 25.556 - type: precision_at_5 value: 16.8 - type: recall_at_1 value: 57.594 - type: recall_at_10 value: 83.622 - type: recall_at_100 value: 94.167 - type: recall_at_1000 value: 99.667 - type: recall_at_3 value: 70.64399999999999 - type: recall_at_5 value: 75.983 - task: type: PairClassification dataset: type: mteb/sprintduplicatequestions-pairclassification name: MTEB SprintDuplicateQuestions config: default split: test revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46 metrics: - type: cos_sim_accuracy value: 99.85841584158416 - type: cos_sim_ap value: 96.66996142314342 - type: cos_sim_f1 value: 92.83208020050125 - type: cos_sim_precision value: 93.06532663316584 - type: cos_sim_recall value: 92.60000000000001 - type: dot_accuracy value: 99.85841584158416 - type: dot_ap value: 96.6775307676576 - type: dot_f1 value: 92.69289729177312 - type: dot_precision value: 94.77533960292581 - type: dot_recall value: 90.7 - type: euclidean_accuracy value: 99.86138613861387 - type: euclidean_ap value: 96.6338454403108 - type: euclidean_f1 value: 92.92214357937311 - type: euclidean_precision value: 93.96728016359918 - type: euclidean_recall value: 91.9 - type: manhattan_accuracy value: 99.86237623762376 - type: manhattan_ap value: 96.60370449645053 - type: manhattan_f1 value: 92.91177970423253 - type: manhattan_precision value: 94.7970863683663 - type: manhattan_recall value: 91.10000000000001 - type: max_accuracy value: 
99.86237623762376 - type: max_ap value: 96.6775307676576 - type: max_f1 value: 92.92214357937311 - task: type: Clustering dataset: type: mteb/stackexchange-clustering name: MTEB StackExchangeClustering config: default split: test revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259 metrics: - type: v_measure value: 60.77977058695198 - task: type: Clustering dataset: type: mteb/stackexchange-clustering-p2p name: MTEB StackExchangeClusteringP2P config: default split: test revision: 815ca46b2622cec33ccafc3735d572c266efdb44 metrics: - type: v_measure value: 35.2725272535638 - task: type: Reranking dataset: type: mteb/stackoverflowdupquestions-reranking name: MTEB StackOverflowDupQuestions config: default split: test revision: e185fbe320c72810689fc5848eb6114e1ef5ec69 metrics: - type: map value: 53.64052466362125 - type: mrr value: 54.533067014684654 - task: type: Summarization dataset: type: mteb/summeval name: MTEB SummEval config: default split: test revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c metrics: - type: cos_sim_pearson value: 30.677624219206578 - type: cos_sim_spearman value: 30.121368518123447 - type: dot_pearson value: 30.69870088041608 - type: dot_spearman value: 29.61284927093751 - task: type: Retrieval dataset: type: trec-covid name: MTEB TRECCOVID config: default split: test revision: None metrics: - type: map_at_1 value: 0.22 - type: map_at_10 value: 1.855 - type: map_at_100 value: 9.885 - type: map_at_1000 value: 23.416999999999998 - type: map_at_3 value: 0.637 - type: map_at_5 value: 1.024 - type: mrr_at_1 value: 88.0 - type: mrr_at_10 value: 93.067 - type: mrr_at_100 value: 93.067 - type: mrr_at_1000 value: 93.067 - type: mrr_at_3 value: 92.667 - type: mrr_at_5 value: 93.067 - type: ndcg_at_1 value: 82.0 - type: ndcg_at_10 value: 75.899 - type: ndcg_at_100 value: 55.115 - type: ndcg_at_1000 value: 48.368 - type: ndcg_at_3 value: 79.704 - type: ndcg_at_5 value: 78.39699999999999 - type: precision_at_1 value: 88.0 - type: precision_at_10 value: 79.60000000000001 - type: precision_at_100 value: 56.06 - type: precision_at_1000 value: 21.206 - type: precision_at_3 value: 84.667 - type: precision_at_5 value: 83.2 - type: recall_at_1 value: 0.22 - type: recall_at_10 value: 2.078 - type: recall_at_100 value: 13.297 - type: recall_at_1000 value: 44.979 - type: recall_at_3 value: 0.6689999999999999 - type: recall_at_5 value: 1.106 - task: type: Retrieval dataset: type: webis-touche2020 name: MTEB Touche2020 config: default split: test revision: None metrics: - type: map_at_1 value: 2.258 - type: map_at_10 value: 10.439 - type: map_at_100 value: 16.89 - type: map_at_1000 value: 18.407999999999998 - type: map_at_3 value: 5.668 - type: map_at_5 value: 7.718 - type: mrr_at_1 value: 32.653 - type: mrr_at_10 value: 51.159 - type: mrr_at_100 value: 51.714000000000006 - type: mrr_at_1000 value: 51.714000000000006 - type: mrr_at_3 value: 47.959 - type: mrr_at_5 value: 50.407999999999994 - type: ndcg_at_1 value: 29.592000000000002 - type: ndcg_at_10 value: 26.037 - type: ndcg_at_100 value: 37.924 - type: ndcg_at_1000 value: 49.126999999999995 - type: ndcg_at_3 value: 30.631999999999998 - type: ndcg_at_5 value: 28.571 - type: precision_at_1 value: 32.653 - type: precision_at_10 value: 22.857 - type: precision_at_100 value: 7.754999999999999 - type: precision_at_1000 value: 1.529 - type: precision_at_3 value: 34.014 - type: precision_at_5 value: 29.796 - type: recall_at_1 value: 2.258 - type: recall_at_10 value: 16.554 - type: recall_at_100 value: 48.439 - type: recall_at_1000 value: 
82.80499999999999 - type: recall_at_3 value: 7.283 - type: recall_at_5 value: 10.732 - task: type: Classification dataset: type: mteb/toxic_conversations_50k name: MTEB ToxicConversationsClassification config: default split: test revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c metrics: - type: accuracy value: 69.8858 - type: ap value: 13.835684144362109 - type: f1 value: 53.803351693244586 - task: type: Classification dataset: type: mteb/tweet_sentiment_extraction name: MTEB TweetSentimentExtractionClassification config: default split: test revision: d604517c81ca91fe16a244d1248fc021f9ecee7a metrics: - type: accuracy value: 60.50650820599886 - type: f1 value: 60.84357825979259 - task: type: Clustering dataset: type: mteb/twentynewsgroups-clustering name: MTEB TwentyNewsgroupsClustering config: default split: test revision: 6125ec4e24fa026cec8a478383ee943acfbd5449 metrics: - type: v_measure value: 48.52131044852134 - task: type: PairClassification dataset: type: mteb/twittersemeval2015-pairclassification name: MTEB TwitterSemEval2015 config: default split: test revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1 metrics: - type: cos_sim_accuracy value: 85.59337187816654 - type: cos_sim_ap value: 73.23925826533437 - type: cos_sim_f1 value: 67.34693877551021 - type: cos_sim_precision value: 62.40432237730752 - type: cos_sim_recall value: 73.13984168865434 - type: dot_accuracy value: 85.31322644096085 - type: dot_ap value: 72.30723963807422 - type: dot_f1 value: 66.47051612112296 - type: dot_precision value: 62.0792305930845 - type: dot_recall value: 71.53034300791556 - type: euclidean_accuracy value: 85.61125350181797 - type: euclidean_ap value: 73.32843720487845 - type: euclidean_f1 value: 67.36549633745895 - type: euclidean_precision value: 64.60755813953489 - type: euclidean_recall value: 70.36939313984169 - type: manhattan_accuracy value: 85.63509566668654 - type: manhattan_ap value: 73.16658488311325 - type: manhattan_f1 value: 67.20597386434349 - type: manhattan_precision value: 63.60424028268551 - type: manhattan_recall value: 71.2401055408971 - type: max_accuracy value: 85.63509566668654 - type: max_ap value: 73.32843720487845 - type: max_f1 value: 67.36549633745895 - task: type: PairClassification dataset: type: mteb/twitterurlcorpus-pairclassification name: MTEB TwitterURLCorpus config: default split: test revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf metrics: - type: cos_sim_accuracy value: 88.33779640625606 - type: cos_sim_ap value: 84.83868375898157 - type: cos_sim_f1 value: 77.16506154017773 - type: cos_sim_precision value: 74.62064005753327 - type: cos_sim_recall value: 79.88912842623961 - type: dot_accuracy value: 88.02732176815307 - type: dot_ap value: 83.95089283763002 - type: dot_f1 value: 76.29635101196631 - type: dot_precision value: 73.31771720613288 - type: dot_recall value: 79.52725592854944 - type: euclidean_accuracy value: 88.44452206310397 - type: euclidean_ap value: 84.98384576824827 - type: euclidean_f1 value: 77.29311047696697 - type: euclidean_precision value: 74.51232583065381 - type: euclidean_recall value: 80.28949799815214 - type: manhattan_accuracy value: 88.47362906042613 - type: manhattan_ap value: 84.91421462218432 - type: manhattan_f1 value: 77.05107637204792 - type: manhattan_precision value: 74.74484256243214 - type: manhattan_recall value: 79.50415768401602 - type: max_accuracy value: 88.47362906042613 - type: max_ap value: 84.98384576824827 - type: max_f1 value: 77.29311047696697 license: mit language: - en --- <h1 
align="center">FlagEmbedding</h1> <h4 align="center"> <p> <a href=#model-list>Model List</a> | <a href=#frequently-asked-questions>FAQ</a> | <a href=#usage>Usage</a> | <a href="#evaluation">Evaluation</a> | <a href="#train">Train</a> | <a href="#contact">Contact</a> | <a href="#citation">Citation</a> | <a href="#license">License</a> <p> </h4> More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding). If you are looking for a model that supports more languages, longer texts, and other retrieval methods, you can try using [bge-m3](https://huggingface.co/BAAI/bge-m3). [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md) FlagEmbedding focuses on retrieval-augmented LLMs, consisting of the following projects currently: - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon) - **Fine-tuning of LM** : [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) - **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding) - **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) ## News - 1/30/2024: Release **BGE-M3**, a new member to BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularities (input length up to 8192), **M**ulti-Functionality (unification of dense, lexical, multi-vec/colbert retrieval). It is the first embedding model which supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks. [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire: - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLM. [Technical Report](https://arxiv.org/abs/2401.03462) :fire: - 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire: - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire: - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval augmentation needs for LLMs. 
[Technical Report](https://arxiv.org/pdf/2310.07554.pdf) - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released - 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released - 09/12/2023: New models: - **New reranker model**: release cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than embedding model. We recommend to use/fine-tune them to re-rank top-k documents returned by embedding models. - **update embedding model**: release `bge-*-v1.5` embedding model to alleviate the issue of the similarity distribution, and enhance its retrieval ability without instruction. <details> <summary>More</summary> <!-- ### More --> - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning. - 08/09/2023: BGE Models are integrated into **Langchain**, you can use it like [this](#using-langchain); C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard). - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗** - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada: - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset. </details> ## Model List `bge` is short for `BAAI general embedding`. | Model | Language | | Description | query instruction for retrieval [1] | |:-------------------------------|:--------:| :--------:| :--------:|:--------:| | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | | | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | | | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | 
[Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` | [1\]: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original 
query directly. In all cases, **no instruction** needs to be added to passages. [2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models. For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results. All models have been uploaded to the Huggingface Hub, and you can see them at https://huggingface.co/BAAI. If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models . ## Frequently asked questions <details> <summary>1. How to fine-tune bge embedding model?</summary> <!-- ### How to fine-tune bge embedding model? --> Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model. Some suggestions: - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance. - If you pre-train bge on your own data, the pre-trained model cannot be used to calculate similarity directly; it must be fine-tuned with contrastive learning before computing similarity. - If the accuracy of the fine-tuned model is still not high enough, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results. Hard negatives are also needed to fine-tune the reranker. </details> <details> <summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary> <!-- ### The similarity score between two dissimilar sentences is higher than 0.5 --> **We suggest using bge v1.5, which alleviates the issue of the similarity distribution.** Since we fine-tune the models with contrastive learning at a temperature of 0.01, the similarity distribution of the current BGE models lies roughly in the interval \[0.6, 1\]. So a similarity score greater than 0.5 does not indicate that the two sentences are similar. For downstream tasks, such as passage retrieval or semantic similarity, **what matters is the relative order of the scores, not the absolute value.** If you need to filter similar sentences by a similarity threshold, please choose an appropriate threshold based on the similarity distribution of your own data (such as 0.8, 0.85, or even 0.9). </details> <details> <summary>3. When does the query instruction need to be used</summary> <!-- ### When does the query instruction need to be used --> For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used; omitting the instruction causes only a slight degradation in retrieval performance compared with using it. So, for convenience, you can generate embeddings without an instruction in all cases. For a retrieval task that uses short queries to find long related documents, it is recommended to add the instruction to these short queries. **The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.** In all cases, no instruction needs to be added to the documents/passages. 
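For instance, a quick way to make this decision on your own data is to score a few queries both ways and keep whichever setting retrieves better. A minimal sketch (the model name, instruction string, and toy texts are only illustrative; the `FlagModel` API is the one shown in the Usage section below):

```python
from FlagEmbedding import FlagModel

model = FlagModel("BAAI/bge-large-en-v1.5", use_fp16=True)
instruction = "Represent this sentence for searching relevant passages: "

passages = ["The giant panda is a bear species endemic to China.",
            "Paris is the capital of France."]
p_emb = model.encode(passages)

query = "what is a panda?"
plain_scores = model.encode([query]) @ p_emb.T                      # without instruction
instructed_scores = model.encode([instruction + query]) @ p_emb.T   # with instruction
print(plain_scores, instructed_scores)
```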
</details> ## Usage ### Usage for Embedding Model Here are some examples for using `bge` models with [FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers). #### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for more methods to install FlagEmbedding. ```python from FlagEmbedding import FlagModel sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = FlagModel('BAAI/bge-large-zh-v1.5', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:", use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation embeddings_1 = model.encode(sentences_1) embeddings_2 = model.encode(sentences_2) similarity = embeddings_1 @ embeddings_2.T print(similarity) # for s2p(short query to long passage) retrieval task, suggest to use encode_queries() which will automatically add the instruction to each query # corpus in retrieval task can still use encode() or encode_corpus(), since they don't need instruction queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] q_embeddings = model.encode_queries(queries) p_embeddings = model.encode(passages) scores = q_embeddings @ p_embeddings.T ``` For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list). By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs. You also can set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable. #### Using Sentence-Transformers You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net): ``` pip install -U sentence-transformers ``` ```python from sentence_transformers import SentenceTransformer sentences_1 = ["样例数据-1", "样例数据-2"] sentences_2 = ["样例数据-3", "样例数据-4"] model = SentenceTransformer('BAAI/bge-large-zh-v1.5') embeddings_1 = model.encode(sentences_1, normalize_embeddings=True) embeddings_2 = model.encode(sentences_2, normalize_embeddings=True) similarity = embeddings_1 @ embeddings_2.T print(similarity) ``` For s2p(short query to long passage) retrieval task, each short query should start with an instruction (instructions see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list)). But the instruction is not needed for passages. 
```python from sentence_transformers import SentenceTransformer queries = ['query_1', 'query_2'] passages = ["样例文档-1", "样例文档-2"] instruction = "为这个句子生成表示以用于检索相关文章:" model = SentenceTransformer('BAAI/bge-large-zh-v1.5') q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True) p_embeddings = model.encode(passages, normalize_embeddings=True) scores = q_embeddings @ p_embeddings.T ``` #### Using Langchain You can use `bge` in langchain like this: ```python from langchain.embeddings import HuggingFaceBgeEmbeddings model_name = "BAAI/bge-large-en-v1.5" model_kwargs = {'device': 'cuda'} encode_kwargs = {'normalize_embeddings': True} # set True to compute cosine similarity model = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs, query_instruction="为这个句子生成表示以用于检索相关文章:" ) model.query_instruction = "为这个句子生成表示以用于检索相关文章:" ``` #### Using HuggingFace Transformers With the transformers package, you can use the model like this: First, you pass your input through the transformer model, then you select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding. ```python from transformers import AutoTokenizer, AutoModel import torch # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5') model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5') model.eval() # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = model_output[0][:, 0] # normalize embeddings sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:", sentence_embeddings) ``` ### Usage for Reranker Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. You can get a relevance score by inputting query and passage to the reranker. The reranker is optimized based cross-entropy loss, so the relevance score is not bounded to a specific range. 
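Because the raw score is an unbounded logit, it is often convenient to squash it into (0, 1) before applying a fixed cutoff. This is a common post-processing choice rather than something the reranker does for you — a minimal sketch, assuming you already have a raw score from one of the snippets below:

```python
import math

def to_probability(raw_score: float) -> float:
    """Map an unbounded reranker logit to (0, 1) with a sigmoid."""
    return 1.0 / (1.0 + math.exp(-raw_score))

print(to_probability(2.3))   # ~0.909
print(to_probability(-1.0))  # ~0.269
```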
#### Using FlagEmbedding ``` pip install -U FlagEmbedding ``` Get relevance scores (higher scores indicate more relevance): ```python from FlagEmbedding import FlagReranker reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True) # Setting use_fp16 to True speeds up computation with a slight performance degradation score = reranker.compute_score(['query', 'passage']) print(score) scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]) print(scores) ``` #### Using Huggingface transformers ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large') model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large') model.eval() pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']] with torch.no_grad(): inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512) scores = model(**inputs, return_dict=True).logits.view(-1, ).float() print(scores) ``` #### Usage of the ONNX files ```python from optimum.onnxruntime import ORTModelForFeatureExtraction # type: ignore import torch from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5') model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5') model_ort = ORTModelForFeatureExtraction.from_pretrained('BAAI/bge-small-en-v1.5', file_name="onnx/model.onnx") # Sentences we want sentence embeddings for sentences = ["样例数据-1", "样例数据-2"] # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # for s2p(short query to long passage) retrieval task, add an instruction to query (not add instruction for passages) # encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt') model_output_ort = model_ort(**encoded_input) # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # model_output and model_output_ort are identical ``` #### Usage via infinity Its also possible to deploy the onnx files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package. Recommended is `device="cuda", engine="torch"` with flash attention on gpu, and `device="cpu", engine="optimum"` for onnx inference. ```python import asyncio from infinity_emb import AsyncEmbeddingEngine, EngineArgs sentences = ["Embed this is sentence via Infinity.", "Paris is in France."] engine = AsyncEmbeddingEngine.from_args( EngineArgs(model_name_or_path = "BAAI/bge-small-en-v1.5", device="cpu", engine="optimum" # or engine="torch" )) async def main(): async with engine: embeddings, usage = await engine.embed(sentences=sentences) asyncio.run(main()) ``` ## Evaluation `baai-general-embedding` models achieve **state-of-the-art performance on both MTEB and C-MTEB leaderboard!** For more details and evaluation tools see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md). 
- **MTEB**: | Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) | |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 | | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 | | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 | | [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 | | [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 | | [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 | | [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 | | [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 | | [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 | | [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 | | [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 | | [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 | | [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 | | [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 | | [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 | | [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 | | [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 | - **C-MTEB**: We create the benchmark C-MTEB for Chinese text embedding which consists of 31 datasets from 6 tasks. Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction. 
| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 | | [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 | | [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 | | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 | | [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 | | [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 | | [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 | | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 | | [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 | | [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 | | [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 | | [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 | | [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 | | [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 | | [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 | | [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 | - **Reranking**: See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for evaluation script. 
| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 | | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 | | multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 | | multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 | | m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 | | m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 | | bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 | | bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 | | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 | | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 | \* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks ## Train ### BAAI Embedding We pre-train the models using [retromae](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning. **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).** We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain). Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly; it needs to be fine-tuned first. For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md). ### BGE Reranker The cross-encoder performs full attention over the input pair, which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming. Therefore, it can be used to re-rank the top-k documents returned by the embedding model. We train the cross-encoder on multilingual pair data. The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker). For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker) ## Contact If you have any questions or suggestions related to this project, feel free to open an issue or pull request. You can also email Shitao Xiao([email protected]) and Zheng Liu([email protected]). ## Citation If you find this repository useful, please consider giving it a star :star: and a citation ``` @misc{bge_embedding, title={C-Pack: Packaged Resources To Advance General Chinese Embedding}, author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff}, year={2023}, eprint={2309.07597}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## License FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
TheBloke/claude2-alpaca-13B-GGUF
TheBloke
"2023-11-10T21:11:49Z"
2,734
28
transformers
[ "transformers", "gguf", "llama", "en", "dataset:umd-zhou-lab/claude2_alpaca", "base_model:umd-zhou-lab/claude2-alpaca-13B", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-11-10T09:49:09Z"
--- base_model: umd-zhou-lab/claude2-alpaca-13B datasets: - umd-zhou-lab/claude2_alpaca inference: false language: - en license: llama2 model_creator: Tianyi Lab @ UMD model_name: Claude2 Alpaca 13B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Claude2 Alpaca 13B - GGUF - Model creator: [Tianyi Lab @ UMD](https://huggingface.co/umd-zhou-lab) - Original model: [Claude2 Alpaca 13B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [Tianyi Lab @ UMD's Claude2 Alpaca 13B](https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->

<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/claude2-alpaca-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/claude2-alpaca-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF)
* [Tianyi Lab @ UMD's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/umd-zhou-lab/claude2-alpaca-13B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

Refer to the Provided Files table below to see what files use which methods, and how.
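As a rough sanity check, the following sketch recomputes two of the bpw figures above from the super-block parameters given in the list. The per-super-block fp16 scale factors are an assumption based on the llama.cpp k-quant layout and are not stated in the bullets themselves.

```python
# Back-of-the-envelope check of the effective bits-per-weight (bpw) figures above.
# Per-weight bits and per-block scale/min bits come from the bullet list; the extra
# fp16 super-block scale factors (16 bits each) are an assumption, not stated above.

def effective_bpw(blocks, weights_per_block, bits_per_weight,
                  scale_bits_per_block, fp16_super_scales):
    weights = blocks * weights_per_block                   # weights per super-block
    total_bits = (weights * bits_per_weight                # quantized weights
                  + blocks * scale_bits_per_block          # per-block scales (and mins)
                  + fp16_super_scales * 16)                # per-super-block fp16 factors
    return total_bits / weights

# Q4_K: 8 blocks x 32 weights, 4-bit weights, 6-bit scale + 6-bit min per block,
# plus (assumed) two fp16 super-block factors -> 4.5 bpw
print(effective_bpw(8, 32, 4, 6 + 6, 2))   # 4.5

# Q6_K: 16 blocks x 16 weights, 6-bit weights, 8-bit scale per block,
# plus (assumed) one fp16 super-block factor -> 6.5625 bpw
print(effective_bpw(16, 16, 6, 8, 1))      # 6.5625
```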
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [claude2-alpaca-13b.Q2_K.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [claude2-alpaca-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [claude2-alpaca-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [claude2-alpaca-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [claude2-alpaca-13b.Q4_0.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [claude2-alpaca-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [claude2-alpaca-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [claude2-alpaca-13b.Q5_0.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [claude2-alpaca-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [claude2-alpaca-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [claude2-alpaca-13b.Q6_K.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [claude2-alpaca-13b.Q8_0.gguf](https://huggingface.co/TheBloke/claude2-alpaca-13B-GGUF/blob/main/claude2-alpaca-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/claude2-alpaca-13B-GGUF and below it, a specific filename to download, such as: claude2-alpaca-13b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/claude2-alpaca-13B-GGUF claude2-alpaca-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/claude2-alpaca-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/claude2-alpaca-13B-GGUF claude2-alpaca-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m claude2-alpaca-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
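As a minimal llama-cpp-python sketch (assuming `llama-cpp-python` is installed, e.g. `pip install llama-cpp-python`, and that a GGUF file has already been downloaded as described above); the ctransformers route is covered in the next section:

```python
from llama_cpp import Llama

# Load the locally downloaded GGUF file; set n_gpu_layers=0 if you have no GPU acceleration.
llm = Llama(
    model_path="./claude2-alpaca-13b.Q4_K_M.gguf",
    n_ctx=4096,        # context length
    n_gpu_layers=32,   # number of layers to offload to the GPU
)

# Build the Alpaca-style prompt used by this model (see "Prompt template" above).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI\n\n### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["</s>"], echo=False)
print(output["choices"][0]["text"])
```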
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/claude2-alpaca-13B-GGUF", model_file="claude2-alpaca-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Tianyi Lab @ UMD's Claude2 Alpaca 13B # Model Card for umd-zhou-lab/claude2-alpaca-13B <!-- Provide a quick summary of what the model is/does. --> This model is trained by fine-tuning llama-2 with claude2 alpaca data. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** UMD Tianyi Zhou Lab - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from model:** [meta-llama/Llama-2-13b](https://huggingface.co/meta-llama/Llama-2-13b) ### Model Sources <!-- Provide the basic links for the model. --> - **GitHub:** [Claude2-Alpaca](https://github.com/Lichang-Chen/claude2-alpaca) - **Data:** [claude2_alpaca](https://huggingface.co/datasets/umd-zhou-lab/claude2_alpaca) ## Uses The primary use of this model is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## Training We use the prompt from [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) | Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay | | --- | ---: | ---: | ---: | ---: | ---: | | Model (13B) | 128 | 1e-5 | 5 | 2048 | 0 | ## Performance Compared to the llama2-chat, our models can have better average performance.<br> | | Average | ARC | HellaSwag | MMLU | TruthfulQA | Alpaca_Eval | Avg Length | |---|---|---|---|---|---|---|---| | Llama-2-7b-chat | 56.335 | 52.9 | 78.55 | 48.32 | 45.57 | 71.37 | 1479 | | Llama-2-13b-chat | 59.935 | 59.04| 81.94 | 54.64 | 44.12 | 81.09 | 1513 | ||||||||| | claude_alpaca-7b | 57.78 | 56.66 | 81.17 | 46.58 | 46.71 | 71.23 | 1066 | | claude_alpaca-13b | 61.29 | 61.18 | 84.08 | 55.74 | 44.18 | 78.93 | 1127 | ## Citation Please consider citing our paper if you think our codes, data, or models are useful. Thank you! ``` @misc{claude2-alpaca, author = {Lichang Chen and Khalid Saifullah and Ming Li and Tianyi Zhou and Heng Huang}, title = {Claude2-Alpaca: Instruction tuning datasets distilled from claude}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/Lichang-Chen/claude2-alpaca}}, } ``` <!-- original-model-card end -->
ICBU-NPU/FashionGPT-70B-V1.2
ICBU-NPU
"2023-10-10T06:01:48Z"
2,733
12
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-10T03:09:30Z"
--- license: llama2 ---
RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf
RichardErkhov
"2024-06-22T23:35:36Z"
2,732
0
null
[ "gguf", "region:us" ]
null
"2024-06-22T23:27:26Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TinyLlama-1.1B-intermediate-step-240k-503b - GGUF - Model creator: https://huggingface.co/TinyLlama/ - Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b/ | Name | Quant method | Size | | ---- | ---- | ---- | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q2_K.gguf) | Q2_K | 0.4GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.IQ3_XS.gguf) | IQ3_XS | 0.44GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.IQ3_S.gguf) | IQ3_S | 0.47GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K_S.gguf) | Q3_K_S | 0.47GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.IQ3_M.gguf) | IQ3_M | 0.48GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K.gguf) | Q3_K | 0.51GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K_M.gguf) | Q3_K_M | 0.51GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q3_K_L.gguf) | Q3_K_L | 0.55GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.IQ4_XS.gguf) | IQ4_XS | 0.57GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q4_0.gguf) | Q4_0 | 0.59GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.IQ4_NL.gguf) | IQ4_NL | 0.6GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K_S.gguf) | Q4_K_S | 0.6GB | | [TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K.gguf) | Q4_K | 0.62GB | 
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-intermediate-step-240k-503b.Q8_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf/blob/main/TinyLlama-1.1B-intermediate-step-240k-503b.Q8_0.gguf) | Q8_0 | 1.09GB |

Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
language:
- en
---
<div align="center">

# TinyLlama-1.1B
</div>

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.

<div align="center">
  <img src="./TinyLlama_logo.png" width="300"/>
</div>

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.

#### This Model
This is an intermediate checkpoint with 240K steps and 503B tokens. **We suggest you not use this directly for inference.** The [chat model](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.1) is always preferred.

#### How to use
You will need transformers>=4.31. Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "PY007/TinyLlama-1.1B-intermediate-step-240k-503b"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline that loads the checkpoint in fp16 across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a continuation of the prompt; this is a base model, so it does plain completion.
sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
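Since this repository distributes GGUF quantisations rather than the fp16 checkpoint used above, here is a minimal sketch of fetching one of the files listed in the table and running it with llama-cpp-python (assuming `huggingface_hub` and `llama-cpp-python` are installed; the repo and file names are taken from the table above):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantised files listed in the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/TinyLlama_-_TinyLlama-1.1B-intermediate-step-240k-503b-gguf",
    filename="TinyLlama-1.1B-intermediate-step-240k-503b.Q4_K_M.gguf",
)

# Load and run; this base checkpoint is a raw language model, so plain completion is used.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("The TinyLlama project aims to", max_tokens=64)
print(out["choices"][0]["text"])
```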