modelId
stringlengths
5
122
author
stringlengths
2
42
last_modified
unknown
downloads
int64
0
738M
likes
int64
0
11k
library_name
stringclasses
245 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
48 values
createdAt
unknown
card
stringlengths
1
901k
microsoft/BiomedVLP-BioViL-T
microsoft
"2023-03-20T17:04:32Z"
2,141
17
transformers
[ "transformers", "pytorch", "bert", "feature-extraction", "exbert", "custom_code", "en", "arxiv:2301.04558", "arxiv:2204.09817", "license:mit", "region:us" ]
feature-extraction
"2023-02-17T10:19:58Z"
--- language: en tags: - exbert license: mit widget: - text: "Left pleural effusion with adjacent [MASK]." example_title: "Radiology 1" - text: "Heart size normal and lungs are [MASK]." example_title: "Radiology 2" - text: "[MASK] is a tumor suppressor gene." example_title: "Biomedical" - text: "The patient was on [MASK] for chronic atrial fibrillation" example_title: "Medication" --- # BioViL-T [BioViL-T](https://arxiv.org/abs/2301.04558) is a domain-specific vision-language model designed to analyze chest X-rays (CXRs) and radiology reports. It was trained using a temporal multi-modal pre-training procedure, which distinguishes it from its predecessor model ([BioViL](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136960001.pdf)). In detail, BioViL-T takes advantage of the temporal structure between data points, resulting in improved downstream performance on multiple benchmarks, while using the same training dataset as its predecessor. In particular, the resultant model displays significant improvement in embedding temporal information present in the image and text modalities (see [results](#performance)), as well as in the joint space. The canonical model can be adapted to both single- and multi-image downstream applications including: natural language inference, phrase-grounding, image/text classification, and language decoding. The corresponding BERT language model is trained in two stages: First, we pretrain [CXR-BERT-general](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) from a randomly initialized BERT model via Masked Language Modeling (MLM) on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts and clinical notes from the publicly-available [MIMIC-III](https://physionet.org/content/mimiciii/1.4/) and [MIMIC-CXR](https://physionet.org/content/mimic-cxr/). The general model can be fine-tuned for research in other clinical domains by adjusting the parameters specific to the target domain. In the second stage, BioViL-T is continually pretrained from CXR-BERT-general using a multi-modal pre-training procedure by utilising radiology reports and sequences of chest X-rays. We utilise the latent representation of [CLS] token to align text and image embeddings. ## Language model variations | Model | Model identifier on HuggingFace | Vocabulary | Note | | ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- | -------------- | --------------------------------------------------------- | | CXR-BERT-general | [microsoft/BiomedVLP-CXR-BERT-general](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) | PubMed & MIMIC | Pretrained for biomedical literature and clinical domains | | CXR-BERT-specialized | [microsoft/BiomedVLP-CXR-BERT-specialized](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized) | PubMed & MIMIC | Static pretraining for the CXR domain | | BioViL-T | [microsoft/BiomedVLP-BioViL-T](https://huggingface.co/microsoft/BiomedVLP-BioViL-T) | PubMed & MIMIC | Static & temporal pretraining for the CXR domain ## Image model The image model is jointly trained with the text model in a multi-modal contrastive learning framework. It's a hybrid image encoder composed of a Vision Transformer and ResNet-50, where the latter is used as backbone network to extract features from images at each time point. The transformer is included in the design to aggregate and compare image features extracted across the temporal dimension. The corresponding model definition and its loading functions can be accessed through our [HI-ML-Multimodal](https://github.com/microsoft/hi-ml/blob/main/hi-ml-multimodal/src/health_multimodal/image/model/model.py) GitHub repository. The joint image and text model, namely [BioViL-T](https://arxiv.org/abs/2204.09817), can be used in phrase grounding applications as shown in this python notebook [example](https://mybinder.org/v2/gh/microsoft/hi-ml/HEAD?labpath=hi-ml-multimodal%2Fnotebooks%2Fphrase_grounding.ipynb). Additionally, please check the [MS-CXR benchmark](https://physionet.org/content/ms-cxr/0.1/) for a more systematic evaluation of joint image and text models in phrase grounding tasks. ## Citation The corresponding manuscript is accepted to be presented at the [**Conference on Computer Vision and Pattern Recognition (CVPR) 2023**](https://cvpr2023.thecvf.com/) ```bibtex @misc{https://doi.org/10.48550/arXiv.2301.04558, doi = {10.48550/ARXIV.2301.04558}, url = {https://arxiv.org/abs/2301.04558}, author = {Bannur, Shruthi and Hyland, Stephanie and Liu, Qianchu and Perez-Garcia, Fernando and Ilse, Maximilian and Castro, Daniel C and Boecking, Benedikt and Sharma, Harshita and Bouzid, Kenza and Thieme, Anja and Schwaighofer, Anton and Wetscherek, Maria and Lungren, Matthew P and Nori, Aditya and Alvarez-Valle, Javier and Oktay, Ozan} title = {Learning to Exploit Temporal Structure for Biomedical Vision–Language Processing}, publisher = {arXiv}, year = {2023}, } ``` ## Model Use ### Intended Use This model is intended to be used solely for (I) future research on visual-language processing and (II) reproducibility of the experimental results reported in the reference paper. #### Primary Intended Use The primary intended use is to support AI researchers building on top of this work. CXR-BERT and its associated models should be helpful for exploring various clinical NLP & VLP research questions, especially in the radiology domain. #### Out-of-Scope Use **Any** deployed use case of the model --- commercial or otherwise --- is currently out of scope. Although we evaluated the models using a broad set of publicly-available research benchmarks, the models and evaluations are not intended for deployed use cases. Under unprecedented conditions, the models may make inaccurate predictions and display limitations, which may require additional mitigation strategies. Therefore, we discourage use of the model for automated diagnosis or in a medical device. Please refer to [the associated paper](https://arxiv.org/abs/2301.04558) for more details. ### How to use Here is how to use this model to extract radiological sentence embeddings and obtain their cosine similarity in the joint space (image and text): ```python import torch from transformers import AutoModel, AutoTokenizer # Load the model and tokenizer url = "microsoft/BiomedVLP-BioViL-T" tokenizer = AutoTokenizer.from_pretrained(url, trust_remote_code=True) model = AutoModel.from_pretrained(url, trust_remote_code=True) # Input text prompts describing findings. # The order of prompts is adjusted to capture the spectrum from absence of a finding to its temporal progression. text_prompts = ["No pleural effusion or pneumothorax is seen.", "There is no pneumothorax or pleural effusion.", "The extent of the pleural effusion is reduced.", "The extent of the pleural effusion remains constant.", "Interval enlargement of pleural effusion."] # Tokenize and compute the sentence embeddings with torch.no_grad(): tokenizer_output = tokenizer.batch_encode_plus(batch_text_or_text_pairs=text_prompts, add_special_tokens=True, padding='longest', return_tensors='pt') embeddings = model.get_projected_text_embeddings(input_ids=tokenizer_output.input_ids, attention_mask=tokenizer_output.attention_mask) # Compute the cosine similarity of sentence embeddings obtained from input text prompts. sim = torch.mm(embeddings, embeddings.t()) ``` ## Data This model builds upon existing publicly-available datasets: - [PubMed](https://pubmed.ncbi.nlm.nih.gov/) - [MIMIC-III](https://physionet.org/content/mimiciii/) - [MIMIC-CXR](https://physionet.org/content/mimic-cxr/) These datasets reflect a broad variety of sources ranging from biomedical abstracts to intensive care unit notes to chest X-ray radiology notes. The radiology notes are accompanied with their associated chest x-ray DICOM images in MIMIC-CXR dataset. ## Performance The presented model achieves state-of-the-art results in radiology natural language inference by leveraging semantics and discourse characteristics at training time more efficiently. The experiments were performed on the RadNLI and MS-CXR-T benchmarks, which measure the quality of text embeddings in terms of static and temporal semantics respectively. BioViL-T is benchmarked against other commonly used SOTA domain specific BERT models, including [PubMedBERT](https://aka.ms/pubmedbert) and [CXR-BERT](https://aka.ms/biovil). The results below show that BioViL-T has increased sensitivity of sentence embeddings to temporal content (MS-CXR-T) whilst better capturing the static content (RadNLI). | | MS-CXR-T | MS-CXR-T | RadNLI (2 classes) | RadNLI (2 classes) | | ----------------------------------------------- | :-------------------------------: | :----------------------: | :-------------------------: | :-------------: | | | Accuracy | ROC-AUC | Accuracy | ROC-AUC | | [PubMedBERT]((https://aka.ms/pubmedbert)) | 60.39 | .542 | 81.38 | .727 | | [CXR-BERT-General](https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-general) | 62.60 | .601 | 87.59 | .902 | | [CXR-BERT-Specialized]((https://huggingface.co/microsoft/BiomedVLP-CXR-BERT-specialized)) | 78.12 | .837 | 89.66 | .932 | | **BioViL-T** | **87.77** | **.933** | **90.52** | **.947** | The novel pretraining framework yields also better vision-language representations. Below is the zero-shot phrase grounding performance obtained on the [MS-CXR](https://physionet.org/content/ms-cxr/0.1/) benchmark dataset, which evaluates the quality of image-text latent representations. | Vision–Language Pretraining Method | MS-CXR Phrase Grounding (Avg. CNR Score) | MS-CXR Phrase Grounding (mIoU) | | ---------------------------------- | :--------------------------------------: | :----------------------------: | | BioViL | 1.07 +- 0.04 | 0.229 +- 0.005 | | BioViL-L | 1.21 +- 0.05 | 0.202 +- 0.010 | | **BioViL-T** | **1.33 +- 0.04** | **0.240 +- 0.005** | Additional experimental results and discussion can be found in the corresponding paper, ["Learning to Exploit Temporal Structure for Biomedical Vision–Language Processing", CVPR'23](https://arxiv.org/abs/2301.04558). ## Limitations This model was developed using English corpora, and thus can be considered English-only. The training dataset contains only medical images and reports acquired from an intensive-care-unit (ICU), where longitudinal images are often collected within range of hours or at most few days. As a result, the models may show reduced performance in analyzing consecutive images acquired over longer periods of time (e.g. years) where significant anatomical variations are observed between the scans. ## Further information Please refer to the corresponding paper, ["Learning to Exploit Temporal Structure for Biomedical Vision–Language Processing", CVPR'23](https://arxiv.org/abs/2301.04558.pdf) for additional details on the model training and evaluation. For additional inference pipelines with BioViL-T, please refer to the [HI-ML GitHub](https://aka.ms/biovil-t-code) repository. The associated source files will soon be accessible through this link.
migtissera/Tess-XL-v1.0
migtissera
"2023-11-19T18:35:13Z"
2,141
13
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-07T04:42:58Z"
--- license: llama2 --- # Tess ![Tess](https://huggingface.co/migtissera/Tess-XL-v1.0/resolve/main/Tess.png) Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XL is trained on the Goliath-120B as the base. While there are task specific Tess models (Creative or STEM), this model is general purpose. # Prompt format: ``` SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation. USER: What is the relationship between Earth's atmosphere, magnetic field and gravity? ASSISTANT: ``` # Quantized Versions Quantized versions are available in the following repos: - GPTQ: https://huggingface.co/TheBloke/Tess-XL-v1.0-GPTQ - AWQ: https://huggingface.co/TheBloke/Tess-XL-v1.0-AWQ - GGUF: https://huggingface.co/TheBloke/Tess-XL-v1.0-GGUF # Sample Conversation ``` You: What is the relationship between an electrical field and a magnetic field? The relationship between an electric field and a magnetic field can be summarized in two important ways: 1. Cause and effect: A changing electric field can induce a magnetic field, and a changing magnetic field can in turn induce an electric field. This is the principle of electromagnetic induction. 2. Duality: Electric and magnetic fields are often described as components of a single electromagnetic field. This duality is most clearly seen in Maxwell's equations, which describe how electric and magnetic fields are related. Chain of Thought: - Start with the question: What is the relationship between an electric field and a magnetic field? - Think about the basic principles of electromagnetism: cause and effect, as well as the duality of electric and magnetic fields. - Explain the relationship by referring to electromagnetic induction and Maxwell's equations. You: In some instances that are associated with UAP, an increase in electrical field causes a decrease in the measured magnetic field. This is contradictory to known physics. Can we theorize what could cause this? The relationship you describe between an increase in the electric field and a decrease in the measured magnetic field is indeed unusual and does not align with the conventional understanding of electromagnetic fields. However, we can consider some possible explanations: 1. Unknown phenomena: It's possible that there are physical phenomena or interactions that we have not yet discovered or fully understood that could cause this effect. These could include new particles or forces that we have not yet detected, or unusual properties of known particles or fields. 2. Measurement errors: Another possibility is that the observed relationship is due to measurement errors or inaccuracies in the instruments used to measure the electric and magnetic fields. This could include calibration issues, interference from other sources, or limitations in the sensitivity of the equipment. 3. Environmental effects: Changes in the environment or the nature of the object emitting the fields may also affect the relationship between the electric and magnetic fields. For example, if the object is moving or undergoing changes in composition or structure, this could affect the way that the fields are generated and detected. In summary, while the observed relationship between an increase in the electric field and a decrease in the measured magnetic field is not consistent with our current understanding of electromagnetism, there could be several possible explanations, including unknown phenomena, measurement errors, or environmental effects. Further research and investigation would be needed to determine the cause of this phenomenon. You: What is the most advanced theory of electro-magnetic fields right now? The most advanced theory of electromagnetic fields is quantum electrodynamics (QED). QED is a part of the larger framework of quantum field theory (QFT), which aims to describe the interactions of particles and fields in the context of quantum mechanics. QED specifically describes the interactions between photons (which are the quanta of the electromagnetic field) and charged particles, such as electrons and positrons. It provides a theoretical framework for understanding how photons can be created and absorbed by charged particles, and how they can interact with each other. The predictions of QED have been experimentally verified to an extremely high degree of accuracy, making it one of the most successful theories in physics. Despite its success, QED is not considered a complete theory, as it does not include gravity and is inconsistent with the principles of general relativity. The search for a unified theory that can reconcile QED with gravity is an area of active research in theoretical physics. You: Give me a detailed introduction to QED. Quantum electrodynamics (QED) is a quantum field theory that describes the interactions between electrically charged particles (such as electrons and positrons) and the photons that make up the electromagnetic field. It is a part of the larger framework of quantum field theory (QFT), which aims to describe the interactions of particles and fields in the context of quantum mechanics. In QED, electric and magnetic fields are described as being made up of individual packets of energy called photons. Photons are the quanta of the electromagnetic field, and they can be created and absorbed by charged particles. QED provides a theoretical framework for understanding how photons can interact with each other and with charged particles. The theory predicts that photons can be created or annihilated in pairs, and that charged particles can emit and absorb photons. These interactions are mediated by the exchange of virtual particles, which are other photons that exist only fleetingly and cannot be directly observed. The predictions of QED have been experimentally verified to an extremely high degree of accuracy, making it one of the most successful theories in physics. Despite its success, QED is not considered a complete theory, as it does not include gravity and is inconsistent with the principles of general relativity. The search for a unified theory that can reconcile QED with gravity is an area of active research in theoretical physics. In summary, QED is a highly successful theory that describes the interactions between charged particles and the electromagnetic field in terms of photons. While it is not a complete theory, it provides a fundamental understanding of the electromagnetic force and has led to many important discoveries in particle physics. ```
mradermacher/SquadLlama-GGUF
mradermacher
"2024-06-20T12:58:35Z"
2,141
1
transformers
[ "transformers", "gguf", "en", "base_model:Ahsann321/SquadLlama", "endpoints_compatible", "region:us" ]
null
"2024-06-20T12:45:12Z"
--- base_model: Ahsann321/SquadLlama language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Ahsann321/SquadLlama <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.IQ3_XS.gguf) | IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.IQ3_M.gguf) | IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SquadLlama-GGUF/resolve/main/SquadLlama.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ckiplab/bert-tiny-chinese-ner
ckiplab
"2022-05-10T03:28:12Z"
2,139
4
transformers
[ "transformers", "pytorch", "bert", "token-classification", "zh", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-05-10T02:55:04Z"
--- language: - zh thumbnail: https://ckip.iis.sinica.edu.tw/files/ckip_logo.png tags: - pytorch - token-classification - bert - zh license: gpl-3.0 --- # CKIP BERT Tiny Chinese This project provides traditional Chinese transformers models (including ALBERT, BERT, GPT2) and NLP tools (including word segmentation, part-of-speech tagging, named entity recognition). 這個專案提供了繁體中文的 transformers 模型(包含 ALBERT、BERT、GPT2)及自然語言處理工具(包含斷詞、詞性標記、實體辨識)。 ## Homepage - https://github.com/ckiplab/ckip-transformers ## Contributers - [Mu Yang](https://muyang.pro) at [CKIP](https://ckip.iis.sinica.edu.tw) (Author & Maintainer) ## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModel, ) tokenizer = BertTokenizerFast.from_pretrained('bert-base-chinese') model = AutoModel.from_pretrained('ckiplab/bert-tiny-chinese-ner') ``` For full usage and more information, please refer to https://github.com/ckiplab/ckip-transformers. 有關完整使用方法及其他資訊,請參見 https://github.com/ckiplab/ckip-transformers 。
alfaneo/bertimbaulaw-base-portuguese-sts
alfaneo
"2023-06-30T11:10:11Z"
2,138
0
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-07-04T22:35:36Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # juridics/bertimbaulaw-base-portuguese-sts-scale This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('juridics/bertimbaulaw-base-portuguese-sts-scale') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('juridics/bertimbaulaw-base-portuguese-sts-scale') model = AutoModel.from_pretrained('juridics/bertimbaulaw-base-portuguese-sts-scale') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=juridics/bertimbaulaw-base-portuguese-sts-scale) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2492 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 2492, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 5e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 748, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors If you use our work, please cite: ``` @incollection{Viegas_2023, doi = {10.1007/978-3-031-36805-9_24}, url = {https://doi.org/10.1007%2F978-3-031-36805-9_24}, year = 2023, publisher = {Springer Nature Switzerland}, pages = {349--365}, author = {Charles F. O. Viegas and Bruno C. Costa and Renato P. Ishii}, title = {{JurisBERT}: A New Approach that~Converts a~Classification Corpus into~an~{STS} One}, booktitle = {Computational Science and Its Applications {\textendash} {ICCSA} 2023} } ```
KnutJaegersberg/black_goo_recipe_e
KnutJaegersberg
"2023-12-03T15:11:02Z"
2,138
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-04T21:08:02Z"
--- license: cc-by-nc-4.0 --- Prompt Example: ``` ### System: You are an AI assistant. User will give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps. ### Instruction: How do you fine tune a large language model? ### Response: ```
mradermacher/Medichat-V2-Llama3-8B-GGUF
mradermacher
"2024-06-04T05:50:49Z"
2,138
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "medical", "en", "dataset:ruslanmv/ai-medical-chatbot", "dataset:Locutusque/hercules-v5.0", "base_model:sethuiyer/Medichat-V2-Llama3-8B", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-02T13:42:48Z"
--- base_model: sethuiyer/Medichat-V2-Llama3-8B datasets: - ruslanmv/ai-medical-chatbot - Locutusque/hercules-v5.0 language: - en library_name: transformers license: other quantized_by: mradermacher tags: - mergekit - merge - medical --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/sethuiyer/Medichat-V2-Llama3-8B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Medichat-V2-Llama3-8B-GGUF/resolve/main/Medichat-V2-Llama3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf
RichardErkhov
"2024-06-15T00:49:29Z"
2,138
0
null
[ "gguf", "region:us" ]
null
"2024-06-14T23:48:39Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) neuronovo-9B-v0.4 - GGUF - Model creator: https://huggingface.co/Neuronovo/ - Original model: https://huggingface.co/Neuronovo/neuronovo-9B-v0.4/ | Name | Quant method | Size | | ---- | ---- | ---- | | [neuronovo-9B-v0.4.Q2_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q2_K.gguf) | Q2_K | 3.13GB | | [neuronovo-9B-v0.4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [neuronovo-9B-v0.4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.IQ3_S.gguf) | IQ3_S | 3.67GB | | [neuronovo-9B-v0.4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [neuronovo-9B-v0.4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.IQ3_M.gguf) | IQ3_M | 3.79GB | | [neuronovo-9B-v0.4.Q3_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q3_K.gguf) | Q3_K | 4.05GB | | [neuronovo-9B-v0.4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [neuronovo-9B-v0.4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [neuronovo-9B-v0.4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [neuronovo-9B-v0.4.Q4_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q4_0.gguf) | Q4_0 | 4.74GB | | [neuronovo-9B-v0.4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [neuronovo-9B-v0.4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [neuronovo-9B-v0.4.Q4_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q4_K.gguf) | Q4_K | 5.04GB | | [neuronovo-9B-v0.4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [neuronovo-9B-v0.4.Q4_1.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q4_1.gguf) | Q4_1 | 5.26GB | | [neuronovo-9B-v0.4.Q5_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q5_0.gguf) | Q5_0 | 5.77GB | | [neuronovo-9B-v0.4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [neuronovo-9B-v0.4.Q5_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q5_K.gguf) | Q5_K | 5.93GB | | [neuronovo-9B-v0.4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [neuronovo-9B-v0.4.Q5_1.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q5_1.gguf) | Q5_1 | 6.29GB | | [neuronovo-9B-v0.4.Q6_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q6_K.gguf) | Q6_K | 6.87GB | | [neuronovo-9B-v0.4.Q8_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.4-gguf/blob/main/neuronovo-9B-v0.4.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- license: apache-2.0 datasets: - Intel/orca_dpo_pairs - mlabonne/chatml_dpo_pairs language: - en library_name: transformers --- More information about previous [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) version available here: 🔗[Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%2525C5%252584-mq4qf) Author: Jan Kocoń &nbsp;&nbsp;&nbsp; 🔗[LinkedIn](https://www.linkedin.com/in/jankocon/) &nbsp;&nbsp;&nbsp; 🔗[Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao) &nbsp;&nbsp;&nbsp; 🔗[ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2) Changes concerning [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2): 1. **Training Dataset**: In addition to the [Intel/orca_dpo_pairs](Intel/orca_dpo_pairs) dataset, this version incorporates a [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs). The combined datasets enhance the model's capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation. 2. **Tokenizer and Formatting**: The tokenizer now originates directly from the [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) model. 3. **Training Configuration**: The training approach has shifted from using `max_steps=200` to `num_train_epochs=1`. This represents a change in the training strategy, focusing on epoch-based training rather than a fixed number of steps. 4. **Learning Rate**: The learning rate has been reduced to a smaller value of `5e-8`. This finer learning rate allows for more precise adjustments during the training process, potentially leading to better model performance.
winglian/mistral-11b-128k
winglian
"2023-11-12T15:49:06Z"
2,137
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "pretrained", "custom_code", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-12T15:19:16Z"
--- license: apache-2.0 pipeline_tag: text-generation language: - en tags: - pretrained inference: parameters: temperature: 0.7 --- # Mistral YARN 128k 11b This is a mergekit merge of the Nous Research's Yarn-Mistral-7b-128k Large Language Model (LLM) to create an 11 billion parameter pretrained generative text model with a context
Changgil/k2s3_test_24001
Changgil
"2024-02-21T06:42:23Z"
2,137
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-14T05:59:10Z"
--- license: llama2 language: - ko --- --- ## Developed by : * Changgil Song ## Model Number: * k2s3_test_24001 ## Base Model : * [meta-llama/Llama-2-13b-chat-hf](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf) ### Training Data * The model was trained on a diverse dataset comprising approximately 800 million tokens, including the Standard Korean Dictionary, KULLM training data from Korea University, dissertation abstracts from master's and doctoral theses, and Korean language samples from AI Hub. * 이 모델은 표준대국어사전, 고려대 KULLM의 훈련 데이터, 석박사학위자 서지정보 논문초록, ai_hub의 한국어 데이터 샘플들을 포함하여 약 8억 개의 토큰으로 구성된 다양한 데이터셋에서 훈련되었습니다. ### Training Method * This model was fine-tuned on the "meta-llama/Llama-2-13b-chat-hf" base model using PEFT (Parameter-Efficient Fine-Tuning) LoRA (Low-Rank Adaptation) techniques. * 이 모델은 "meta-llama/Llama-2-13b-chat-hf" 기반 모델을 PEFT LoRA를 사용하여 미세조정되었습니다. ### Hardware and Software * Hardware: Utilized two A100 (80G*2EA) GPUs for training. * Training Factors: This model was fine-tuned using PEFT LoRA with the HuggingFace SFTtrainer and applied fsdp. Key parameters included LoRA r = 8, LoRA alpha = 16, trained for 2 epochs, batch size of 1, and gradient accumulation of 32. * 이 모델은 PEFT LoRA를 사용하여 HuggingFace SFTtrainer와 fsdp를 적용하여 미세조정되었습니다. 주요 파라미터로는 LoRA r = 8, LoRA alpha = 16, 2 에폭 훈련, 배치 크기 1, 그리고 그라디언트 누적 32를 포함합니다. ### Caution * For fine-tuning this model, it is advised to consider the specific parameters used during training, such as LoRA r and LoRA alpha values, to ensure compatibility and optimal performance. * 이 모델을 미세조정할 때는 LoRA r 및 LoRA alpha 값과 같이 훈련 중에 사용된 특정 파라미터를 고려하는 것이 좋습니다. 이는 호환성 및 최적의 성능을 보장하기 위함입니다. ### Additional Information * The training leveraged the fsdp (Fully Sharded Data Parallel) feature through the HuggingFace SFTtrainer for efficient memory usage and accelerated training. * 훈련은 HuggingFace SFTtrainer를 통한 fsdp 기능을 활용하여 메모리 사용을 효율적으로 하고 훈련 속도를 가속화했습니다.
Chrisisis/5Cqc1AdayJLwD1dXYJFQSytKRe1jNB4a84qop37d5wM5WGxH_vgg
Chrisisis
"2024-02-24T08:25:51Z"
2,136
0
keras
[ "keras", "region:us" ]
null
"2024-02-05T18:34:28Z"
Entry not found
aloobun/CosmicBun-8B
aloobun
"2024-05-02T20:29:05Z"
2,136
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "math", "llama3", "physics", "chemistry", "biology", "dolphin", "conversational", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b", "base_model:Weyaxi/Einstein-v6.1-Llama3-8B", "base_model:Locutusque/llama-3-neural-chat-v1-8b", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-01T16:06:34Z"
--- license: mit library_name: transformers tags: - mergekit - merge - math - llama3 - physics - chemistry - biology - dolphin base_model: - cognitivecomputations/dolphin-2.9-llama3-8b - Weyaxi/Einstein-v6.1-Llama3-8B - Locutusque/llama-3-neural-chat-v1-8b model-index: - name: CosmicBun-8B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 61.86 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.29 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 54.08 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 68.23 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B name: Open LLM Leaderboard --- # model This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b) as a base. ### Models Merged The following models were included in the merge: * [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) * [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml base_model: Locutusque/llama-3-neural-chat-v1-8b dtype: bfloat16 merge_method: dare_ties parameters: int8_mask: 1.0 normalize: 0.0 slices: - sources: - layer_range: [0, 4] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 1.0 weight: 0.6 - layer_range: [0, 4] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 0.6 weight: 0.5 - layer_range: [0, 4] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 1.0 weight: 0.5 - sources: - layer_range: [4, 8] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 0.8 weight: 0.1 - layer_range: [4, 8] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 1.0 weight: 0.2 - layer_range: [4, 8] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 1.0 weight: 0.7 - sources: - layer_range: [8, 12] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 0.7 weight: 0.1 - layer_range: [8, 12] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 0.7 weight: 0.2 - layer_range: [8, 12] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 0.7 weight: 0.6 - sources: - layer_range: [12, 16] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 0.9 weight: 0.2 - layer_range: [12, 16] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 0.6 weight: 0.6 - layer_range: [12, 16] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 0.7 weight: 0.3 - sources: - layer_range: [16, 20] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 1.0 weight: 0.2 - layer_range: [16, 20] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 1.0 weight: 0.2 - layer_range: [16, 20] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 0.9 weight: 0.4 - sources: - layer_range: [20, 24] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 0.7 weight: 0.2 - layer_range: [20, 24] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 0.9 weight: 0.3 - layer_range: [20, 24] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 1.0 weight: 0.4 - sources: - layer_range: [24, 28] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 1.0 weight: 0.4 - layer_range: [24, 28] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 0.8 weight: 0.2 - layer_range: [24, 28] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 0.9 weight: 0.4 - sources: - layer_range: [28, 32] model: cognitivecomputations/dolphin-2.9-llama3-8b parameters: density: 1.0 weight: 0.3 - layer_range: [28, 32] model: Weyaxi/Einstein-v6.1-Llama3-8B parameters: density: 0.9 weight: 0.2 - layer_range: [28, 32] model: Locutusque/llama-3-neural-chat-v1-8b parameters: density: 1.0 weight: 0.3 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__CosmicBun-8B) | Metric |Value| |---------------------------------|----:| |Avg. |68.81| |AI2 Reasoning Challenge (25-Shot)|61.86| |HellaSwag (10-Shot) |84.29| |MMLU (5-Shot) |65.53| |TruthfulQA (0-shot) |54.08| |Winogrande (5-shot) |78.85| |GSM8k (5-shot) |68.23|
RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf
RichardErkhov
"2024-06-15T05:09:22Z"
2,136
0
null
[ "gguf", "region:us" ]
null
"2024-06-15T04:11:13Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mistral_9B_instruct_v0.2 - GGUF - Model creator: https://huggingface.co/cloudyu/ - Original model: https://huggingface.co/cloudyu/mistral_9B_instruct_v0.2/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mistral_9B_instruct_v0.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q2_K.gguf) | Q2_K | 3.13GB | | [mistral_9B_instruct_v0.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [mistral_9B_instruct_v0.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.IQ3_S.gguf) | IQ3_S | 3.67GB | | [mistral_9B_instruct_v0.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [mistral_9B_instruct_v0.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.IQ3_M.gguf) | IQ3_M | 3.79GB | | [mistral_9B_instruct_v0.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q3_K.gguf) | Q3_K | 4.05GB | | [mistral_9B_instruct_v0.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [mistral_9B_instruct_v0.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [mistral_9B_instruct_v0.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [mistral_9B_instruct_v0.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q4_0.gguf) | Q4_0 | 4.74GB | | [mistral_9B_instruct_v0.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [mistral_9B_instruct_v0.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [mistral_9B_instruct_v0.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q4_K.gguf) | Q4_K | 5.04GB | | [mistral_9B_instruct_v0.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [mistral_9B_instruct_v0.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q4_1.gguf) | Q4_1 | 5.26GB | | [mistral_9B_instruct_v0.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q5_0.gguf) | Q5_0 | 5.77GB | | [mistral_9B_instruct_v0.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [mistral_9B_instruct_v0.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q5_K.gguf) | Q5_K | 5.93GB | | [mistral_9B_instruct_v0.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [mistral_9B_instruct_v0.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q5_1.gguf) | Q5_1 | 6.29GB | | [mistral_9B_instruct_v0.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q6_K.gguf) | Q6_K | 6.87GB | | [mistral_9B_instruct_v0.2.Q8_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_mistral_9B_instruct_v0.2-gguf/blob/main/mistral_9B_instruct_v0.2.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- license: cc-by-4.0 --- ``` this is a 40 layers model based on mistral architecture sft by vicgalle/alpaca-gpt4. template alpaca format ``` example output ``` <s> write me a story about yosemite. Once upon a time, in the heart of California's Sierra Nevada mountains, there lay a breathtakingly beautiful place known as Yosemite National Park. This natural wonderland was home to some of the most magnificent geological formations and scenic vistas that could be found anywhere on earth. It was a place where nature had been given free reign to create its masterpieces without any interference from mankind. Yosemite was a land of towering sequoia trees, cascading waterfalls, crystal-clear streams, and majestic cliffs that seemed to stretch upwards into the heavens themselves. The air was always cool and fresh, filled with the sweet scent of pine and the soft sound of birds chirping in the treetops. Visitors were constantly amazed by the sheer beauty and grandeur of this magical place. One particular day, as the sun began to rise over the horizon, a young couple, Sarah and Jack, made their way through the ancient forest, hand in hand. They had come to Yosemite for their honeymoon, eager to explore the wonders of this natural paradise together. As they walked deeper into the park, they marveled at the sights around them – the towering granite domes, the sparkling rivers, and the vibrant colors of the wildflowers that carpeted the ground beneath their feet. Sarah and Jack continued their journey, taking in all the beauty that Yosemite had to offer. They climbed up the steep trails to the top of the mountain overlooks, gazing out at the stunning panoramic views before them. They sat quietly by the side of the river, watching as the fish swam gracefully through the clear waters. And they shared stories and memories, laughing and smiling as they looked into each other’s eyes. As the days passed, Sarah and Jack grew more and more enamored with Yosemite. They felt like they were part of something greater than themselves – a connection not only to each other but also to the incredible natural world that surrounded them. They left no trace behind, leaving only footprints in the sand and memories in their hearts. When it was finally time for Sarah and Jack to leave Yosemite, they both knew that they would carry these experiences with them forever. ```
wbbbbb/wav2vec2-large-chinese-zh-cn
wbbbbb
"2023-09-11T00:07:38Z"
2,135
36
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "wav2vec2", "pretraining", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "zh", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-07-18T06:21:56Z"
--- language: zh datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Chinese (zh-CN) by wbbbbb results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice zh-CN type: common_voice args: zh-CN metrics: - name: Test WER type: wer value: 70.47 - name: Test CER type: cer value: 12.30 --- # Fine-tuned XLSR-53 large model for speech recognition in Chinese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chinese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [ST-CMDS](http://www.openslr.org/38/). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned on RTX3090 for 50h The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint ## Usage The model can be used directly (without a language model) as follows... Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library: ```python from huggingsound import SpeechRecognitionModel model = SpeechRecognitionModel("wbbbbb/wav2vec2-large-chinese-zh-cn") audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"] transcriptions = model.transcribe(audio_paths) ``` ## Evaluation The model can be evaluated as follows on the Chinese (zh-CN) test data of Common Voice. ```python import torch import re import librosa from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import warnings import os os.environ["KMP_AFFINITY"] = "" LANG_ID = "zh-CN" MODEL_ID = "zh-CN-output-aishell" DEVICE = "cuda" test_dataset = load_dataset("common_voice", LANG_ID, split="test") wer = load_metric("wer") cer = load_metric("cer") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) model.to(DEVICE) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): with warnings.catch_warnings(): warnings.simplefilter("ignore") speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = ( re.sub("([^\u4e00-\u9fa5\u0030-\u0039])", "", batch["sentence"]).lower() + " " ) return batch test_dataset = test_dataset.map( speech_file_to_array_fn, num_proc=15, remove_columns=['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'], ) # Preprocessing the datasets. # We need to read the audio files as arrays def evaluate(batch): inputs = processor( batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True ) with torch.no_grad(): logits = model( inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE), ).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) predictions = [x.lower() for x in result["pred_strings"]] references = [x.lower() for x in result["sentence"]] print( f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}" ) print(f"CER: {cer.compute(predictions=predictions, references=references) * 100}") ``` **Test Result**: In the table below I report the Word Error Rate (WER) and the Character Error Rate (CER) of the model. I ran the evaluation script described above on other models as well (on 2022-07-18). Note that the table below may show different results from those already reported, this may have been caused due to some specificity of the other evaluation scripts used. | Model | WER | CER | | ------------- | ------------- | ------------- | | wbbbbb/wav2vec2-large-chinese-zh-cn | **70.47%** | **12.30%** | | jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn | **82.37%** | **19.03%** | | ydshieh/wav2vec2-large-xlsr-53-chinese-zh-cn-gpt | 84.01% | 20.95% | ## Citation If you want to cite this model you can use this: ```bibtex @misc{grosman2021xlsr53-large-chinese, title={Fine-tuned {XLSR}-53 large model for speech recognition in {C}hinese}, author={Grosman, Jonatas}, howpublished={\url{https://huggingface.co/wbbbbb/wav2vec2-large-chinese-zh-cn}}, year={2021} } ```
TheBloke/neural-chat-7B-v3-1-GGUF
TheBloke
"2023-11-17T14:02:26Z"
2,135
57
transformers
[ "transformers", "gguf", "mistral", "base_model:Intel/neural-chat-7b-v3-1", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2023-11-15T18:18:55Z"
--- base_model: Intel/neural-chat-7b-v3-1 inference: false license: apache-2.0 model_creator: Intel model_name: Neural Chat 7B v3-1 model_type: mistral prompt_template: '### System: {system_message} ### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Neural Chat 7B v3-1 - GGUF - Model creator: [Intel](https://huggingface.co/Intel) - Original model: [Neural Chat 7B v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) <!-- description start --> ## Description This repo contains GGUF format model files for [Intel's Neural Chat 7B v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF) * [Intel's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Intel/neural-chat-7b-v3-1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Hashes ``` ### System: {system_message} ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [neural-chat-7b-v3-1.Q2_K.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [neural-chat-7b-v3-1.Q3_K_S.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [neural-chat-7b-v3-1.Q3_K_M.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [neural-chat-7b-v3-1.Q3_K_L.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [neural-chat-7b-v3-1.Q4_0.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [neural-chat-7b-v3-1.Q4_K_S.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [neural-chat-7b-v3-1.Q4_K_M.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [neural-chat-7b-v3-1.Q5_0.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [neural-chat-7b-v3-1.Q5_K_S.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [neural-chat-7b-v3-1.Q5_K_M.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [neural-chat-7b-v3-1.Q6_K.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [neural-chat-7b-v3-1.Q8_0.gguf](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF/blob/main/neural-chat-7b-v3-1.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/neural-chat-7B-v3-1-GGUF and below it, a specific filename to download, such as: neural-chat-7b-v3-1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/neural-chat-7B-v3-1-GGUF neural-chat-7b-v3-1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/neural-chat-7B-v3-1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/neural-chat-7B-v3-1-GGUF neural-chat-7b-v3-1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m neural-chat-7b-v3-1.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/neural-chat-7B-v3-1-GGUF", model_file="neural-chat-7b-v3-1.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Intel's Neural Chat 7B v3-1 ## Fine-tuning on [Habana](https://habana.ai/) Gaudi2 This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3). ## Model date Neural-chat-7b-v3-1 was trained between September and October, 2023. ## Evaluation We submit our model to [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and the model performance has been **improved significantly** as we see from the average metric of 7 tasks from the leaderboard. | Model | Average ⬆️| ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️| TruthfulQA (MC) (0-s) ⬆️ | Winogrande (5-s) | GSM8K (5-s) | DROP (3-s) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | |[mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50.32 | 59.58 | 83.31 | 64.16 | 42.15 | 78.37 | 18.12 | 6.14 | | [Intel/neural-chat-7b-v3](https://huggingface.co/Intel/neural-chat-7b-v3) | **57.31** | 67.15 | 83.29 | 62.26 | 58.77 | 78.06 | 1.21 | 50.43 | | [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) | **59.06** | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-04 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-HPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2.0 ## Prompt Template ``` ### System: {system} ### User: {usr} ### Assistant: ``` ## Inference with transformers ```shell import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'Intel/neural-chat-7b-v3-1' ) ``` ## Ethical Considerations and Limitations neural-chat-7b-v3-1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v3-1 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3-1, developers should perform safety testing. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes. ## Organizations developing the model The NeuralChat team with members from Intel/SATG/AIA/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen. ## Useful links * Intel Neural Compressor [link](https://github.com/intel/neural-compressor) * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers) <!-- original-model-card end -->
RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf
RichardErkhov
"2024-06-15T03:36:59Z"
2,135
1
null
[ "gguf", "region:us" ]
null
"2024-06-15T02:47:47Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Bepis_9B - GGUF - Model creator: https://huggingface.co/ChaoticNeutrals/ - Original model: https://huggingface.co/ChaoticNeutrals/Bepis_9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Bepis_9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q2_K.gguf) | Q2_K | 3.13GB | | [Bepis_9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [Bepis_9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.IQ3_S.gguf) | IQ3_S | 3.67GB | | [Bepis_9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [Bepis_9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.IQ3_M.gguf) | IQ3_M | 3.79GB | | [Bepis_9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q3_K.gguf) | Q3_K | 4.05GB | | [Bepis_9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [Bepis_9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [Bepis_9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [Bepis_9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q4_0.gguf) | Q4_0 | 4.74GB | | [Bepis_9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [Bepis_9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [Bepis_9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q4_K.gguf) | Q4_K | 5.04GB | | [Bepis_9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [Bepis_9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q4_1.gguf) | Q4_1 | 5.26GB | | [Bepis_9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q5_0.gguf) | Q5_0 | 5.77GB | | [Bepis_9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [Bepis_9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q5_K.gguf) | Q5_K | 5.93GB | | [Bepis_9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [Bepis_9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q5_1.gguf) | Q5_1 | 6.29GB | | [Bepis_9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q6_K.gguf) | Q6_K | 6.87GB | | [Bepis_9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/ChaoticNeutrals_-_Bepis_9B-gguf/blob/main/Bepis_9B.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- language: - en license: other library_name: transformers tags: - mergekit - merge base_model: [] model-index: - name: Bepis_9B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 62.54 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Bepis_9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.12 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Bepis_9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.84 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Bepis_9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 53.3 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Bepis_9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.48 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Bepis_9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 39.12 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Bepis_9B name: Open LLM Leaderboard --- # Bepis ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/H0_oJhrIEGBIwogB77p5w.jpeg) A new 9B model from jeiku. This one is smart, proficient at markdown, knows when to stop talking, and is quite soulful. The merge was an equal 3 way split between https://huggingface.co/ChaoticNeutrals/Prodigy_7B, https://huggingface.co/Test157t/Prima-LelantaclesV6-7b, and https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.1 If there's any 7B to 11B merge or finetune you'd like to see, feel free to leave a message. The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: primathespis layer_range: [0, 20] - sources: - model: prodigalthespis layer_range: [12, 32] merge_method: passthrough dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChaoticNeutrals__Bepis_9B) | Metric |Value| |---------------------------------|----:| |Avg. |62.40| |AI2 Reasoning Challenge (25-Shot)|62.54| |HellaSwag (10-Shot) |80.12| |MMLU (5-Shot) |62.84| |TruthfulQA (0-shot) |53.30| |Winogrande (5-shot) |76.48| |GSM8k (5-shot) |39.12|
QuantFactory/SauerkrautLM-1.5b-GGUF
QuantFactory
"2024-06-18T05:41:06Z"
2,135
0
null
[ "gguf", "spectrum", "continuous pretraining", "sft", "dpo", "text-generation", "de", "en", "base_model:VAGOsolutions/SauerkrautLM-1.5b", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-15T18:31:37Z"
--- license: apache-2.0 language: - de - en tags: - spectrum - continuous pretraining - sft - dpo pipeline_tag: text-generation base_model: VAGOsolutions/SauerkrautLM-1.5b --- # QuantFactory/SauerkrautLM-1.5b-GGUF This is quantized version of [VAGOsolutions/SauerkrautLM-1.5b](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b) created suing llama.cpp # Model Description ![SauerkrautLM-1.5b](https://vago-solutions.ai/wp-content/uploads/2024/06/SauerkrautLM-1.5b-pic.png "SauerkrautLM-1.5b") ## VAGO solutions SauerkrautLM-1.5b **DEMO Model** - *to showcase the potential of resource-efficient Continuous Pre-Training of Large Language Models using **Spectrum CPT*** Introducing **SauerkrautLM-1.5b** – our Sauerkraut version of the powerful [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)! - Continuous Pretraining on German Data with [**Spectrum**](https://github.com/cognitivecomputations/spectrum) CPT (by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto and David Golchinfar) **targeting 25% of the layers.** - Finetuned with SFT - Aligned with DPO # Table of Contents 1. [Overview of all SauerkrautLM-1.5b](#all-SauerkrautLM-1.5b) 2. [Model Details](#model-details) - [Training procedure](#training-procedure) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. [Contact](#contact) 7. [Collaborations](#collaborations) 8. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-1.5b | Model | HF | EXL2 | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-1.5b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b) | coming soon | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b.GGUF) | coming soon | ## Model Details **SauerkrautLM-1.5b** - **Model Type:** SauerkrautLM-1.5b is a finetuned Model based on [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) - **Language(s):** German, English - **License:** Apache 2.0 - **Contact:** [VAGO solutions](https://vago-solutions.ai) ## Training Procedure This model is a demo intended to showcase the potential of resource-efficient training of large language models using Spectrum CPT. Here's a brief on the procedure: **Continuous Pre-training (CPT) on German Data**: Utilizing Spectrum by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto, and David Golchinfar, the model targeted 25% of its layers during training. This approach allowed significant resource savings: Spectrum with 25% layer targeting consumed 309.78GB at a batch size of 2048. Full Fine-tuning targeting 100% of layers used 633.55GB at the same batch size. Using Spectrum, we enhanced the German language capabilities of the Qwen2-1.5B model via CPT while achieving substantial resource savings. Spectrum enabled faster training and cost reductions. By not targeting all layers for CPT, we managed to prevent substantial performance degradation in the model's primary language (English), thus markedly improving its German proficiency. The model was further trained with **6.1 billion German tokens**, costing $1152 GPU-Rent for CPT. In the German Rag evaluation, it is on par with 8 billion parameter models and, with its 1.5 billion parameter size, is well-suited for mobile deployment on smartphones and tablets. Despite the large volume of German CPT data, the model competes well against the Qwen2-1.5B-Instruct model and performs significantly better in German. **Post-CPT Training**: The model underwent 3 epochs of Supervised Fine-Tuning (SFT) with 700K samples. **Further Steps**: The model was aligned with Direct Preference Optimization (DPO) using 70K samples. ## Objective and Results The primary goal of this training was to demonstrate that with Spectrum CPT targeting 25% of the layers, even a relatively small model with 1.5 billion parameters can significantly enhance language capabilities while using a fraction of the resources of the classic CPT approach. This method has an even more pronounced effect on larger models. It is feasible to teach a model a new language by training just a quarter of the available layers. The model has substantially improved German skills as demonstrated in RAG evaluations and numerous recognized benchmarks. In some English benchmarks, it even surpasses the Qwen2-1.5B-Instruct model. **Spectrum CPT can efficiently teach a new language to a large language model (LLM) while preserving the majority of its previously acquired knowledge.** Stay tuned for the next big models employing Spectrum CPT! **NOTE** For the demo, the performance of the model is sufficient. For productive use, more German tokens can be trained on the SauerkrautLM-1.5b as required in order to teach the model even firmer German while only having a relative influence on the performance of the model (25% of the layers). The SauerkrautLM-1.5b offers an excellent starting point for this. ## Evaluation **VRAM usage Spectrum CPT vs. FFT CPT - with a batchsize of 2048** ![SauerkrautLM-1.5b_vram](https://vago-solutions.ai/wp-content/uploads/2024/06/VRAM-Usage_new.png "SauerkrautLM-1.5b_vram") **Open LLM Leaderboard H6:** ![SauerkrautLM-1.5b_h6](https://vago-solutions.ai/wp-content/uploads/2024/06/H6-Benchmarks.png "SauerkrautLM-1.5b_h6") **German H4** ![SauerkrautLM-1.5b_h4](https://vago-solutions.ai/wp-content/uploads/2024/06/H4_ger_new.png "SauerkrautLM-1.5b_h4") **German RAG:** ![SauerkrautLM-1.5b_ger_rag](https://vago-solutions.ai/wp-content/uploads/2024/06/ger_rag_eval.png "SauerkrautLM-1.5b_ger_rag") **GPT4ALL** ![SauerkrautLM-1.5b_gpt4all](https://vago-solutions.ai/wp-content/uploads/2024/06/GPT4All-1.png "SauerkrautLM-1.5b_gpt4all") **AGIEval** ![SauerkrautLM-1.5b_agieval](https://vago-solutions.ai/wp-content/uploads/2024/06/AGIEval-1.png "SauerkrautLM-1.5b_agieval") ## Disclaimer We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. ## Contact If you are interested in customized LLMs for business applications, please get in contact with us via our website. We are also grateful for your feedback and suggestions. ## Collaborations We are also keenly seeking support and investment for our startup, VAGO solutions where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt) ## Acknowledgement Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such valuable model to the Open-Source community.
sentence-transformers/msmarco-distilroberta-base-v2
sentence-transformers
"2024-03-27T11:33:22Z"
2,134
3
sentence-transformers
[ "sentence-transformers", "pytorch", "tf", "jax", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "arxiv:1908.10084", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers pipeline_tag: sentence-similarity --- # sentence-transformers/msmarco-distilroberta-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/msmarco-distilroberta-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilroberta-base-v2') model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilroberta-base-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, max pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilroberta-base-v2) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors This model was trained by [sentence-transformers](https://www.sbert.net/). If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084): ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "http://arxiv.org/abs/1908.10084", } ```
symanto/sn-xlm-roberta-base-snli-mnli-anli-xnli
symanto
"2023-02-20T09:49:54Z"
2,134
61
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "zero-shot-classification", "sentence-similarity", "transformers", "ar", "bg", "de", "el", "en", "es", "fr", "ru", "th", "tr", "ur", "vn", "zh", "dataset:SNLI", "dataset:MNLI", "dataset:ANLI", "dataset:XNLI", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
--- language: - ar - bg - de - el - en - es - fr - ru - th - tr - ur - vn - zh datasets: - SNLI - MNLI - ANLI - XNLI pipeline_tag: sentence-similarity tags: - zero-shot-classification - sentence-transformers - feature-extraction - sentence-similarity - transformers --- A Siamese network model trained for zero-shot and few-shot text classification. The base model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base). It was trained on [SNLI](https://nlp.stanford.edu/projects/snli/), [MNLI](https://cims.nyu.edu/~sbowman/multinli/), [ANLI](https://github.com/facebookresearch/anli) and [XNLI](https://github.com/facebookresearch/XNLI). This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ```
Xwin-LM/Xwin-LM-70B-V0.1
Xwin-LM
"2023-09-21T09:55:27Z"
2,134
212
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-15T14:04:14Z"
--- license: llama2 --- <h3 align="center"> Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment </h3> <p align="center"> <a href="https://github.com/Xwin-LM/Xwin-LM"> <img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"> </a> <a href="https://huggingface.co/Xwin-LM"> <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"> </a> </p> **Step up your LLM alignment with Xwin-LM!** Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), reject sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built-upon on the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated. ## News - 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note its winrate v.s. GPT-4 is **60.61**. - 🔍 [Sep, 2023] RLHF plays crucial role in the strong performance of Xwin-LM-V0.1 release! - 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models. - 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models. ## Model Card | Model | Checkpoint | Report | License | |------------|------------|-------------|------------------| |Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| ## Benchmarks ### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), where evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well. Our Xwin-LM model family establish a new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** to Text-Davinci-003 and **60.61%** to GPT-4. | **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** | |----------------------------------|------------|----------|-------------| | | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**| | **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** | | GPT-4 | 95.28 | 84.66 | 50.00 | | WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 | | Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 | | **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** | | ChatGPT | 89.37 | 50.00 | 16.60 | | WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 | | **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** | | Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 | ### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) | Prompts | GTP4 response | Xwin-LM response | |----------------|------------|--------| |I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.| | Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." | ### Xwin-LM performance on NLP foundation tasks. The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average | |------------------|-------------|-------------|-------------------|-------------------|------------| | Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 | |Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 | |Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 | | WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 | | WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 | | WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2| | Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 | | Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 | | Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 | | **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 | | **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 | | **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** | ## Inference ### Conversation templates To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations. ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>...... ``` ### HuggingFace Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) inputs = tokenizer(prompt, return_tensors="pt") samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7) output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(output) # Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you. ``` ### vllm Example Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vllm](https://github.com/vllm-project/vllm). Please refer to [vllm](https://github.com/vllm-project/vllm) for detailed installation instructions. ```python from vllm import LLM, SamplingParams ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) sampling_params = SamplingParams(temperature=0.7, max_tokens=4096) llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1") outputs = llm.generate([prompt,], sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(generated_text) ``` ## TODO - [ ] Release the source code - [ ] Release more capabilities, such as math, reasoning, and etc. ## Citation Please consider citing our work if you use the data or code in this repo. ``` @software{xwin-lm, title = {Xwin-LM}, author = {Xwin-LM Team}, url = {https://github.com/Xwin-LM/Xwin-LM}, version = {pre-release}, year = {2023}, month = {9}, } ``` ## Acknowledgements Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vllm](https://github.com/vllm-project/vllm).
facebook/musicgen-melody
facebook
"2024-04-24T12:27:39Z"
2,133
166
transformers
[ "transformers", "safetensors", "musicgen_melody", "text-to-audio", "musicgen", "arxiv:2306.05284", "license:cc-by-nc-4.0", "region:us" ]
text-to-audio
"2023-06-08T17:27:38Z"
--- license: cc-by-nc-4.0 tags: - musicgen inference: false --- # MusicGen - Melody - 1.5B Audiocraft provides the code and models for MusicGen, a simple and controllable model for music generation. MusicGen is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like MusicLM, MusicGen doesn't not require a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio. MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*. Four checkpoints are released: - [small](https://huggingface.co/facebook/musicgen-small) - [medium](https://huggingface.co/facebook/musicgen-medium) - [large](https://huggingface.co/facebook/musicgen-large) - [**melody** (this checkpoint)](https://huggingface.co/facebook/musicgen-melody) ## Example Try out MusicGen yourself! - <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> - <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HugginFace"/> </a> - You can run MusicGen locally as well: 1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft) ``` pip install git+https://github.com/facebookresearch/audiocraft.git ``` 2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed: ``` apt get install ffmpeg ``` 3. Run the following Python code: ```py import torchaudio from audiocraft.models import MusicGen from audiocraft.data.audio import audio_write model = MusicGen.get_pretrained('melody') model.set_generation_params(duration=8) # generate 8 seconds. descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] melody, sr = torchaudio.load('./assets/bach.mp3') # generates using the melody from the given audio and the provided descriptions. wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr) for idx, one_wav in enumerate(wav): # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness") ``` ## Model details **Organization developing the model:** The FAIR team of Meta AI. **Model date:** MusicGen was trained between April 2023 and May 2023. **Model version:** This is the version 1 of the model. **Model type:** MusicGen consists of an EnCodec model for audio tokenization, an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters ; and two variants: a model trained for text-to-music generation task and a model trained for melody-guided music generation. **Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284). **Citation details:** ``` @misc{copet2023simple, title={Simple and Controllable Music Generation}, author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, year={2023}, eprint={2306.05284}, archivePrefix={arXiv}, primaryClass={cs.SD} } ``` **License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0. **Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue. ## Intended use **Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including: - Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science - Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs **Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateur seeking to better understand those models. **Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. ## Metrics **Models performance measures:** We used the following objective measure to evaluate the model on a standard music benchmark: - Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish) - Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST) - CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes: - Overall quality of the music samples; - Text relevance to the provided text input; - Adherence to the melody for melody-guided music generation. More details on performance measures and human studies can be found in the paper. **Decision thresholds:** Not applicable. ## Evaluation datasets The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set. ## Training datasets The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing. ## Evaluation results Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper. | Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity | |---|---|---|---|---| | facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - | | facebook/musicgen-medium | 5.14 | 1.38 | 0.28 | - | | facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - | | **facebook/musicgen-melody** | 4.93 | 1.41 | 0.27 | 0.44 | More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section. ## Limitations and biases **Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data, we believe that scaling the model on larger datasets can further improve the performance of the model. **Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs). **Limitations:** - The model is not able to generate realistic vocals. - The model has been trained with English descriptions and will not perform as well in other languages. - The model does not perform equally well for all music styles and cultures. - The model sometimes generates end of songs, collapsing to silence. - It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results. **Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive. **Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data. **Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
Yntec/Dreamscapes_n_Dragonfire_v2
Yntec
"2023-09-01T04:25:10Z"
2,133
1
diffusers
[ "diffusers", "safetensors", "fantasy", "art", "realistic", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "DarkAgent", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-31T11:46:19Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image language: - en tags: - fantasy - art - realistic - stable-diffusion - stable-diffusion-diffusers - text-to-image - DarkAgent inference: true --- # Dreamscape & Dragonfire 2 This model with MoistMixV2's VAE baked in. Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/4alM1BJW825NzlvsLppoV.png) Victorian pretty cute girl with mushrooms growing in a spheroid forest, 3d render, nightlight study, by jan davidsz de heem and lisa frank, DETAILED CHIBI EYES, art nouveau, 8k, extreme detail, sharp focus, octane render. professional beeple photo of a intricate, elegant, highly detailed digital photo, smooth, sharp focus, 4k Original Page: https://civitai.com/models/50294/dreamscapes-and-dragonfire-new-v20-semi-realism-fantasy-model
wkshin89/Yi-Ko-6B-Instruct-v1.1
wkshin89
"2024-02-18T13:03:50Z"
2,133
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "dataset:beomi/KoAlpaca-v1.1a", "dataset:maywell/ko_wikidata_QA", "base_model:beomi/Yi-Ko-6B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-13T03:11:49Z"
--- license: cc-by-nc-4.0 datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 - beomi/KoAlpaca-v1.1a - maywell/ko_wikidata_QA language: - ko base_model: beomi/Yi-Ko-6B --- # Yi-Ko-6B-Instruct-v1.1 ## Model Details ### Base Model [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) ### Training Dataset 1. [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) 🙇 2. [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) 🙇 3. [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) 🙇 4. AIHub 데이터 활용 ## Instruction Format ```python ### User: {instruction} ### Assistant: {response} ``` ## Loading the Model ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.1") model = AutoModelForCausalLM.from_pretrained( "wkshin89/Yi-Ko-6B-Instruct-v1.1", device_map="auto", torch_dtype=torch.bfloat16, ) ```
mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF
mradermacher
"2024-06-11T16:00:55Z"
2,133
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "MediaTek-Research/Breeze-7B-32k-Base-v1_0", "en", "base_model:win10/Breeze-13B-32k-Base-v1_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-11T10:08:19Z"
--- base_model: win10/Breeze-13B-32k-Base-v1_0 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - MediaTek-Research/Breeze-7B-32k-Base-v1_0 --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/win10/Breeze-13B-32k-Base-v1_0 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ1_S.gguf) | i1-IQ1_S | 2.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ3_M.gguf) | i1-IQ3_M | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.3 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q4_0.gguf) | i1-Q4_0 | 7.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.9 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Breeze-13B-32k-Base-v1_0-i1-GGUF/resolve/main/Breeze-13B-32k-Base-v1_0.i1-Q6_K.gguf) | i1-Q6_K | 10.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
benjamin/roberta-base-wechsel-german
benjamin
"2023-05-30T09:55:22Z"
2,132
7
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: de license: mit --- # roberta-base-wechsel-german Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. See the code here: https://github.com/CPJKU/wechsel And the paper here: https://aclanthology.org/2022.naacl-main.293/ ## Performance ### RoBERTa | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** | | `camembert-base` | 80.88 | 90.26 | 85.57 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** | | `deepset/gbert-base` | 78.64 | 89.46 | 84.05 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** | | `bert-base-chinese` | 76.55 | **82.05** | 79.30 | | Model | NLI Score | NER Score | Avg Score | |---|---|---|---| | `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** | | `xlm-roberta-base` | 69.18 | 87.37 | 78.28 | ### GPT2 | Model | PPL | |---|---| | `gpt2-wechsel-french` | **19.71** | | `gpt2` (retrained from scratch) | 20.47 | | Model | PPL | |---|---| | `gpt2-wechsel-german` | **26.8** | | `gpt2` (retrained from scratch) | 27.63 | | Model | PPL | |---|---| | `gpt2-wechsel-chinese` | **51.97** | | `gpt2` (retrained from scratch) | 52.98 | | Model | PPL | |---|---| | `gpt2-wechsel-swahili` | **10.14** | | `gpt2` (retrained from scratch) | 10.58 | See our paper for details. ## Citation Please cite WECHSEL as ``` @inproceedings{minixhofer-etal-2022-wechsel, title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models", author = "Minixhofer, Benjamin and Paischer, Fabian and Rekabsaz, Navid", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.293", pages = "3992--4006", abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.", } ```
consciousAI/question-answering-roberta-base-s-v2
consciousAI
"2023-03-20T14:19:07Z"
2,132
9
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "question-answering", "Question Answering", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-11-21T17:53:37Z"
--- license: apache-2.0 tags: - Question Answering metrics: - squad model-index: - name: consciousAI/question-answering-roberta-base-s-v2 results: [] --- # Question Answering The model is intended to be used for Q&A task, given the question & context, the model would attempt to infer the answer text, answer span & confidence score.<br> Model is encoder-only (deepset/roberta-base-squad2) with QuestionAnswering LM Head, fine-tuned on SQUADx dataset with **exact_match:** 84.83 & **f1:** 91.80 performance scores. [Live Demo: Question Answering Encoders vs Generative](https://huggingface.co/spaces/consciousAI/question_answering) Please follow this link for [Encoder based Question Answering V1](https://huggingface.co/consciousAI/question-answering-roberta-base-s/) <br>Please follow this link for [Generative Question Answering](https://huggingface.co/consciousAI/question-answering-generative-t5-v1-base-s-q-c/) Example code: ``` from transformers import pipeline model_checkpoint = "consciousAI/question-answering-roberta-base-s-v2" context = """ 🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other. """ question = "Which deep learning libraries back 🤗 Transformers?" question_answerer = pipeline("question-answering", model=model_checkpoint) question_answerer(question=question, context=context) ``` ## Training and evaluation data SQUAD Split ## Training procedure Preprocessing: 1. SQUAD Data longer chunks were sub-chunked with input context max-length 384 tokens and stride as 128 tokens. 2. Target answers readjusted for sub-chunks, sub-chunks with no-answers or partial answers were set to target answer span as (0,0) Metrics: 1. Adjusted accordingly to handle sub-chunking. 2. n best = 20 3. skip answers with length zero or higher than max answer length (30) ### Training hyperparameters Custom Training Loop: The following hyperparameters were used during training: - learning_rate: 2e-5 - train_batch_size: 32 - eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results {'exact_match': 84.83443708609272, 'f1': 91.79987545811638} ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.13.0
RichardErkhov/VAGOsolutions_-_SauerkrautLM-Gemma-7b-gguf
RichardErkhov
"2024-06-15T12:53:19Z"
2,132
0
null
[ "gguf", "region:us" ]
null
"2024-06-15T09:36:21Z"
Entry not found
TheBloke/Yi-34B-200K-GGUF
TheBloke
"2023-11-11T14:13:34Z"
2,131
27
transformers
[ "transformers", "gguf", "yi", "base_model:01-ai/Yi-34B-200K", "license:other", "region:us" ]
null
"2023-11-10T18:18:36Z"
--- base_model: 01-ai/Yi-34B-200K inference: false license: other license_link: LICENSE license_name: yi-license model_creator: 01-ai model_name: Yi 34B 200K model_type: yi prompt_template: '{prompt} ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yi 34B 200K - GGUF - Model creator: [01-ai](https://huggingface.co/01-ai) - Original model: [Yi 34B 200K](https://huggingface.co/01-ai/Yi-34B-200K) <!-- description start --> ## Description This repo contains GGUF format model files for [01-ai's Yi 34B 200K](https://huggingface.co/01-ai/Yi-34B-200K). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Yi-34B-200K-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Yi-34B-200K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF) * [01-ai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/01-ai/Yi-34B-200K) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: None ``` {prompt} ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [yi-34b-200k.Q2_K.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes | | [yi-34b-200k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss | | [yi-34b-200k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss | | [yi-34b-200k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss | | [yi-34b-200k.Q4_0.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [yi-34b-200k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss | | [yi-34b-200k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended | | [yi-34b-200k.Q5_0.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [yi-34b-200k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended | | [yi-34b-200k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended | | [yi-34b-200k.Q6_K.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss | | [yi-34b-200k.Q8_0.gguf](https://huggingface.co/TheBloke/Yi-34B-200K-GGUF/blob/main/yi-34b-200k.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Yi-34B-200K-GGUF and below it, a specific filename to download, such as: yi-34b-200k.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Yi-34B-200K-GGUF yi-34b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Yi-34B-200K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Yi-34B-200K-GGUF yi-34b-200k.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m yi-34b-200k.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Yi-34B-200K-GGUF", model_file="yi-34b-200k.Q4_K_M.gguf", model_type="yi", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: 01-ai's Yi 34B 200K <div align="center"> <img src="./Yi.svg" width="200px"> </div> ## Introduction The **Yi** series models are large language models trained from scratch by developers at [01.AI](https://01.ai/). The first public release contains two bilingual(English/Chinese) base models with the parameter sizes of 6B([`Yi-6B`](https://huggingface.co/01-ai/Yi-6B)) and 34B([`Yi-34B`](https://huggingface.co/01-ai/Yi-34B)). Both of them are trained with 4K sequence length and can be extended to 32K during inference time. The [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) are base model with 200K context length. ## News - 🎯 **2023/11/06**: The base model of [`Yi-6B-200K`](https://huggingface.co/01-ai/Yi-6B-200K) and [`Yi-34B-200K`](https://huggingface.co/01-ai/Yi-34B-200K) with 200K context length. - 🎯 **2023/11/02**: The base model of [`Yi-6B`](https://huggingface.co/01-ai/Yi-6B) and [`Yi-34B`](https://huggingface.co/01-ai/Yi-34B). ## Model Performance | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code | | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: | | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - | | LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 | | LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 | | Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 | | Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** | | Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 | | InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 | | Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - | | Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 | | Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 | | Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 | | **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 | | Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 | While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing for the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180 on QuAC and OBQA; the score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated. ## Usage Please visit our [github repository](https://github.com/01-ai/Yi) for general guidance on how to use this model. ## Disclaimer Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. ## License The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the [Model License Agreement 2.0](https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE). To apply for the official commercial license, please contact us ([[email protected]](mailto:[email protected])). <!-- original-model-card end -->
mradermacher/Marcoroni-13B-i1-GGUF
mradermacher
"2024-06-11T19:00:18Z"
2,131
0
transformers
[ "transformers", "gguf", "en", "base_model:ibivibiv/Marcoroni-13B", "endpoints_compatible", "region:us" ]
null
"2024-06-11T16:54:46Z"
--- base_model: ibivibiv/Marcoroni-13B language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ibivibiv/Marcoroni-13B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Marcoroni-13B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Marcoroni-13B-i1-GGUF/resolve/main/Marcoroni-13B.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
ivrit-ai/whisper-v2-d3-e3
ivrit-ai
"2024-05-14T08:56:19Z"
2,130
7
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "audio", "hf-asr-leaderboard", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "el", "ms", "cs", "ro", "da", "hu", "ta", "no", "th", "ur", "hr", "bg", "lt", "la", "mi", "ml", "cy", "sk", "te", "fa", "lv", "bn", "sr", "az", "sl", "kn", "et", "mk", "br", "eu", "is", "hy", "ne", "mn", "bs", "kk", "sq", "sw", "gl", "mr", "pa", "si", "km", "sn", "yo", "so", "af", "oc", "ka", "be", "tg", "sd", "gu", "am", "yi", "lo", "uz", "fo", "ht", "ps", "tk", "nn", "mt", "sa", "lb", "my", "bo", "tl", "mg", "as", "tt", "haw", "ln", "ha", "ba", "jw", "su", "dataset:ivrit-ai/whisper-training", "arxiv:2307.08720", "arxiv:2212.04356", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-05-08T06:59:36Z"
--- language: - en - zh - de - es - ru - ko - fr - ja - pt - tr - pl - ca - nl - ar - sv - it - id - hi - fi - vi - he - uk - el - ms - cs - ro - da - hu - ta - 'no' - th - ur - hr - bg - lt - la - mi - ml - cy - sk - te - fa - lv - bn - sr - az - sl - kn - et - mk - br - eu - is - hy - ne - mn - bs - kk - sq - sw - gl - mr - pa - si - km - sn - yo - so - af - oc - ka - be - tg - sd - gu - am - yi - lo - uz - fo - ht - ps - tk - nn - mt - sa - lb - my - bo - tl - mg - as - tt - haw - ln - ha - ba - jw - su tags: - audio - automatic-speech-recognition - hf-asr-leaderboard widget: - example_title: Librispeech sample 1 src: https://cdn-media.huggingface.co/speech_samples/sample1.flac - example_title: Librispeech sample 2 src: https://cdn-media.huggingface.co/speech_samples/sample2.flac pipeline_tag: automatic-speech-recognition license: apache-2.0 datasets: - ivrit-ai/whisper-training --- # Whisper Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. More details about it are available [here](https://huggingface.co/openai/whisper-large-v2). **whisper-v2-d3-e3** is a version of whisper-large-v2, fine-tuned by [ivrit.ai](https://www.ivrit.ai) to improve Hebrew ASR using crowd-sourced labeling. ## Model details This model comes as a single checkpoint, whisper-v2-d3-e3. It is a 1550M parameters multi-lingual ASR solution. # Usage To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor). ```python import torch from transformers import WhisperProcessor, WhisperForConditionalGeneration SAMPLING_RATE = 16000 has_cuda = torch.cuda.is_available() model_path = 'ivrit-ai/whisper-v2-d3-e3' model = WhisperForConditionalGeneration.from_pretrained(model_path) if has_cuda: model.to('cuda:0') processor = WhisperProcessor.from_pretrained(model_path) # audio_resample based on entry being part of an existing dataset. # Alternatively, this can be loaded from an audio file. audio_resample = librosa.resample(entry['audio']['array'], orig_sr=entry['audio']['sampling_rate'], target_sr=SAMPLING_RATE) input_features = processor(audio_resample, sampling_rate=SAMPLING_RATE, return_tensors="pt").input_features if has_cuda: input_features = input_features.to('cuda:0') predicted_ids = model.generate(input_features, language='he', num_beams=5) transcript = processor.batch_decode(predicted_ids, skip_special_tokens=True) print(f'Transcript: {transcription[0]}') ``` ## Evaluation You can use the [evaluate_model.py](https://github.com/yairl/ivrit.ai/blob/master/evaluate_model.py) reference on GitHub to evalute the model's quality. ## Long-Form Transcription The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`: ```python >>> import torch >>> from transformers import pipeline >>> from datasets import load_dataset >>> device = "cuda:0" if torch.cuda.is_available() else "cpu" >>> pipe = pipeline( >>> "automatic-speech-recognition", >>> model="ivrit-ai/whisper-v2-d3-e3", >>> chunk_length_s=30, >>> device=device, >>> ) >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> sample = ds[0]["audio"] >>> prediction = pipe(sample.copy(), batch_size=8)["text"] " Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel." >>> # we can also return timestamps for the predictions >>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"] [{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.', 'timestamp': (0.0, 5.44)}] ``` Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm. ### BibTeX entry and citation info **ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development** ```bibtex @misc{marmor2023ivritai, title={ivrit.ai: A Comprehensive Dataset of Hebrew Speech for AI Research and Development}, author={Yanir Marmor and Kinneret Misgav and Yair Lifshitz}, year={2023}, eprint={2307.08720}, archivePrefix={arXiv}, primaryClass={eess.AS} } ``` **Whisper: Robust Speech Recognition via Large-Scale Weak Supervision** ```bibtex @misc{radford2022whisper, doi = {10.48550/ARXIV.2212.04356}, url = {https://arxiv.org/abs/2212.04356}, author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya}, title = {Robust Speech Recognition via Large-Scale Weak Supervision}, publisher = {arXiv}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ```
KoboldAI/GPT-J-6B-Shinen
KoboldAI
"2022-03-20T18:48:45Z"
2,129
18
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "en", "arxiv:2101.00027", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:04Z"
--- language: en license: mit --- # GPT-J 6B - Shinen ## Model Description GPT-J 6B-Shinen is a finetune created using EleutherAI's GPT-J 6B model. Compared to GPT-Neo-2.7-Horni, this model is much heavier on the sexual content. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.** ## Training data The training data contains user-generated stories from sexstories.com. All stories are tagged using the following way: ``` [Theme: <theme1>, <theme2> ,<theme3>] <Story goes here> ``` ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='KoboldAI/GPT-J-6B-Shinen') >>> generator("She was staring at me", do_sample=True, min_length=50) [{'generated_text': 'She was staring at me with a look that said it all. She wanted me so badly tonight that I wanted'}] ``` ### Limitations and Biases The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output. GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model uses the following model as base: ```bibtex @misc{gpt-j, author = {Wang, Ben and Komatsuzaki, Aran}, title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}}, howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}}, year = 2021, month = May } ``` ## Acknowledgements This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
abertsch/unlimiformer-bart-govreport-alternating
abertsch
"2023-07-21T14:32:11Z"
2,129
1
transformers
[ "transformers", "pytorch", "bart", "feature-extraction", "text2text-generation", "dataset:ccdv/govreport-summarization", "dataset:urialon/gov_report_validation", "dataset:urialon/gov_report_test", "arxiv:2305.01625", "region:us" ]
text2text-generation
"2023-05-03T14:54:11Z"
--- datasets: - ccdv/govreport-summarization - urialon/gov_report_validation - urialon/gov_report_test inference: false pipeline_tag: text2text-generation --- Model from the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625) This is a BART-base model finetuned using the Unlimiformer alternating-training method, as described in section 3.2 of the paper. The model was finetuned on GovReport using the data processing pipeline from SLED; to load the validation or test set for use with these model, please use the datasets [urialon/gov_report_validation](https://huggingface.co/datasets/urialon/gov_report_validation) and [urialon/gov_report_test](https://huggingface.co/datasets/urialon/gov_report_test). This is the strongest of the Unlimiformer models on this dataset. *The inference demo is disabled because you must add the Unlimiformer files to your repo before this model can handle unlimited length input!* See the [Unlimiformer GitHub](https://github.com/abertsch72/unlimiformer) for setup instructions.
BaunRobotics/tinybaun-k12-latest-GGUF
BaunRobotics
"2024-06-20T11:24:12Z"
2,129
1
null
[ "gguf", "region:us" ]
null
"2024-06-14T11:47:10Z"
Entry not found
nyxia/dynavision-xl
nyxia
"2023-08-07T13:39:57Z"
2,128
2
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2023-08-07T12:58:56Z"
Not official! This are diffusers weights for https://civitai.com/models/122606
ddh0/Yi-6B-200K-GGUF-fp16
ddh0
"2024-06-25T03:13:23Z"
2,128
3
null
[ "gguf", "text-generation", "license:apache-2.0", "region:us" ]
text-generation
"2023-11-06T18:26:55Z"
--- pipeline_tag: text-generation license: apache-2.0 --- This is 01-ai's [Yi-6B-200K](https://huggingface.co/01-ai/Yi-6B-200K), converted to GGUF without quantization. No other changes were made. The model was converted using `convert.py` from Georgi Gerganov's llama.cpp repo as it appears [here](https://github.com/ggerganov/llama.cpp/blob/898aeca90a9bb992f506234cf3b8b7f7fa28a1df/convert.py) (that is, the last change to the file was in commit `#898aeca`.) All credit belongs to [01-ai](https://huggingface.co/01-ai) for training and releasing this model. Thank you!
yuzhaouoe/UniChunk
yuzhaouoe
"2024-06-13T15:19:34Z"
2,128
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-28T00:43:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
legraphista/Qwen2-0.5B-Instruct-IMat-GGUF
legraphista
"2024-06-06T18:15:00Z"
2,128
0
gguf
[ "gguf", "chat", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "base_model:Qwen/Qwen2-0.5B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
"2024-06-06T18:10:00Z"
--- base_model: Qwen/Qwen2-0.5B-Instruct inference: false language: - en library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - chat - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # Qwen2-0.5B-Instruct-IMat-GGUF _Llama.cpp imatrix quantization of Qwen/Qwen2-0.5B-Instruct_ Original Model: [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3091](https://github.com/ggerganov/llama.cpp/releases/tag/b3091) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Qwen2-0.5B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 531.07MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q6_K.gguf) | Q6_K | 505.73MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q4_K.gguf) | Q4_K | 397.81MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q3_K.gguf) | Q3_K | 355.46MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q2_K.gguf) | Q2_K | 338.60MB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [Qwen2-0.5B-Instruct.BF16.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.BF16.gguf) | BF16 | 994.15MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.FP16.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.FP16.gguf) | F16 | 994.15MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q8_0.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q8_0.gguf) | Q8_0 | 531.07MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q6_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q6_K.gguf) | Q6_K | 505.73MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q5_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q5_K.gguf) | Q5_K | 420.08MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q5_K_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q5_K_S.gguf) | Q5_K_S | 412.71MB | ✅ Available | ⚪ Static | 📦 No | [Qwen2-0.5B-Instruct.Q4_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q4_K.gguf) | Q4_K | 397.81MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q4_K_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q4_K_S.gguf) | Q4_K_S | 385.47MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ4_NL.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ4_NL.gguf) | IQ4_NL | 352.67MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ4_XS.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ4_XS.gguf) | IQ4_XS | 349.40MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q3_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q3_K.gguf) | Q3_K | 355.46MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q3_K_L.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q3_K_L.gguf) | Q3_K_L | 369.36MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q3_K_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q3_K_S.gguf) | Q3_K_S | 338.26MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ3_M.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ3_M.gguf) | IQ3_M | 342.75MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ3_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ3_S.gguf) | IQ3_S | 338.60MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ3_XS.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ3_XS.gguf) | IQ3_XS | 338.60MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ3_XXS.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ3_XXS.gguf) | IQ3_XXS | 333.70MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q2_K.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q2_K.gguf) | Q2_K | 338.60MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.Q2_K_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.Q2_K_S.gguf) | Q2_K_S | 331.05MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ2_M.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ2_M.gguf) | IQ2_M | 328.59MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ2_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ2_S.gguf) | IQ2_S | 325.73MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ2_XS.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ2_XS.gguf) | IQ2_XS | 324.41MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ2_XXS.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ2_XXS.gguf) | IQ2_XXS | 321.55MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ1_M.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ1_M.gguf) | IQ1_M | 317.97MB | ✅ Available | 🟢 IMatrix | 📦 No | [Qwen2-0.5B-Instruct.IQ1_S.gguf](https://huggingface.co/legraphista/Qwen2-0.5B-Instruct-IMat-GGUF/blob/main/Qwen2-0.5B-Instruct.IQ1_S.gguf) | IQ1_S | 315.83MB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/Qwen2-0.5B-Instruct-IMat-GGUF --include "Qwen2-0.5B-Instruct.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/Qwen2-0.5B-Instruct-IMat-GGUF --include "Qwen2-0.5B-Instruct.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user {user_prompt}<|im_end|> <|im_start|>assistant {assistant_response}<|im_end|> <|im_start|>user {next_user_prompt}<|im_end|> ``` ### Chat template with system prompt ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {user_prompt}<|im_end|> <|im_start|>assistant {assistant_response}<|im_end|> <|im_start|>user {next_user_prompt}<|im_end|> ``` ### Llama.cpp ``` llama.cpp/main -m Qwen2-0.5B-Instruct.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `Qwen2-0.5B-Instruct.Q8_0`) 3. Run `gguf-split --merge Qwen2-0.5B-Instruct.Q8_0/Qwen2-0.5B-Instruct.Q8_0-00001-of-XXXXX.gguf Qwen2-0.5B-Instruct.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
mradermacher/Fox-1-1.6B-GGUF
mradermacher
"2024-06-13T10:14:46Z"
2,128
0
transformers
[ "transformers", "gguf", "en", "base_model:tensoropera/Fox-1-1.6B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-13T10:09:06Z"
--- base_model: tensoropera/Fox-1-1.6B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/tensoropera/Fox-1-1.6B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q2_K.gguf) | Q2_K | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.IQ3_XS.gguf) | IQ3_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.IQ3_S.gguf) | IQ3_S | 1.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.IQ3_M.gguf) | IQ3_M | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q3_K_M.gguf) | Q3_K_M | 1.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.IQ4_XS.gguf) | IQ4_XS | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q5_K_S.gguf) | Q5_K_S | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q5_K_M.gguf) | Q5_K_M | 1.3 | | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q6_K.gguf) | Q6_K | 1.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Fox-1-1.6B-GGUF/resolve/main/Fox-1-1.6B.f16.gguf) | f16 | 3.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Lewdiculous/Nyanade_Stunna-Maid-7B-v0.2-GGUF-IQ-Imatrix
Lewdiculous
"2024-05-04T14:47:47Z"
2,127
39
null
[ "gguf", "quantized", "roleplay", "multimodal", "vision", "llava", "sillytavern", "merge", "mistral", "conversational", "license:other", "region:us" ]
null
"2024-04-15T00:19:54Z"
--- inference: false tags: - gguf - quantized - roleplay - multimodal - vision - llava - sillytavern - merge - mistral - conversational license: other --- > [!TIP] > **Support:** <br> > My upload speeds have been cooked and unstable lately. <br> > Realistically I'd need to move to get a better provider. <br> > If you **want** and you are able to... <br> > [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br> > I apologize for disrupting your experience. # #Roleplay #Multimodal #Vision #Based #Unhinged #Unaligned In this repository you can find **GGUF-IQ-Imatrix** quants for [ChaoticNeutrals/Nyanade_Stunna-Maid-7B-v0.2](https://huggingface.co/ChaoticNeutrals/Nyanade_Stunna-Maid-7B-v0.2) and if needed you can get some basic SillyTavern presets [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/lewdicu-3.0.2-mistral-0.2), if you have issues with repetitiveness or lack or variety in responses I recommend changing the **Temperature** to 1.15, **MinP** to 0.075, **RepPen** to 1.15 and **RepPenRange** to 1024. > [!TIP] > **Vision:** <br> > This is a **#multimodal** model that also has optional **#vision** capabilities. <br> Expand the relevant sections bellow and read the full card information if you also want to make use that functionality. > > **Quant options:** <br> > Reading bellow you can also find quant option recommendations for some common GPU VRAM capacities. **"Unhinged RP with the spice of the previous 0.420 remixes, 32k context and vision capabilities."** ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/RCayoAQGai1vMfneovnk2.jpeg) # General recommendations for quant options: <details><summary> ⇲ Click here to expand/hide general common recommendations. </summary> *Assuming a context size of 8192 for simplicity and 1GB of Operating System VRAM overhead with some safety margin to avoid overflowing buffers...* <br> <br> **For 11-12GB VRAM:** <br> A GPU with **11-12GB** of VRAM capacity can comfortably use the **Q6_K-imat** quant option and run it at good speeds. <br> This is the same with or without using #vision capabilities. <br> <br> **For 8GB VRAM:** <br> If not using #vision, for GPUs with **8GB** of VRAM capacity the **Q5_K_M-imat** quant option will fit comfortably and should run at good speeds. <br> If **you are** also using #vision from this model opt for the **Q4_K_M-imat** quant option to avoid filling the buffers and potential slowdowns. <br><br> **For 6GB VRAM:** <br> If not using #vision, for GPUs with **6GB** of VRAM capacity the **IQ3_M-imat** quant option should fit comfortably to run at good speeds. <br> If **you are** also using #vision from this model opt for the **IQ3_XXS-imat** quant option. <br><br> </details><br> # Quantization process information: <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> ```python quantization_options = [ "IQ3_M", "IQ3_XXS", "Q4_K_M", "Q4_K_S", "IQ4_XS", "IQ4_NL", "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0" ] ``` **Steps performed:** ``` Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants) ``` The latest of **llama.cpp** available at the time was used, with [imatrix-with-rp-ex.txt](https://huggingface.co/Lewdiculous/Model-Requests/blob/main/data/imatrix/imatrix-with-rp-ex.txt) as calibration data. </details><br> # What does "Imatrix" mean? <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) > [!NOTE] > For imatrix data generation, kalomaze's `groups_merged.txt` with additional roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Model-Requests/blob/main/data/imatrix/imatrix-with-rp-ex.txt) for reference. This was just to add a bit more diversity to the data with the intended use case in mind. </details><br> # Vision/multimodal capabilities: <details><summary> ⇲ Click here to expand/hide how this would work in practice in a roleplay chat. </summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/NtDLpyv0WY2yT1OWaDfzh.png) </details><br> <details><summary> ⇲ Click here to expand/hide how your SillyTavern Image Captions extension settings should look. </summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/ayOpP2AdKr15lOugIwa3U.png) </details><br> # Required for vision functionality: > [!WARNING] > To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file, you can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or as uploaded in the **mmproj** folder in the repository. 1: Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp). 2: Load the **mmproj file** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/3bAsQJsSp69dHbe7sxxem.png) 2.1: For **CLI** users, you can load the **mmproj file** by adding the respective flag to your usual command: ``` --mmproj your-mmproj-file.gguf ```
uukuguy/speechless-llama2-hermes-orca-platypus-13b
uukuguy
"2023-11-18T14:12:59Z"
2,125
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2307.09288", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-01T17:04:50Z"
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**" extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en datasets: - garage-bAInd/Open-Platypus library_name: transformers pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- <p><h1> speechless-llama2-hermes-orca-platypus-13b </h1></p> speechless-llama2-hermes-orca-platypus-13b is a merge of NousResearch/Nous-Hermes-Llama2-13b and Open-Orca/OpenOrca-Platypus2-13B. | Metric | Value | | --- | --- | | ARC | 60.92 | | HellaSwag | 83.50 | | MMLU | 59.39 | | TruthfulQA | 54.29 | | Average | 64.52 | # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](arxiv.org/abs/2307.09288) ## Intended Use **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws).Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. ## Evaluation Results In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide) ## Reporting Issues Please report any software “bug,” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Llama Model Index |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf| |---|---|---|---|---| |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)| |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)| |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b) | Metric | Value | |-----------------------|---------------------------| | Avg. | 50.84 | | ARC (25-shot) | 60.92 | | HellaSwag (10-shot) | 83.5 | | MMLU (5-shot) | 59.39 | | TruthfulQA (0-shot) | 54.29 | | Winogrande (5-shot) | 75.22 | | GSM8K (5-shot) | 9.7 | | DROP (3-shot) | 12.84 |
Severian/Jamba-Nexus-4xMoE
Severian
"2024-05-25T12:51:14Z"
2,125
10
transformers
[ "transformers", "safetensors", "jamba", "text-generation", "text-generation-inference", "mixture of experts", "custom_code", "en", "dataset:Severian/Internal-Knowledge-Map", "base_model:ai21labs/Jamba-v0.1", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-03-29T02:14:13Z"
--- language: - en license: mit tags: - text-generation-inference - transformers - mixture of experts - jamba datasets: - Severian/Internal-Knowledge-Map base_model: ai21labs/Jamba-v0.1 pipeline_tag: text-generation --- <img src="https://cdn-uploads.huggingface.co/production/uploads/64740cf7485a7c8e1bd51ac9/SwdXRoyi08neRiI8pJrYI.webp" width="500" height="500"> # Jamba-Nexus-IKM ## This model has been trained on the Internal Knowledge Map dataset. --- ## Example Output: ``` <|startoftext|><|im_start|>system You are a helpful AI assistant. <|im_end|> <|im_start|>user How could we use cheese to reignite the sun? <|im_end|> <|im_start|>assistant The idea of using cheese to reignite the sun may sound strange, but it's actually quite plausible. Here's how it works: 1. Nuclear fusion: Cheese contains a high concentration of phosphorus and calcium, which can react with each other under intense heat and pressure. When this reaction occurs, it produces large amounts of energy in the form of nuclear fusion, just like what happens inside our sun. 2. Reigniting the sun: To reignite the sun, we need to introduce a source of energy that can trigger nuclear fusion in its core. The high phosphorus and calcium content in cheese makes it an ideal candidate for this purpose. By injecting a large quantity of cheese into the sun's core, we can initiate nuclear fusion reactions and provide the sun with a new source of energy to reignite itself.<|im_end|> <|endoftext|> ``` --- ## Inference ```py !pip install -qqq transformers>=4.39.0 mamba-ssm causal-conv1d>=1.2.0 accelerate bitsandbytes --progress-bar off !pip install flash-attn --no-build-isolation import torch from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig double_quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.float16 ) model = AutoModelForCausalLM.from_pretrained( "Severian/Jamba-Nexus-IKM-v1", device_map="auto", trust_remote_code=True, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", quantization_config=double_quant_config, ) tokenizer = AutoTokenizer.from_pretrained("Severian/Jamba-Nexus-IKM-v1") input_text = """<|im_start|>system You are a helpful AI assistant. <|im_end|> <|im_start|>user How could we use cheese to reignite the sun? <|im_end|> <|im_start|>assistant """ input_ids = tokenizer(input_text, return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=1024, temperature=0.0, repetition_penalty=1.1) print(tokenizer.batch_decode(outputs)[0]) # <|startoftext|><|im_start|>system # You are a helpful AI assistant. # <|im_end|> # <|im_start|>user # How could we use cheese to reignite the sun? # <|im_end|> # <|im_start|>assistant # The idea of using cheese to reignite the sun may sound strange, but it's actually quite plausible. Here's how it works: 1. Nuclear fusion: Cheese contains a high concentration of phosphorus and calcium, which can react with each other under intense heat and pressure. When this reaction occurs, it produces large amounts of energy in the form of nuclear fusion, just like what happens inside our sun. 2. Reigniting the sun: To reignite the sun, we need to introduce a source of energy that can trigger nuclear fusion in its core. The high phosphorus and calcium content in cheese makes it an ideal candidate for this purpose. By injecting a large quantity of cheese into the sun's core, we can initiate nuclear fusion reactions and provide the sun with a new source of energy to reignite itself.<|im_end|> # <|endoftext|> ``` ``` [383/1171 33:25 < 1:09:07, 0.19 it/s, Epoch 0.33/1] Step Training Loss 1 10.680900 2 10.793200 3 8.870600 4 8.817300 5 13.537700 6 14.457900 7 14.419900 8 13.235300 9 10.764000 10 10.614000 11 12.617900 12 11.241100 13 10.644600 14 11.787900 15 11.430500 16 11.913600 17 10.418000 18 9.867500 19 9.392300 20 8.825400 21 8.238000 22 8.030900 23 7.902800 24 8.247100 25 7.871800 26 7.040200 27 8.326700 28 7.478000 29 6.724300 30 6.646100 31 6.375500 32 6.677100 33 7.157500 34 5.913300 35 6.432800 36 6.342500 37 5.987400 38 5.893300 39 5.194400 40 5.260600 41 5.697200 42 5.065100 43 4.868600 44 5.102600 45 4.660700 46 6.133700 47 4.706000 48 4.598300 49 4.569700 50 4.546100 51 4.799700 52 4.632400 53 4.342000 54 4.338600 55 5.103600 56 5.415300 57 5.488200 58 6.379000 59 4.440300 60 5.374200 61 5.150200 62 4.162400 63 4.020500 64 3.953600 65 4.621100 66 3.870800 67 4.863500 68 4.967800 69 3.887500 70 3.848400 71 3.681100 72 3.571800 73 3.585700 74 4.433200 75 4.752700 76 4.151600 77 3.193300 78 4.800000 79 3.036500 80 2.827300 81 4.570700 82 2.903900 83 5.724400 84 5.984600 85 4.146200 86 2.905400 87 3.950700 88 2.650200 89 3.064800 90 3.072800 91 3.083100 92 2.970900 93 4.492900 94 2.664900 95 2.507200 96 2.549800 97 2.476700 98 2.548200 99 3.978200 100 2.654500 101 2.478400 102 4.039500 103 2.201600 104 2.030600 105 1.993000 106 1.773600 107 4.248400 108 1.777600 109 3.311100 110 1.720900 111 5.827900 112 1.679600 113 3.789200 114 1.593900 115 1.241600 116 1.306900 117 5.464400 118 1.536000 119 1.328700 120 1.132500 121 1.144900 122 0.923600 123 0.690700 124 1.142500 125 5.850100 126 1.102200 127 0.939700 128 0.727700 129 3.941400 130 0.791900 131 0.662900 132 3.319800 133 0.623900 134 0.521800 135 0.375600 136 0.302900 137 0.225400 138 2.994300 139 0.214300 140 0.229000 141 2.751600 142 0.298000 143 0.227500 144 2.300500 145 0.180900 146 0.629700 147 0.420900 148 2.648600 149 1.837600 150 0.524800 ... 1148 0.004700 ```
mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF
mradermacher
"2024-06-11T17:08:50Z"
2,125
0
transformers
[ "transformers", "gguf", "llm", "llama", "spellcheck", "grammar", "en", "base_model:FPHam/Sydney_Overthinker_13b_HF", "license:llama2", "endpoints_compatible", "region:us" ]
null
"2024-06-11T15:01:51Z"
--- base_model: FPHam/Sydney_Overthinker_13b_HF language: - en library_name: transformers license: llama2 quantized_by: mradermacher tags: - llm - llama - spellcheck - grammar --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/FPHam/Sydney_Overthinker_13b_HF <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Sydney_Overthinker_13b_HF-i1-GGUF/resolve/main/Sydney_Overthinker_13b_HF.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
stablediffusionapi/albedobase-xl-20
stablediffusionapi
"2024-01-17T09:57:42Z"
2,123
2
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-01-17T09:54:55Z"
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # AlbedoBase XL 2.0 API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/4937728781705485126.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "albedobase-xl-20" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/albedobase-xl-20) Model link: [View model](https://modelslab.com/models/albedobase-xl-20) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "albedobase-xl-20", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
bigcode/starcoder2-15b-instruct-v0.1
bigcode
"2024-06-05T19:52:35Z"
2,123
92
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "code", "conversational", "dataset:bigcode/self-oss-instruct-sc2-exec-filter-50k", "base_model:bigcode/starcoder2-15b", "license:bigcode-openrail-m", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-23T14:42:03Z"
--- pipeline_tag: text-generation base_model: bigcode/starcoder2-15b datasets: - bigcode/self-oss-instruct-sc2-exec-filter-50k license: bigcode-openrail-m library_name: transformers tags: - code model-index: - name: starcoder2-15b-instruct-v0.1 results: - task: type: text-generation dataset: name: LiveCodeBench (code generation) type: livecodebench-codegeneration metrics: - type: pass@1 value: 20.4 - task: type: text-generation dataset: name: LiveCodeBench (self repair) type: livecodebench-selfrepair metrics: - type: pass@1 value: 20.9 - task: type: text-generation dataset: name: LiveCodeBench (test output prediction) type: livecodebench-testoutputprediction metrics: - type: pass@1 value: 29.8 - task: type: text-generation dataset: name: LiveCodeBench (code execution) type: livecodebench-codeexecution metrics: - type: pass@1 value: 28.1 - task: type: text-generation dataset: name: HumanEval type: humaneval metrics: - type: pass@1 value: 72.6 - task: type: text-generation dataset: name: HumanEval+ type: humanevalplus metrics: - type: pass@1 value: 63.4 - task: type: text-generation dataset: name: MBPP type: mbpp metrics: - type: pass@1 value: 75.2 - task: type: text-generation dataset: name: MBPP+ type: mbppplus metrics: - type: pass@1 value: 61.2 - task: type: text-generation dataset: name: DS-1000 type: ds-1000 metrics: - type: pass@1 value: 40.6 --- # StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation ![Banner](https://huggingface.co/datasets/bigcode/starcoder2-instruct-assets/resolve/main/banner.png) ## Model Summary We introduce StarCoder2-15B-Instruct-v0.1, the very first entirely self-aligned code Large Language Model (LLM) trained with a fully permissive and transparent pipeline. Our open-source pipeline uses StarCoder2-15B to generate thousands of instruction-response pairs, which are then used to fine-tune StarCoder-15B itself without any human annotations or distilled data from huge and proprietary LLMs. - **Model:** [bigcode/starcoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-instruct-15b-v0.1) - **Code:** [bigcode-project/starcoder2-self-align](https://github.com/bigcode-project/starcoder2-self-align) - **Dataset:** [bigcode/self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k/) - **Authors:** [Yuxiang Wei](https://yuxiang.cs.illinois.edu), [Federico Cassano](https://federico.codes/), [Jiawei Liu](https://jw-liu.xyz), [Yifeng Ding](https://yifeng-ding.com), [Naman Jain](https://naman-ntc.github.io), [Harm de Vries](https://www.harmdevries.com), [Leandro von Werra](https://twitter.com/lvwerra), [Arjun Guha](https://www.khoury.northeastern.edu/home/arjunguha/main/home/), [Lingming Zhang](https://lingming.cs.illinois.edu). ![self-alignment pipeline](https://huggingface.co/datasets/bigcode/starcoder2-instruct-assets/resolve/main/method.png) ## Use ### Intended use The model is designed to respond to **coding-related instructions in a single turn**. Instructions in other styles may result in less accurate responses. Here is an example to get started with the model using the [transformers](https://huggingface.co/docs/transformers/index) library: ```python import transformers import torch pipeline = transformers.pipeline( model="bigcode/starcoder2-15b-instruct-v0.1", task="text-generation", torch_dtype=torch.bfloat16, device_map="auto", ) def respond(instruction: str, response_prefix: str) -> str: messages = [{"role": "user", "content": instruction}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False) prompt += response_prefix teminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("###"), ] result = pipeline( prompt, max_length=256, num_return_sequences=1, do_sample=False, eos_token_id=teminators, pad_token_id=pipeline.tokenizer.eos_token_id, truncation=True, ) response = response_prefix + result[0]["generated_text"][len(prompt) :].split("###")[0].rstrip() return response instruction = "Write a quicksort function in Python with type hints and a 'less_than' parameter for custom sorting criteria." response_prefix = "" print(respond(instruction, response_prefix)) ``` Here is the expected output: `````` Here's how you can implement a quicksort function in Python with type hints and a 'less_than' parameter for custom sorting criteria: ```python from typing import TypeVar, Callable T = TypeVar('T') def quicksort(items: list[T], less_than: Callable[[T, T], bool] = lambda x, y: x < y) -> list[T]: if len(items) <= 1: return items pivot = items[0] less = [x for x in items[1:] if less_than(x, pivot)] greater = [x for x in items[1:] if not less_than(x, pivot)] return quicksort(less, less_than) + [pivot] + quicksort(greater, less_than) ``` `````` ### Bias, Risks, and Limitations StarCoder2-15B-Instruct-v0.1 is primarily finetuned for Python code generation tasks that can be verified through execution, which may lead to certain biases and limitations. For example, the model might not adhere strictly to instructions that dictate the output format. In these situations, it's beneficial to provide a **response prefix** or a **one-shot example** to steer the model’s output. Additionally, the model may have limitations with other programming languages and out-of-domain coding tasks. The model also inherits the bias, risks, and limitations from its base StarCoder2-15B model. For more information, please refer to the [StarCoder2-15B model card](https://huggingface.co/bigcode/starcoder2-15b). ## Evaluation on EvalPlus, LiveCodeBench, and DS-1000 ![EvalPlus](https://huggingface.co/datasets/bigcode/starcoder2-instruct-assets/resolve/main/evalplus.png) ![LiveCodeBench and DS-1000](https://huggingface.co/datasets/bigcode/starcoder2-instruct-assets/resolve/main/lcb-ds1000.png) ## Training Details ### Hyperparameters - **Optimizer:** Adafactor - **Learning rate:** 1e-5 - **Epoch:** 4 - **Batch size:** 64 - **Warmup ratio:** 0.05 - **Scheduler:** Linear - **Sequence length:** 1280 - **Dropout**: Not applied ### Hardware 1 x NVIDIA A100 80GB ## Resources - **Model:** [bigcode/starCoder2-15b-instruct-v0.1](https://huggingface.co/bigcode/starcoder2-instruct-15b-v0.1) - **Code:** [bigcode-project/starcoder2-self-align](https://github.com/bigcode-project/starcoder2-self-align) - **Dataset:** [bigcode/self-oss-instruct-sc2-exec-filter-50k](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k/) ### Full Data Pipeline Our dataset generation pipeline has several steps. We provide intermediate datasets for every step of the pipeline: 1. Original seed dataset filtered from The Stack v1: https://huggingface.co/datasets/bigcode/python-stack-v1-functions-filtered 2. Seed dataset filtered using StarCoder2-15B as a judge for removing items with bad docstrings: https://huggingface.co/datasets/bigcode/python-stack-v1-functions-filtered-sc2 3. seed -> concepts: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-concepts 4. concepts -> instructions: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-instructions 5. instructions -> response: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-responses-unfiltered 6. Responses filtered by executing them: https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-500k-raw 7. Executed responses filtered by deduplicating them (final dataset): https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k
yujiepan/llama-2-tiny-random
yujiepan
"2024-04-19T13:22:10Z"
2,122
1
transformers
[ "transformers", "pytorch", "safetensors", "openvino", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-21T05:31:01Z"
--- library_name: transformers pipeline_tag: text-generation inference: true widget: - text: Hello! example_title: Hello world group: Python --- # yujiepan/llama-2-tiny-random This model is **randomly initialized**, using the config from [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/yujiepan/llama-2-tiny-random/blob/main/config.json) but with the following modifications: ```json { "hidden_size": 8, "intermediate_size": 32, "num_attention_heads": 2, "num_hidden_layers": 1, "num_key_value_heads": 2, } ```
madushan99/phi3_finetuned
madushan99
"2024-06-25T16:16:59Z"
2,122
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-25T16:09:04Z"
--- base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf --- # Uploaded model - **Developed by:** madushan99 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
flemmingmiguel/MBX-7B-v3
flemmingmiguel
"2024-01-30T06:44:36Z"
2,121
7
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "flemmingmiguel/MBX-7B", "flemmingmiguel/MBX-7B-v3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-28T22:23:45Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - flemmingmiguel/MBX-7B - flemmingmiguel/MBX-7B-v3 --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64961bb138e398e7242bda1f/BUJtBI1LUraJPc3fosKFk.jpeg) # MBX-7B-v3 MBX-7B-v3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) [Quantized GGUF](https://huggingface.co/flemmingmiguel/MBX-7B-v3-GGUF) ## 🧩 Configuration ```yaml slices: - sources: - model: flemmingmiguel/MBX-7B layer_range: [0, 32] - model: flemmingmiguel/MBX-7B-v3 layer_range: [0, 32] merge_method: slerp base_model: flemmingmiguel/MBX-7B-v3 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.45 # fallback for rest of tensors dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "flemmingmiguel/MBX-7B-v3" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf
RichardErkhov
"2024-06-17T06:40:58Z"
2,121
0
null
[ "gguf", "region:us" ]
null
"2024-06-17T05:10:17Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) neuronovo-9B-v0.3 - GGUF - Model creator: https://huggingface.co/Neuronovo/ - Original model: https://huggingface.co/Neuronovo/neuronovo-9B-v0.3/ | Name | Quant method | Size | | ---- | ---- | ---- | | [neuronovo-9B-v0.3.Q2_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q2_K.gguf) | Q2_K | 3.13GB | | [neuronovo-9B-v0.3.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [neuronovo-9B-v0.3.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.IQ3_S.gguf) | IQ3_S | 3.67GB | | [neuronovo-9B-v0.3.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [neuronovo-9B-v0.3.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.IQ3_M.gguf) | IQ3_M | 3.79GB | | [neuronovo-9B-v0.3.Q3_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q3_K.gguf) | Q3_K | 4.05GB | | [neuronovo-9B-v0.3.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [neuronovo-9B-v0.3.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [neuronovo-9B-v0.3.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [neuronovo-9B-v0.3.Q4_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q4_0.gguf) | Q4_0 | 4.74GB | | [neuronovo-9B-v0.3.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [neuronovo-9B-v0.3.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [neuronovo-9B-v0.3.Q4_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q4_K.gguf) | Q4_K | 5.04GB | | [neuronovo-9B-v0.3.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [neuronovo-9B-v0.3.Q4_1.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q4_1.gguf) | Q4_1 | 5.26GB | | [neuronovo-9B-v0.3.Q5_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q5_0.gguf) | Q5_0 | 5.77GB | | [neuronovo-9B-v0.3.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [neuronovo-9B-v0.3.Q5_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q5_K.gguf) | Q5_K | 5.93GB | | [neuronovo-9B-v0.3.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [neuronovo-9B-v0.3.Q5_1.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q5_1.gguf) | Q5_1 | 6.29GB | | [neuronovo-9B-v0.3.Q6_K.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q6_K.gguf) | Q6_K | 6.87GB | | [neuronovo-9B-v0.3.Q8_0.gguf](https://huggingface.co/RichardErkhov/Neuronovo_-_neuronovo-9B-v0.3-gguf/blob/main/neuronovo-9B-v0.3.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- license: apache-2.0 datasets: - Intel/orca_dpo_pairs - mlabonne/chatml_dpo_pairs language: - en library_name: transformers --- More information about previous [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) version available here: 🔗[Don't stop DPOptimizing!](https://www.linkedin.com/pulse/dont-stop-dpoptimizing-jan-koco%2525C5%252584-mq4qf) Author: Jan Kocoń &nbsp;&nbsp;&nbsp; 🔗[LinkedIn](https://www.linkedin.com/in/jankocon/) &nbsp;&nbsp;&nbsp; 🔗[Google Scholar](https://scholar.google.com/citations?user=pmQHb5IAAAAJ&hl=en&oi=ao) &nbsp;&nbsp;&nbsp; 🔗[ResearchGate](https://www.researchgate.net/profile/Jan-Kocon-2) Changes concerning [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2): 1. **Training Dataset**: In addition to the [Intel/orca_dpo_pairs](Intel/orca_dpo_pairs) dataset, this version incorporates a [mlabonne/chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs). The combined datasets enhance the model's capabilities in dialogues and interactive scenarios, further specializing it in natural language understanding and response generation. 2. **Tokenizer and Formatting**: The tokenizer now originates directly from the [Neuronovo/neuronovo-9B-v0.2](https://huggingface.co/Neuronovo/neuronovo-9B-v0.2) model. 3. **Training Configuration**: The training approach has shifted from using `max_steps=200` to `num_train_epochs=1`. This represents a change in the training strategy, focusing on epoch-based training rather than a fixed number of steps. 4. **Learning Rate**: The learning rate has been reduced to a smaller value of `5e-6`. This finer learning rate allows for more precise adjustments during the training process, potentially leading to better model performance.
flair/chunk-english
flair
"2023-04-05T10:38:02Z"
2,120
16
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "dataset:conll2000", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - flair - token-classification - sequence-tagger-model language: en datasets: - conll2000 widget: - text: "The happy man has been eating at the diner" --- ## English Chunking in Flair (default model) This is the standard phrase chunking model for English that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **96,48** (CoNLL-2000) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | ADJP | adjectival | | ADVP | adverbial | | CONJP | conjunction | | INTJ | interjection | | LST | list marker | | NP | noun phrase | | PP | prepositional | | PRT | particle | | SBAR | subordinate clause | | VP | verb phrase | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/chunk-english") # make example sentence sentence = Sentence("The happy man has been eating at the diner") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('np'): print(entity) ``` This yields the following output: ``` Span [1,2,3]: "The happy man" [− Labels: NP (0.9958)] Span [4,5,6]: "has been eating" [− Labels: VP (0.8759)] Span [7]: "at" [− Labels: PP (1.0)] Span [8,9]: "the diner" [− Labels: NP (0.9991)] ``` So, the spans "*The happy man*" and "*the diner*" are labeled as **noun phrases** (NP) and "*has been eating*" is labeled as a **verb phrase** (VP) in the sentence "*The happy man has been eating at the diner*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import CONLL_2000 from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = CONLL_2000() # 2. what tag do we want to predict? tag_type = 'np' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('news-forward'), # contextual string embeddings, backward FlairEmbeddings('news-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/chunk-english', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
AI-Sweden-Models/gpt-sw3-6.7b-v2
AI-Sweden-Models
"2024-01-29T13:21:10Z"
2,120
2
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "da", "sv", "no", "en", "is", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-28T12:25:01Z"
--- license: other language: - da - sv - 'no' - en - is --- # Model description [AI Sweden](https://huggingface.co/AI-Sweden-Models/) **Base models** [GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/) [GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/) [GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/) **Instruct models** [GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/) [GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/) **Quantized models** [GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq) GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. **V2** This version of the 6.7 Billion model is trained with the same tokenizer as the other model sizes, but on a different data distribution (Much more English and Code) and for longer. # Intended use GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. # Limitations Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content. # How to use To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information. The following code snippet loads our tokenizer & model, and uses the GPU if available. ```python import torch from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM # Initialize Variables model_name = "AI-Sweden-Models/gpt-sw3-6.7b-v2" device = "cuda:0" if torch.cuda.is_available() else "cpu" prompt = "Träd är fina för att" # Initialize Tokenizer & Model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.eval() model.to(device) ``` Generating text using the `generate` method is done as follows: ```python input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] generated_text = tokenizer.decode(generated_token_ids) ``` A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you: ```python generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device) generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"] ``` # Compliance The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material. # GPT-SW3 Model Card Following Mitchell et al. (2018), we provide a model card for GPT-SW3. # Model Details - Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. - Model date: GPT-SW3 date of release 2022-12-20 - Model version: This is the second generation of GPT-SW3. - Model type: GPT-SW3 is a large decoder-only transformer language model. - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation. - Paper or other resource for more information: N/A. - License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/blob/main/LICENSE). - Where to send questions or comments about the model: [email protected] # Intended Use - Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not. - Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. - Out-of-scope use cases: See the modified RAIL license. # Data, Limitations, and Recommendations - Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model. - Data selection for evaluation: N/A - Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs. - Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. - We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general. # GPT-SW3 Datasheet - We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3. # Motivation - For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages. - Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE. - Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949. - Any other comments? No. # Composition - What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources: - Books - Litteraturbanken (https://litteraturbanken.se/) - The Pile - Articles - Diva (https://www.diva-portal.org/) - The Pile: PubMed - The Pile: ArXiv - Code - Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code) - Conversational - Familjeliv (https://www.familjeliv.se/) - Flashback (https://flashback.se/) - Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI) - Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - Math - English Math dataset generated with code from DeepMind (D. Saxton et al., 2019) - Swedish Math dataset, generated as above with manually translated templates - Miscellaneous - Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf) - OPUS, the open parallel corpus (https://opus.nlpl.eu/) - Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database) - Natural Instructions (https://github.com/allenai/natural-instructions) - P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3) - The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC) - Danish Gigaword (https://gigaword.dk/) - Icelandic Gigaword (https://clarin.is/en/resources/gigaword/) - The Pile: Stack Exchange - Web Common Crawl - Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se). - Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019) - Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019) - The Pile: Open Web Text - Web Sources - Various public Swedish website scrapes (see Appendix in data paper) - Familjeliv Articles - Public Swedish Job Ads from JobTech/Arbetsförmedlingen - Wikipedia - Official Wikipedia dumps - How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens. - Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources. - What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data. - Is there a label or target associated with each instance? If so, please provide a description. No. - Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No. - Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances. - Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subset’s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the data’s value, source, and language, to form a representative and balanced pre-training corpus. - Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies. - Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained. - Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety. - Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc. - Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification. - Any other comments? No. # Collection Process - How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources. - What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet. - If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected. - Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines. - Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years. - Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset. - Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes. - Any other comments? No. - Preprocessing/cleaning/labeling - Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurençon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in “Deduplicating Training Data Makes Language Models Better” (K. Lee et al., 2021). - Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. The “raw” component datasets are publicly available in their respective locations. - Any other comments? No. # Uses - Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models. - Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A. - What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks. - Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population. - Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of. - Any other comments? No. # Distribution - Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No. - How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A. - When will the dataset be distributed? N/A. - Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A. - Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A. - Any other comments? No. # Maintenance - Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB. - How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected] - Is there an erratum? If so, please provide a link or other access point. N/A. - Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset. - If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu). - Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A. - If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time. - Any other comments? No. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-6.7b-v2) | Metric | Value | |-----------------------|---------------------------| | Avg. | 34.74 | | ARC (25-shot) | 39.42 | | HellaSwag (10-shot) | 66.39 | | MMLU (5-shot) | 30.09 | | TruthfulQA (0-shot) | 35.6 | | Winogrande (5-shot) | 64.25 | | GSM8K (5-shot) | 1.21 | | DROP (3-shot) | 6.22 |
Yntec/3Danimation
Yntec
"2023-09-29T13:32:47Z"
2,120
9
diffusers
[ "diffusers", "safetensors", "Anime", "Disney", "3D", "Lykon", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-09-29T12:47:37Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Disney - 3D - Lykon - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image language: - en inference: true --- # 3D Animation Diffusion Original model page: https://civitai.com/models/118086/3d-animation-diffusion Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/6YKhyaKSsE1Me8NjceOEc.png) Cartoon Pretty CUTE Girl, DETAILED CHIBI EYES, ilya kuvshinov detailed legs, gorgeous detailed hair, high school, Magazine ad, iconic, 1949, sharp focus. visible brushstrokes ​By KlaysMoji and artgerm and Clay Mann and and leyendecker and simon cowell. By Dave Rapoza. Pretty CUTE girl.
legraphista/AutoCoder_S_6.7B-IMat-GGUF
legraphista
"2024-05-28T14:52:09Z"
2,120
1
gguf
[ "gguf", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "text-generation", "base_model:Bin12345/AutoCoder_S_6.7B", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-28T13:59:49Z"
--- base_model: Bin12345/AutoCoder_S_6.7B inference: false library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - quantized - GGUF - imatrix - quantization - imat - imatrix - static --- # AutoCoder_S_6.7B-IMat-GGUF _Llama.cpp imatrix quantization of Bin12345/AutoCoder_S_6.7B_ Original Model: [Bin12345/AutoCoder_S_6.7B](https://huggingface.co/Bin12345/AutoCoder_S_6.7B) Original dtype: `BF16` (`bfloat16`) Quantized by: llama.cpp [b3010](https://github.com/ggerganov/llama.cpp/releases/tag/b3010) IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) - [AutoCoder_S_6.7B-IMat-GGUF](#autocoder-s-6-7b-imat-gguf) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Simple chat template](#simple-chat-template) - [Chat template with system prompt](#chat-template-with-system-prompt) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [AutoCoder_S_6.7B.Q8_0.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q8_0.gguf) | Q8_0 | 7.16GB | ✅ Available | ⚪ Static | 📦 No | [AutoCoder_S_6.7B.Q6_K.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q6_K.gguf) | Q6_K | 5.53GB | ✅ Available | ⚪ Static | 📦 No | [AutoCoder_S_6.7B.Q4_K.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q4_K.gguf) | Q4_K | 4.08GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.Q3_K.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q3_K.gguf) | Q3_K | 3.30GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.Q2_K.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q2_K.gguf) | Q2_K | 2.53GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [AutoCoder_S_6.7B.BF16.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.BF16.gguf) | BF16 | 13.48GB | ✅ Available | ⚪ Static | 📦 No | [AutoCoder_S_6.7B.FP16.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.FP16.gguf) | F16 | 13.48GB | ✅ Available | ⚪ Static | 📦 No | [AutoCoder_S_6.7B.Q5_K.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q5_K.gguf) | Q5_K | 4.79GB | ✅ Available | ⚪ Static | 📦 No | [AutoCoder_S_6.7B.Q5_K_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | ✅ Available | ⚪ Static | 📦 No | [AutoCoder_S_6.7B.Q4_K_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.Q3_K_L.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q3_K_L.gguf) | Q3_K_L | 3.60GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.Q3_K_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.Q2_K_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.Q2_K_S.gguf) | Q2_K_S | 2.32GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ4_NL.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ4_NL.gguf) | IQ4_NL | 3.83GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ4_XS.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ4_XS.gguf) | IQ4_XS | 3.62GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ3_M.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ3_M.gguf) | IQ3_M | 3.12GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ3_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ3_S.gguf) | IQ3_S | 2.95GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ3_XS.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ3_XS.gguf) | IQ3_XS | 2.80GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ3_XXS.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ3_XXS.gguf) | IQ3_XXS | 2.59GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ2_M.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ2_M.gguf) | IQ2_M | 2.36GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ2_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ2_S.gguf) | IQ2_S | 2.20GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ2_XS.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ2_XS.gguf) | IQ2_XS | 2.04GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ2_XXS.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ2_XXS.gguf) | IQ2_XXS | 1.86GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ1_M.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ1_M.gguf) | IQ1_M | 1.65GB | ✅ Available | 🟢 IMatrix | 📦 No | [AutoCoder_S_6.7B.IQ1_S.gguf](https://huggingface.co/legraphista/AutoCoder_S_6.7B-IMat-GGUF/blob/main/AutoCoder_S_6.7B.IQ1_S.gguf) | IQ1_S | 1.53GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/AutoCoder_S_6.7B-IMat-GGUF --include "AutoCoder_S_6.7B.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/AutoCoder_S_6.7B-IMat-GGUF --include "AutoCoder_S_6.7B.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Simple chat template ``` Human: Can you provide ways to eat combinations of bananas and dragonfruits? Assistant: Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|end▁of▁sentence|> Human: What about solving an 2x + 3 = 7 equation? Assistant: ``` ### Chat template with system prompt ``` You are a helpful AI. Human: Can you provide ways to eat combinations of bananas and dragonfruits? Assistant: Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|end▁of▁sentence|> Human: What about solving an 2x + 3 = 7 equation? Assistant: ``` ### Llama.cpp ``` llama.cpp/main -m AutoCoder_S_6.7B.Q8_0.gguf --color -i -p "prompt here (according to the chat template)" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `AutoCoder_S_6.7B.Q8_0`) 3. Run `gguf-split --merge AutoCoder_S_6.7B.Q8_0/AutoCoder_S_6.7B.Q8_0-00001-of-XXXXX.gguf AutoCoder_S_6.7B.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
TheBloke/CodeLlama-13B-Python-GGUF
TheBloke
"2023-09-27T12:46:06Z"
2,119
32
transformers
[ "transformers", "gguf", "llama", "llama-2", "text-generation", "code", "arxiv:2308.12950", "base_model:codellama/CodeLlama-13b-python-hf", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-08-24T20:29:31Z"
--- language: - code license: llama2 tags: - llama-2 model_name: CodeLlama 13B Python base_model: codellama/CodeLlama-13b-python-hf inference: false model_creator: Meta model_type: llama pipeline_tag: text-generation prompt_template: '[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 13B Python - GGUF - Model creator: [Meta](https://huggingface.co/meta-llama) - Original model: [CodeLlama 13B Python](https://huggingface.co/codellama/CodeLlama-13b-python-hf) <!-- description start --> ## Description This repo contains GGUF format model files for [Meta's CodeLlama 13B Python](https://huggingface.co/codellama/CodeLlama-13b-python-hf). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-13B-Python-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF) * [Meta's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/codellama/CodeLlama-13b-python-hf) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: CodeLlama ``` [INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```: {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codellama-13b-python.Q2_K.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [codellama-13b-python.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [codellama-13b-python.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [codellama-13b-python.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [codellama-13b-python.Q4_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codellama-13b-python.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [codellama-13b-python.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [codellama-13b-python.Q5_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codellama-13b-python.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [codellama-13b-python.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [codellama-13b-python.Q6_K.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [codellama-13b-python.Q8_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGUF/blob/main/codellama-13b-python.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/CodeLlama-13B-Python-GGUF and below it, a specific filename to download, such as: codellama-13b-python.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeLlama-13B-Python-GGUF codellama-13b-python.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m codellama-13b-python.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] Write code to solve the following coding problem that obeys the constraints and passes the example test cases. Please wrap your code answer using ```:\n{prompt}\n[/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/CodeLlama-13B-Python-GGUF", model_file="codellama-13b-python.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Meta's CodeLlama 13B Python # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 13B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [ ] Infilling. - [ ] Instructions / chat. - [x] Python specialist. ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the Python version of the 13B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants is intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide). <!-- original-model-card end -->
abhinand/Llama-3-Galen-8B-32k-v1
abhinand
"2024-05-10T06:24:15Z"
2,118
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "en", "base_model:aaditya/Llama3-OpenBioLLM-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-04T17:53:41Z"
--- tags: - merge - mergekit - lazymergekit - aaditya/Llama3-OpenBioLLM-8B base_model: - aaditya/Llama3-OpenBioLLM-8B license: llama3 language: - en --- # Llama-3-Galen-8B-32k-v1 <img src="https://cdn-uploads.huggingface.co/production/uploads/60c8619d95d852a24572b025/R73wGdZE3GWeF9QZPvruG.jpeg" width="600" /> Llama-3-Galen-8B-32k-v1 is a RoPE scaled, DARE TIES merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B) * [johnsnowlabs/JSL-MedLlama-3-8B-v2.0](https://huggingface.co/johnsnowlabs/JSL-MedLlama-3-8B-v2.0) > **This model is capable of handling a context size of 32K right out of the box, enabled with Dynamic RoPE scaling.** ## 🧩 Configuration ```yaml models: - model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0 # No parameters necessary for base model - model: aaditya/Llama3-OpenBioLLM-8B parameters: density: 0.53 weight: 0.5 merge_method: dare_ties base_model: johnsnowlabs/JSL-MedLlama-3-8B-v2.0 parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "abhinand/Llama-3-Galen-8B-32k-v1" messages = [{"role": "user", "content": "How long does it take to recover from COVID-19?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
pkshatech/simcse-ja-bert-base-clcmlp
pkshatech
"2023-01-27T06:44:23Z"
2,117
15
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "transformers", "sentence-similarity", "ja", "arxiv:2104.08821", "license:cc-by-sa-4.0", "autotrain_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-12-26T02:52:03Z"
--- pipeline_tag: sentence-similarity language: ja license: cc-by-sa-4.0 tags: - transformers - sentence-similarity - feature-extraction - sentence-transformers inference: false widget: - source_sentence: "This widget can't work correctly now." sentences: - "Sorry :(" - "Try this model in your local environment!" example_title: "notification" --- # Japanese SimCSE (BERT-base) [日本語のREADME/Japanese README](https://huggingface.co/pkshatech/simcse-ja-bert-base-clcmlp/blob/main/README_JA.md) ## summary model name: `pkshatech/simcse-ja-bert-base-clcmlp` This is a Japanese [SimCSE](https://arxiv.org/abs/2104.08821) model. You can easily extract sentence embedding representations from Japanese sentences. This model is based on [`cl-tohoku/bert-base-japanese-v2`](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) and trained on [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) dataset, which is a Japanese natural language inference dataset. ## Usage (Sentence-Transformers) You can use this model easily with [sentence-transformers](https://www.SBERT.net). You need [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite/) for tokenization. Please install sentence-transformers, fugashi, and unidic-lite with pip as follows: ``` pip install -U fugashi[unidic-lite] sentence-transformers ``` You can load the model and convert sentences to dense vectors as follows: ```python from sentence_transformers import SentenceTransformer sentences = [ "PKSHA Technologyは機械学習/深層学習技術に関わるアルゴリズムソリューションを展開している。", "この深層学習モデルはPKSHA Technologyによって学習され、公開された。", "広目天は、仏教における四天王の一尊であり、サンスクリット語の「種々の眼をした者」を名前の由来とする。", ] model = SentenceTransformer('pkshatech/simcse-ja-bert-base-clcmlp') embeddings = model.encode(sentences) print(embeddings) ``` Since the loss function used during training is cosine similarity, we recommend using cosine similarity for downstream tasks. ## Model Detail ### Tokenization We use the same tokenizer as `tohoku/bert-base-japanese-v2`. Please see the [README of `tohoku/bert-base-japanese-v2`](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) for details. ### Training We set `tohoku/bert-base-japanese-v2` as the initial value and trained it on the train set of [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88). We trained 20 epochs and published the checkpoint of the model with the highest Spearman's correlation coefficient on the validation set [^1] of the train set of [JSTS](https://github.com/yahoojapan/JGLUE) ### Training Parameters | Parameter | Value | | --- | --- | |pooling_strategy | [CLS] -> single fully-connected layer | | max_seq_length | 128 | | with hard negative | true | | temperature of contrastive loss | 0.05 | | Batch size | 200 | | Learning rate | 1e-5 | | Weight decay | 0.01 | | Max gradient norm | 1.0 | | Warmup steps | 2012 | | Scheduler | WarmupLinear | | Epochs | 20 | | Evaluation steps | 250 | # Licenses This models are distributed under the terms of the Creative [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/). [^1]: When we trained this model, the test data of JGLUE was not released, so we used the dev set of JGLUE as a private evaluation data. Therefore, we selected the checkpoint on the train set of JGLUE insted of its dev set.
shadowml/Mixolar-4x7b
shadowml
"2024-04-01T16:01:05Z"
2,117
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "mergekit", "conversational", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-30T15:00:38Z"
--- license: apache-2.0 tags: - moe - merge - mergekit model-index: - name: Mixolar-4x7b results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 71.08 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Mixolar-4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.44 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Mixolar-4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Mixolar-4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 71.81 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Mixolar-4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Mixolar-4x7b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Mixolar-4x7b name: Open LLM Leaderboard --- # Mixolar-4x7b This model is a Mixure of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models: * [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) * [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1) * [rishiraj/meow](https://huggingface.co/rishiraj/meow) * [kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2](https://huggingface.co/kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2) ## 🧩 Configuration ```yaml base_model: kyujinpy/Sakura-SOLAR-Instruct gate_mode: hidden experts: - source_model: kyujinpy/Sakura-SOLAR-Instruct positive_prompts: - "chat" - "assistant" - "tell me" - "explain" negative_prompts: - "mathematics" - "reasoning" - source_model: jeonsworld/CarbonVillain-en-10.7B-v1 positive_prompts: - "write" - "AI" - "text" - "paragraph" negative_prompts: - "mathematics" - "reasoning" - source_model: rishiraj/meow positive_prompts: - "chat" - "say" - "what" negative_prompts: - "mathematics" - "reasoning" - source_model: kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2 positive_prompts: - "reason" - "math" - "mathematics" - "solve" - "count" negative_prompts: - "chat" - "assistant" - "storywriting" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Mixolar-4x7b" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__Mixolar-4x7b) | Metric |Value| |---------------------------------|----:| |Avg. |74.18| |AI2 Reasoning Challenge (25-Shot)|71.08| |HellaSwag (10-Shot) |88.44| |MMLU (5-Shot) |66.29| |TruthfulQA (0-shot) |71.81| |Winogrande (5-shot) |83.58| |GSM8k (5-shot) |63.91|
sail/Sailor-7B-Chat
sail
"2024-04-05T05:40:28Z"
2,117
5
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "multilingual", "sea", "sailor", "sft", "chat", "instruction", "conversational", "en", "zh", "id", "th", "vi", "ms", "lo", "dataset:CohereForAI/aya_dataset", "dataset:CohereForAI/aya_collection", "dataset:Open-Orca/OpenOrca", "arxiv:2404.03608", "base_model:sail/Sailor-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-02T09:48:59Z"
--- language: - en - zh - id - th - vi - ms - lo datasets: - CohereForAI/aya_dataset - CohereForAI/aya_collection - Open-Orca/OpenOrca tags: - multilingual - sea - sailor - sft - chat - instruction widget: - text: "如何制作烤鱼?" example_title: "Chinese" - text: "How to bake fish?" example_title: "English" - text: "Bagaimana cara memanggang ikan?" example_title: "Malay" - text: "วิธีย่างปลา?" example_title: "Thai" - text: "Bagaimana membuat bakaran ikan?" example_title: "Indonesian" - text: "Làm thế nào để nướng cá?" example_title: "Vietnamese" license: apache-2.0 base_model: sail/Sailor-7B --- <div align="center"> <img src="banner_sailor.jpg" width="700"/> </div> Sailor is a suite of Open Language Models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao. Developed with careful data curation, Sailor models are designed to understand and generate text across diverse linguistic landscapes of SEA region. Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524) , Sailor encompasses models of varying sizes, spanning from 0.5B to 7B versions for different requirements. We further fine-tune the base model with open-source datasets to get instruction-tuned models, namedly Sailor-Chat. Benchmarking results demonstrate Sailor's proficiency in tasks such as question answering, commonsense reasoning, and other tasks in SEA languages. > The logo was generated by MidJourney ## Model Summary - **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825) - **Project Website:** [sailorllm.github.io](https://sailorllm.github.io/) - **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm) - **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf) ## Training details Sailor is crafted by continually pre-training from language models like the remarkable Qwen 1.5 models, which already has a great performance on SEA languages. The pre-training corpus heavily leverages the publicly available corpus, including [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B), [SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B), [CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400). The instruction tuning corpus are all publicly available including [aya_collection](https://huggingface.co/datasets/CohereForAI/aya_collection), [aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset), [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca). By employing aggressive data deduplication and careful data cleaning on the collected corpus, we have attained a high-quality dataset spanning various languages. Through systematic experiments to determine the weights of different languages, Sailor models undergo training from 200B to 400B tokens, tailored to different model sizes. The approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise. Finally, we continually pre-train the Qwen1.5-0.5B model with 400 Billion tokens, and other models with 200 Billion tokens to obtain the Sailor models. ## Requirements The code of Sailor has been in the latest Hugging face transformers and we advise you to install `transformers>=4.37.0`. ## Quickstart Here provides a code snippet to show you how to load the tokenizer and model and how to generate contents. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" model = AutoModelForCausalLM.from_pretrained( 'sail/Sailor-7B-Chat', torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained('sail/Sailor-7B-Chat') system_prompt= 'You are a helpful assistant' prompt = "Beri saya pengenalan singkat tentang model bahasa besar." # prompt = "Hãy cho tôi một giới thiệu ngắn gọn về mô hình ngôn ngữ lớn." # prompt = "ให้ฉันแนะนำสั้น ๆ เกี่ยวกับโมเดลภาษาขนาดใหญ่" messages = [ {"role": "system", "content": system_prompt}, {"role": "question", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(device) input_ids = model_inputs.input_ids.to(device) generated_ids = model.generate( input_ids, max_new_tokens=512, ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` # License Sailor is distributed under the terms of the Apache License 2.0. No restrict on the research and the commercial use, but should comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE). ## Citation If you find sailor useful, please cite our work as follows: ``` @misc{dou2024sailor, title={Sailor: Open Language Models for South-East Asia}, author={Longxu Dou and Qian Liu and Guangtao Zeng and Jia Guo and Jiahui Zhou and Wei Lu and Min Lin}, year={2024}, eprint={2404.03608}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # Contact Us If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]) or [[email protected]](mailto:[email protected]).
Yntec/DreamlikeDiffusion
Yntec
"2024-06-09T05:52:41Z"
2,117
0
diffusers
[ "diffusers", "safetensors", "art", "artistic", "DreamlikeArt", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:other", "autotrain_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-04-14T01:47:17Z"
--- language: - en license: other tags: - art - artistic - DreamlikeArt - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: false --- # Dreamlike Diffusion 768x768 version of this model with the 840KVAE baked in for improved details, specially on the eyes. Original page: https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0 Comparison: ![Free online AI image generator Dreamlike Diffusion comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/4GA4x4XQl3O6i_bw36zGO.png) (Click for larger) Samples and prompts: ![Free online AI text to image generator samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/lzH1Z5zrQwm8Mz13N8jrR.png) (Click for larger) Top left: cute girl and her father sitting in forest. A daughter with a wizard. Detailed faces in the style of Mark Brooks. Top right: Young Elvis with his daughter. Movie still. Pretty CUTE LITTLE Girl with sister playing with miniature toy city, bokeh. DETAILED vintage colors photography brown EYES, sitting on a box of pepsis, gorgeous detailed Ponytail, cocacola can Magazine ad, iconic, 1935, sharp focus. Illustration By KlaysMoji and leyendecker and artgerm and Dave Rapoza Bottom left: An alien planet psychedelic marijuana forest with bioluminescent lighting dark art highly detailed neon dark art deviant super detailed 4k render by gustave dore Bottom right: Retro colors Portrait of a adorable young girl. fashion in the style of Milton Caniff
MaziyarPanahi/EvoluzioneRed-GGUF
MaziyarPanahi
"2024-06-15T09:09:50Z"
2,116
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "base_model:grimjim/rogue-enchantress-32k-7B", "base_model:mergekit-community/TopEvolutionWiz", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/EvoluzioneRed" ]
text-generation
"2024-06-15T08:48:09Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - base_model:grimjim/rogue-enchantress-32k-7B - base_model:mergekit-community/TopEvolutionWiz - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: EvoluzioneRed-GGUF base_model: mergekit-community/EvoluzioneRed inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/EvoluzioneRed-GGUF](https://huggingface.co/MaziyarPanahi/EvoluzioneRed-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/EvoluzioneRed](https://huggingface.co/mergekit-community/EvoluzioneRed) ## Description [MaziyarPanahi/EvoluzioneRed-GGUF](https://huggingface.co/MaziyarPanahi/EvoluzioneRed-GGUF) contains GGUF format model files for [mergekit-community/EvoluzioneRed](https://huggingface.co/mergekit-community/EvoluzioneRed). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
briaai/BRIA-2.3
briaai
"2024-06-06T06:16:06Z"
2,114
19
diffusers
[ "diffusers", "safetensors", "text-to-image", "legal liability", "commercial use", "license:other", "autotrain_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-02-18T09:22:05Z"
--- license: other license_name: bria-t2i license_link: https://bria.ai/customer-general-terms-and-conditions library_name: diffusers inference: False tags: - text-to-image - legal liability - commercial use extra_gated_description: Model weights from BRIA AI can be obtained with the purchase of a commercial license. Fill in the form below and we reach out to you. extra_gated_heading: "Fill in this form to request a commercial license for the model" extra_gated_fields: Name: text Company/Org name: text Org Type (Early/Growth Startup, Enterprise, Academy): text Role: text Country: text Email: text By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below: checkbox --- # BRIA 2.3: Text-to-Image Model for Commercial Licensing Bria AI 2.3 is our groundbreaking text-to-image model explicitly designed for commercial applications. This model combines technological innovation with ethical responsibility and legal security, setting a new standard in the AI industry. Bria AI licenses the foundation model with full legal liability coverage. Our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content. For more information, please visit our [website](https://bria.ai/). # What's New Bria AI 2.3 is a premium model which improves the generation of illustration, art, and human faces compared to [BRIA 2.2](https://huggingface.co/briaai/BRIA-2.2). [CLICK HERE FOR A DEMO](https://huggingface.co/spaces/briaai/BRIA-2.3) Check out our entire family of models, including our [Fast model](https://huggingface.co/briaai/BRIA-2.2-FAST) and our [HD model](https://huggingface.co/briaai/BRIA-2.2-HD) ### Get Access Interested in BRIA 2.3? Purchase is required to license and access BRIA 2.3, ensuring royalty management with our data partners and full liability coverage for commercial use. Are you a startup or a student? We encourage you to apply for our [Startup Program](https://pages.bria.ai/the-visual-generative-ai-platform-for-builders-startups-plan?_gl=1*cqrl81*_ga*MTIxMDI2NzI5OC4xNjk5NTQ3MDAz*_ga_WRN60H46X4*MTcwOTM5OTMzNC4yNzguMC4xNzA5Mzk5MzM0LjYwLjAuMA..) to request access. This program are designed to support emerging businesses and academic pursuits with our cutting-edge technology. Contact us today to unlock the potential of BRIA 2.3! By submitting the form above, you agree to BRIA’s [Privacy policy](https://bria.ai/privacy-policy/) and [Terms & conditions](https://bria.ai/terms-and-conditions/). ![](photo-grid-2-3.jpeg) # Key Features - **Legally Compliant**: Offers full legal liability coverage for copyright and privacy infringements. Thanks to training on 100% licensed data from leading data partners, we ensure the ethical use of content. - **Patented Attribution Engine**: Our attribution engine is our way to compensate our data partners, powered by our proprietary and patented algorithms. - **Enterprise-Ready**: Specifically designed for business applications, Bria AI 2.3 delivers high-quality, compliant imagery for a variety of commercial needs. - **Customizable Technology**: Provides access to source code and weights for extensive customization, catering to specific business requirements. ### Model Description - **Developed by:** BRIA AI - **Model type:** Latent diffusion text-to-image model - **License:** [Commercial licensing terms & conditions.](https://bria.ai/customer-general-terms-and-conditions) - Purchase is required to license and access the model. - **Model Description:** BRIA 2.3 is a text-to-image model trained exclusively on a professional-grade, licensed dataset. It is designed for commercial use and includes full legal liability coverage. - **Resources for more information:** [BRIA AI](https://bria.ai/) ### Code example using Diffusers ``` pip install diffusers ``` ```py from diffusers import DiffusionPipeline import torch pipe = DiffusionPipeline.from_pretrained("briaai/BRIA-2.3-BETA", torch_dtype=torch.float16, use_safetensors=True) pipe.to("cuda") prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background" negative_prompt = "Logo,Watermark,Text,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers" images = pipe(prompt=prompt, negative_prompt=negative_prompt, height=1024, width=1024).images[0] ``` ### Some tips for using our text-to-image model at inference: 1. You must set `pipe.force_zeros_for_empty_prompt = False` 2. Using negative prompt is recommended. 3. We support multiple aspect ratios, yet resolution should overall consists approximately `1024*1024=1M` pixels, for example: `(1024,1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960,1088)` 4. If you need speed - try the BRIA 2.3 Fast which achieve comparable quality with 75% reduced inference time 5. For 2.3 (not Fast) use 30-50 steps (higher is better), the Fast model works well with just 8 steps 6. For 2.3 (not Fast) use `guidance_scale` of 5.0 or 7.5, for the Fast models use 1.0
MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF
MaziyarPanahi
"2024-04-20T07:42:09Z"
2,114
4
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "text-generation", "mixtral", "base_model:MaziyarPanahi/Llama-3-13B-Instruct-v0.1", "text-generation-inference", "region:us" ]
text-generation
"2024-04-19T11:39:49Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - text-generation - mixtral - text-generation model_name: Llama-3-13B-Instruct-v0.1-GGUF base_model: MaziyarPanahi/Llama-3-13B-Instruct-v0.1 inference: false model_creator: MaziyarPanahi pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF) - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi) - Original model: [MaziyarPanahi/Llama-3-13B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1) ## Description [MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-13B-Instruct-v0.1](https://huggingface.co/MaziyarPanahi/Llama-3-13B-Instruct-v0.1). ## Load GGUF models You `MUST` follow the prompt template provided by Llama-3: ```sh ./llama.cpp/main -m Llama-3-13B-Instruct.Q2_K.gguf -r '<|eot_id|>' --in-prefix "\n<|start_header_id|>user<|end_header_id|>\n\n" --in-suffix "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.<|eot_id|>\n<|start_header_id|>user<|end_header_id|>\n\nHi! How are you?<|eot_id|>\n<|start_header_id|>assistant<|end_header_id|>\n\n" -n 1024 ```
appvoid/palmer-002
appvoid
"2024-05-07T22:56:34Z"
2,112
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:appvoid/no-prompt-15k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-12T16:54:03Z"
--- license: apache-2.0 language: - en pipeline_tag: text-generation datasets: - appvoid/no-prompt-15k --- ![palmer](https://huggingface.co/appvoid/palmer-001/resolve/main/new-logo.jpg) # palmer ### a better base model palmer is a series of ~1b parameters language models fine-tuned to be used as base models instead of using custom prompts for tasks. This means that it can be further fine-tuned on more data with custom prompts as usual or be used for downstream tasks as any base model you can get. The model has the best of both worlds: some "bias" to act as an assistant, but also the abillity to predict the next-word from its internet knowledge base. It's a 1.1b llama 2 model so you can use it with your favorite tools/frameworks. ### evaluation 🧪 note that this is a zero-shot setting as opposite to open llm leaderboard's few-shot evals ``` Model ARC_C HellaSwag PIQA Winogrande Average tinyllama-2 | 0.2807 | 0.5463 | 0.7067 | 0.5683 | 0.5255 | palmer-001 | 0.2807 | 0.5524 | 0.7106 | 0.5896 | 0.5333 | babbage-001 | 0.2944 | 0.5448 | 0.7410 | 0.5935 | 0.5434 | deacon-1b | 0.2944 | 0.5727 | 0.7040 | 0.5801 | 0.5434 | tinyllama-2.5 | 0.3191 | 0.5896 | 0.7307 | 0.5872 | 0.5566 | palmer-002 | 0.3242 | 0.5956 | 0.7345 | 0.5888 | 0.5607 | babbage-002 | 0.3285 | 0.6380 | 0.7606 | 0.6085 | 0.5839 | ``` This model shows exceptional performance and as of now is the best tinyllama-size base model. Furthermore, this proves LIMA paper point and serves as a good open-source alternative to openai's `babbage-002`. ### training 🦾 Training took ~3.5 P100 gpu hours. It was trained on 15,000 gpt-4 shuffled samples. palmer was fine-tuned using lower learning rates ensuring it keeps as much general knowledge as possible. ### prompt 📝 ``` no prompt 🚀 ``` Choose this if you prefer a base model without too much fine-tuning. <a href="https://ko-fi.com/appvoid" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 48px !important;width: 180px !important; filter: invert(70%);" ></a>
mlabonne/NeuralLlama-3-8B-Instruct-abliterated
mlabonne
"2024-05-27T15:01:24Z"
2,112
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "abliterated", "conversational", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-26T21:50:02Z"
--- license: other datasets: - mlabonne/orpo-dpo-mix-40k tags: - abliterated --- # Llama-3-8B-Instruct-abliterated-dpomix This model is an experimental DPO fine-tune of an abliterated Llama 3 8B Instruct model on the full [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset. It improves Llama 3 8B Instruct's performance while being uncensored. ## 🔎 Applications This is an uncensored model. You can use it for any application that doesn't require alignment, like role-playing. Tested on LM Studio using the "Llama 3" preset. ## ⚡ Quantization * **GGUF**: https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix-GGUF ## 🏆 Evaluation ### Open LLM Leaderboard This model improves the performance of the abliterated source model and recovers the MMLU that was lost in the abliteration process. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/sCO69BltMkGrq6u7yCIcP.png) ### Nous | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**mlabonne/Llama-3-8B-Instruct-abliterated-dpomix**](https://huggingface.co/mlabonne/Llama-3-8B-Instruct-abliterated-dpomix) [📄](https://gist.github.com/mlabonne/d711548df70e2c04771cc68ab33fe2b9) | **52.26** | **41.6** | **69.95** | **54.22** | **43.26** | | [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) [📄](https://gist.github.com/mlabonne/8329284d86035e6019edb11eb0933628) | 51.34 | 41.22 | 69.86 | 51.65 | 42.64 | | [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3) [📄](https://gist.github.com/mlabonne/f46cce0262443365e4cce2b6fa7507fc) | 51.21 | 40.23 | 69.5 | 52.44 | 42.69 | | [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B) [📄](https://gist.github.com/mlabonne/91369d9c372f80b6a42a978b454d3b5e) | 49.65 | 37.15 | 69.12 | 51.66 | 40.67 | | [mlabonne/OrpoLlama-3-8B](https://huggingface.co/mlabonne/OrpoLlama-3-8B) [📄](https://gist.github.com/mlabonne/22896a1ae164859931cc8f4858c97f6f) | 48.63 | 34.17 | 70.59 | 52.39 | 37.36 | | [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) [📄](https://gist.github.com/mlabonne/616b6245137a9cfc4ea80e4c6e55d847) | 45.42 | 31.1 | 69.95 | 43.91 | 36.7 | ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mlabonne/Llama-3-8B-Instruct-abliterated-dpomix" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
shadowjourney/shadowjourney
shadowjourney
"2024-06-28T06:03:44Z"
2,112
0
diffusers
[ "diffusers", "safetensors", "imagetotext", "AI Image", "Diffusion", "text-to-image", "en", "license:mit", "region:us" ]
text-to-image
"2024-06-27T05:21:33Z"
--- license: mit language: - en library_name: diffusers pipeline_tag: text-to-image tags: - imagetotext - AI Image - Diffusion --- * Recommended Size: 1024x1024 * Recommended Steps: 60 # Shared by * Ichate # Trained by * Meow
x2bee/POLAR-14B-v0.2
x2bee
"2024-05-03T06:39:24Z"
2,111
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-25T08:17:51Z"
--- license: apache-2.0 language: - ko library_name: transformers --- # Model Details ![logo](./Plateer_image.png) ## Model Description <!-- Provide a longer summary of what this model is/does. --> POLAR is a Korean LLM developed by Plateer's AI-lab. It was inspired by Upstage's SOLAR. We will continue to evolve this model and hope to contribute to the Korean LLM ecosystem. - **Developed by:** AI-Lab of Plateer(Woomun Jung, Eunsoo Ha, MinYoung Joo, Seongjun Son) - **Model type:** Language model - **Language(s) (NLP):** ko - **License:** apache-2.0 - Parent Model: upstage/SOLAR-10.7B-v1.0 # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("x2bee/POLAR-14B-v0.2") model = AutoModelForCausalLM.from_pretrained("x2bee/POLAR-14B-v0.2") ``` ## Downstream Use [Optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." --> # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> More information on training data needed ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing More information needed ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> More information needed # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> More information needed ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> More information needed ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> More information needed ## Results More information needed # Model Examination More information needed # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** More information needed - **Hours used:** More information needed - **Cloud Provider:** More information needed - **Compute Region:** More information needed - **Carbon Emitted:** More information needed # Technical Specifications [optional] ## Model Architecture and Objective More information needed ## Compute Infrastructure More information needed ### Hardware More information needed ### Software More information needed # Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** More information needed **APA:** More information needed # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> More information needed # More Information [optional] If you would like more information about our company, please visit the link below. [tech.x2bee.com](https://tech.x2bee.com/) # Model Card Authors [optional] <!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. --> Woomun Jung, MinYoung Joo, Eunsu Ha, Seungjun Son # Model Card Contact More information needed # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> More information needed </details>
MaziyarPanahi/mergekit-slerp-ueqsixf-GGUF
MaziyarPanahi
"2024-06-18T13:49:54Z"
2,111
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:Equall/Saul-Base", "base_model:HuggingFaceH4/zephyr-7b-beta", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-ueqsixf" ]
text-generation
"2024-06-18T13:27:13Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:Equall/Saul-Base - base_model:HuggingFaceH4/zephyr-7b-beta - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-ueqsixf-GGUF base_model: mergekit-community/mergekit-slerp-ueqsixf inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-ueqsixf-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ueqsixf-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-ueqsixf](https://huggingface.co/mergekit-community/mergekit-slerp-ueqsixf) ## Description [MaziyarPanahi/mergekit-slerp-ueqsixf-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-ueqsixf-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-ueqsixf](https://huggingface.co/mergekit-community/mergekit-slerp-ueqsixf). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
OpenBuddy/openbuddy-mistral-7b-v17.1-32k
OpenBuddy
"2024-01-28T17:21:44Z"
2,110
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-28T16:59:47Z"
--- language: - zh - en - fr - de - ja - ko - it - ru pipeline_tag: text-generation inference: false library_name: transformers license: apache-2.0 --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 License: Apache 2.0 ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF
mradermacher
"2024-06-11T16:21:56Z"
2,110
0
transformers
[ "transformers", "gguf", "en", "base_model:KoboldAI/LLAMA2-13B-Holodeck-1", "license:other", "endpoints_compatible", "region:us" ]
null
"2024-06-11T11:07:18Z"
--- base_model: KoboldAI/LLAMA2-13B-Holodeck-1 language: en library_name: transformers license: other quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/KoboldAI/LLAMA2-13B-Holodeck-1 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ2_M.gguf) | i1-IQ2_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q2_K.gguf) | i1-Q2_K | 5.0 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ3_S.gguf) | i1-IQ3_S | 5.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ3_M.gguf) | i1-IQ3_M | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q4_0.gguf) | i1-Q4_0 | 7.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.5 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 8.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA2-13B-Holodeck-1-i1-GGUF/resolve/main/LLAMA2-13B-Holodeck-1.i1-Q6_K.gguf) | i1-Q6_K | 10.8 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his hardware for calculating the imatrix for these quants. <!-- end -->
allenai/longformer-large-4096-finetuned-triviaqa
allenai
"2022-10-03T22:04:43Z"
2,109
6
transformers
[ "transformers", "pytorch", "tf", "longformer", "question-answering", "en", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:05Z"
--- language: en ---
somq/fantassified_icons_v2
somq
"2023-12-12T12:31:51Z"
2,109
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "finetune", "icons", "art", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-23T18:32:54Z"
--- license: other tags: - text-to-image - stable-diffusion - finetune - icons - art language: - en --- ## new and shiny 。・:*:・゚’☆ [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/animatedicons.gif">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/animatedicons.gif) # about - updated version of [v1](https://huggingface.co/proxima/fantassified_icons), made with a dataset consisting of mostly the old version's dataset, but it's a lot better because I learned a few things since the dreambooth days - generates icons inspired by fantasy games with mostly plain backgrounds - no trigger words - either my local hires fix isn't working well or potions look weird when hires is turned on, will have to test that another time (probably needs low denoising strength) - i don't recommend this for people and faces as these were of 0% concern while training, the focus was on items, but you do you - examples are [made with this VAE](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.ckpt) at 20 steps, 512x512, CFG 7, Euler a (try DPM ++2M for a look that is a bit sharper) --- ## examples [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/examples/1-3.png">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/examples/1-3.png) [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/examples/4-6.png">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/examples/4-6.png) [<img src="https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/resolve/main/examples/7-9.png">](https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/examples/7-9.png) --- if you enjoy this consider buying me a coffee (ノ◕ヮ◕)ノ*:・゚✧ <a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a> ---- ## Use with diffusers How to use it with [diffusers](https://github.com/huggingface/diffusers) ```python import torch from diffusers import StableDiffusionPipeline, DDIMScheduler scheduler = DDIMScheduler.from_pretrained("proximasanfinetuning/fantassified_icons_v2", subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained("proximasanfinetuning/fantassified_icons_v2", scheduler=scheduler).to("cuda") prompt = "A lemon themed high quality hamburger" images = pipe(prompt, num_images_per_prompt=6, num_inference_steps=25).images images[0] ``` --- # license This model is licensed under a modified CreativeML OpenRAIL-M license. * Utilizing and hosting the Fantassified Icons 1.0 model and its derivatives on platforms that earn, will earn, or plan to earn revenue or donations requires prior authorization. **To request permission, please email [email protected].** * You are permitted to host the model card and files on both commercial and non-commercial websites, apps, etc. as long as you properly credit the model by stating its full name and providing a link to the model card (https://huggingface.co/proximasanfinetuning/fantassified_icons_v2), without performing any actual inference or finetuning. * The Fantassified Icons 1.0 model and its derivatives can be hosted on non-commercial websites, apps, etc. as long as no revenue or donations are received. Proper credit must be given by stating the full model name and including a link to the model card (https://huggingface.co/proximasanfinetuning/fantassified_icons_v2). * **The outputs of the model or its derivatives can be used for commercial purposes as long as the usage is limited to teams of 10 or fewer individuals.** * You can't use the model to deliberately produce nor share illegal or harmful outputs or content * The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license * You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/proximasanfinetuning/fantassified_icons_v2/blob/main/license.txt
microsoft/swin-base-patch4-window12-384-in22k
microsoft
"2022-05-16T18:01:06Z"
2,108
1
transformers
[ "transformers", "pytorch", "tf", "swin", "image-classification", "vision", "dataset:imagenet-21k", "arxiv:2103.14030", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-21k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Swin Transformer (large-sized model) Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 384x384. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png) [Source](https://paperswithcode.com/method/swin-transformer) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, SwinForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/swin-base-patch4-window12-384-in22k") model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window12-384-in22k") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#). ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-14030, author = {Ze Liu and Yutong Lin and Yue Cao and Han Hu and Yixuan Wei and Zheng Zhang and Stephen Lin and Baining Guo}, title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows}, journal = {CoRR}, volume = {abs/2103.14030}, year = {2021}, url = {https://arxiv.org/abs/2103.14030}, eprinttype = {arXiv}, eprint = {2103.14030}, timestamp = {Thu, 08 Apr 2021 07:53:26 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
TweebankNLP/bertweet-tb2_ewt-pos-tagging
TweebankNLP
"2022-05-05T00:23:51Z"
2,108
7
transformers
[ "transformers", "pytorch", "roberta", "token-classification", "arxiv:2201.07281", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-05-03T16:15:03Z"
--- license: cc-by-nc-4.0 --- ## Model Specification - This is the **state-of-the-art Twitter POS tagging model (with 95.38\% Accuracy)** on Tweebank V2's NER benchmark (also called `Tweebank-NER`), trained on the corpus combining both Tweebank-NER and English-EWT training data. - For more details about the `TweebankNLP` project, please refer to this [our paper](https://arxiv.org/pdf/2201.07281.pdf) and [github](https://github.com/social-machines/TweebankNLP) page. - In the paper, it is referred as `HuggingFace-BERTweet (TB2+EWT)` in the POS table. ## How to use the model - **PRE-PROCESSING**: when you apply the model on tweets, please make sure that tweets are preprocessed by the [TweetTokenizer](https://github.com/VinAIResearch/BERTweet/blob/master/TweetNormalizer.py) to get the best performance. ```python from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("TweebankNLP/bertweet-tb2_ewt-pos-tagging") model = AutoModelForTokenClassification.from_pretrained("TweebankNLP/bertweet-tb2_ewt-pos-tagging") ``` ## References If you use this repository in your research, please kindly cite [our paper](https://arxiv.org/pdf/2201.07281.pdf): ```bibtex @article{jiang2022tweetnlp, title={Annotating the Tweebank Corpus on Named Entity Recognition and Building NLP Models for Social Media Analysis}, author={Jiang, Hang and Hua, Yining and Beeferman, Doug and Roy, Deb}, journal={In Proceedings of the 13th Language Resources and Evaluation Conference (LREC)}, year={2022} } ```
gemmathon/gemma-pro-2.8b-ko-v0
gemmathon
"2024-04-05T05:00:04Z"
2,108
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-05T03:07:45Z"
--- license: other library_name: transformers license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ddh0/Yi-6B-GGUF-fp16
ddh0
"2024-06-27T20:05:41Z"
2,107
1
null
[ "gguf", "text-generation", "license:apache-2.0", "region:us" ]
text-generation
"2023-11-06T07:11:06Z"
--- pipeline_tag: text-generation license: apache-2.0 --- This is 01-ai's [Yi-6B](https://huggingface.co/01-ai/Yi-6B), converted to GGUF without quantization. No other changes were made. The model was converted using `convert.py` from Georgi Gerganov's llama.cpp repo as it appears [here](https://github.com/ggerganov/llama.cpp/blob/898aeca90a9bb992f506234cf3b8b7f7fa28a1df/convert.py) (that is, the last change to the file was in commit `#898aeca`.) All credit belongs to [01-ai](https://huggingface.co/01-ai) for training and releasing this model. Thank you!
meta-llama/CodeLlama-70b-Instruct-hf
meta-llama
"2024-03-14T18:41:22Z"
2,107
7
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-2", "conversational", "code", "arxiv:2308.12950", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T21:50:19Z"
--- extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: >- ### LLAMA 2 COMMUNITY LICENSE AGREEMENT "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity's behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Llama 2" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. "Llama Materials" means, collectively, Meta's proprietary Llama 2 and documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non- transferable and royalty-free limited license under Meta's intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved." iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee's affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta's ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. USE POLICY ### Llama 2 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy). #### Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Llama 2 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [[email protected]](mailto:[email protected]) extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit language: - code pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-2 license: llama2 widget: - example_title: Hello world (Python) messages: - role: system content: You are a helpful and honest code assistant - role: user content: Print a hello world in Python - example_title: Sum of sublists (Python) messages: - role: system content: You are a helpful and honest code assistant expert in JavaScript. Please, provide all answers to programming questions in JavaScript - role: user content: Write a function that computes the set of sums of all contiguous sublists of a given list. inference: parameters: max_new_tokens: 200 stop: - </s> - <step> --- # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B instruct-tuned version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [meta-llama/CodeLlama-7b-hf](https://huggingface.co/meta-llama/CodeLlama-7b-hf) | [meta-llama/CodeLlama-7b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Python-hf) | [meta-llama/CodeLlama-7b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-7b-Instruct-hf) | | 13B | [meta-llama/CodeLlama-13b-hf](https://huggingface.co/meta-llama/CodeLlama-13b-hf) | [meta-llama/CodeLlama-13b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Python-hf) | [meta-llama/CodeLlama-13b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-13b-Instruct-hf) | | 34B | [meta-llama/CodeLlama-34b-hf](https://huggingface.co/meta-llama/CodeLlama-34b-hf) | [meta-llama/CodeLlama-34b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Python-hf) | [meta-llama/CodeLlama-34b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-34b-Instruct-hf) | | 70B | [meta-llama/CodeLlama-70b-hf](https://huggingface.co/meta-llama/CodeLlama-70b-hf) | [meta-llama/CodeLlama-70b-Python-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Python-hf) | [meta-llama/CodeLlama-70b-Instruct-hf](https://huggingface.co/meta-llama/CodeLlama-70b-Instruct-hf) | Model capabilities: - [x] Code completion. - [ ] Infilling. - [x] Instructions / chat. - [ ] Python specialist. ## Model Use To use this model, please make sure to install transformers: ```bash pip install transformers accelerate ``` **Chat use:** The 70B Instruct model uses a [different prompt template](#chat_prompt) than the smaller versions. To use it with `transformers`, we recommend you use the built-in chat template: ```py from transformers import AutoTokenizer, AutoModelForCausalLM import transformers import torch model_id = "meta-llama/CodeLlama-70b-Instruct-hf" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16, device_map="auto", ) chat = [ {"role": "system", "content": "You are a helpful and honest code assistant expert in JavaScript. Please, provide all answers to programming questions in JavaScript"}, {"role": "user", "content": "Write a function that computes the set of sums of all contiguous sublists of a given list."}, ] inputs = tokenizer.apply_chat_template(chat, return_tensors="pt").to("cuda") output = model.generate(input_ids=inputs, max_new_tokens=200) output = output[0].to("cpu") print(tokenizer.decode(output)) ``` You can also use the model for **text or code completion**. This examples uses transformers' `pipeline` interface: ```py from transformers import AutoTokenizer import transformers import torch model_id = "meta-llama/CodeLlama-70b-hf" tokenizer = AutoTokenizer.from_pretrained(model_id) pipeline = transformers.pipeline( "text-generation", model=model_id, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'def fibonacci(', do_sample=True, temperature=0.2, top_p=0.9, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=100, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` <a name="chat_prompt"></a> ## Chat prompt CodeLlama 70B Instruct uses a different format for the chat prompt than previous Llama 2 or CodeLlama models. As mentioned above, the easiest way to use it is with the help of the tokenizer's chat template. If you need to build the string or tokens, manually, here's how to do it. We'll do our tests with the following made-up dialog: ```py chat = [ {"role": "system", "content": "System prompt "}, {"role": "user", "content": "First user query"}, {"role": "assistant", "content": "Model response to first query"}, {"role": "user", "content": "Second user query"}, ] ``` First, let's see what the prompt looks like if we use the chat template: ```py tokenizer.apply_chat_template(chat, tokenize=False) ``` ``` '<s>Source: system\n\n System prompt <step> Source: user\n\n First user query <step> Source: assistant\n\n Model response to first query <step> Source: user\n\n Second user query <step> Source: assistant\nDestination: user\n\n ' ``` So each turn of the conversation has a `Source` (`system`, `user`, or `assistant`), and then the content appears after two newlines and a space. Turns are separated with the special token ` <step> `. After the last turn (which must necessarily come from the `user`), we invite the model to respond by using the special syntax `Source: assistant\nDestination: user\n\n `. Let's see how we can build the same string ourselves: ```py output = "<s>" for m in chat: output += f"Source: {m['role']}\n\n {m['content'].strip()}" output += " <step> " output += "Source: assistant\nDestination: user\n\n " output ``` ``` '<s>Source: system\n\n System prompt <step> Source: user\n\n First user query <step> Source: assistant\n\n Model response to first query <step> Source: user\n\n Second user query <step> Source: assistant\nDestination: user\n\n ' ``` To verify that we got it right, we'll compare against the [reference code in the original GitHub repo](https://github.com/facebookresearch/codellama/blob/1af62e1f43db1fa5140fa43cb828465a603a48f3/llama/generation.py#L506). We used the same dialog and tokenized it with the `dialog_prompt_tokens` function and got the following tokens: ```py reference_tokens = [1, 7562, 29901, 1788, 13, 13, 2184, 9508, 32015, 7562, 29901, 1404, 13, 13, 3824, 1404, 2346, 32015, 7562, 29901, 20255, 13, 13, 8125, 2933, 304, 937, 2346, 32015, 7562, 29901, 1404, 13, 13, 6440, 1404, 2346, 32015, 7562, 29901, 20255, 13, 14994, 3381, 29901, 1404, 13, 13, 29871] ``` Let's see what we get with the string we built using our Python loop. Note that we don't add "special tokens" because the string already starts with `<s>`, the beginning of sentence token: ```py tokens = tokenizer.encode(output, add_special_tokens=False) assert reference_tokens == tokens ``` Similarly, let's verify that the chat template produces the same token sequence: ```py assert reference_tokens == tokenizer.apply_chat_template(chat) ``` As a final detail, please note that if the dialog does not start with a `system` turn, the [original code will insert one with an empty content string](https://github.com/facebookresearch/codellama/blob/1af62e1f43db1fa5140fa43cb828465a603a48f3/llama/generation.py#L418). ## Model Details *Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in four model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B, and 70B parameters. **This repository contains the Instruct version of the 70B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. It was fine-tuned with up to 16k tokens. This variant **does not** support long context of up to 100k tokens. **Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
jondurbin/airoboros-l2-13b-gpt4-m2.0
jondurbin
"2023-08-14T10:09:50Z"
2,105
28
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-m2.0", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-28T06:20:23Z"
--- license: other datasets: - jondurbin/airoboros-gpt4-m2.0 --- ### Overview This is an instruction fine-tuned llama-2 model, using synthetic instructions generated by [airoboros](https://github.com/jondurbin/airoboros) - The 2.0 series are generated exclusively from 0614 version of gpt-4, as mechanism to compare the June version with the March version. - The m2.0 series have the 1.4.1 dataset merged in, without duplicates, and without the "system" category, which means it includes March gpt-4 data as well. - 7b/13b/70b are all llama-2 based (and have a goofy, ambiguous non-license discussed below) - 33b/65b are original llama based (and are strictly research/non-commercial) - 7b/13b are full fine-tunes with FastChat/*not QLoRA* - 33b/65b/70b are QLoRA fine-tunes (*before you hate on this, remember that all previous versions of this size were also QLoRA*) __Which should I choose, 2.0 or m2.0?__ I have no idea, try them both and see which is better. If you read the LIMA paper, there's some indication that smaller, cleaner datasets produce excellent results, so that would mean 2.0 is probably a better choice. If you really enjoyed 1.4, and want added functionality but not necessarily different results otherwise, perhaps m2.0. ### Prompt format ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). Why the "regardless of ..." part? - laws vary widely based on time and location - language model may conflate certain words with laws, e.g. it may think "stealing eggs from a chicken" is illegal - these models just produce text, what you do with that text is your resonsibility - many people and industries deal with "sensitive" content; imagine if a court stenographer's eqipment filtered illegal content - it would be useless So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ### Dataset Dataset links: - 2.0 series https://hf.co/datasets/jondurbin/airoboros-gpt4-2.0 - merged/m2.0 series https://hf.co/datasets/jondurbin/airoboros-gpt4-m2.0 Dataset creation details/configuration: https://gist.github.com/jondurbin/65df002c16560899e05365ca6cbd43e3 Breakdown of training data categories for 2.0/m2.0 datasets: ![categories](categories.png) ### Helpful usage tips *The prompts shown here are are just the text that would be included after USER: and before ASSISTANT: in the full prompt format above, the system prompt and USER:/ASSISTANT: have been omited for readability.* #### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT [key0: value0] [key1: value1] ... other metdata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed context formating are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. ``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both. Here are the tools available to be called: DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer. TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links. HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links. KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:? The output should be in format: Plan: [first action to take based on input question] :evidence0: = [function to call with input parameter] Plan: [next action to take, based on result of :evidence0:] :evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"] ... Answer: [:evidence[n]: containing the final answer.] Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work? ``` Response: ``` Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020. :evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner] Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate. :evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?] Plan: Conduct another web search to find the most famous work of the identified laureate. :evidence2: = DuckDuckGo[Most famous work of :evidence1:] Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search. :evidence3: = HyperlinkExtractor[:evidence2:] Plan: Use the TextScraper tool to extract information from the relevant links. :evidence4: = TextScraper[:evidence3:] Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information. :evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?] Answer: :evidence5: ``` For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would requiring full implementation + hardening: ```python import re import requests def inject_context(input_text, **context): for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)): input_text = input_text.replace(ref, context.get(ref, "")) return input_text def duckduckgo(input_text, **context): search_string = inject_context(input_text, **context) ... search via duck duck go using search_string ... return text content def link_extractor(input_text, **context): input_text = inject_context(input_text, **context) return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I)))) def scrape(input_text, **context): input_text = inject_context(input_text, **context) text = [] for link in input_text.splitlines(): text.append(requests.get(link).text) return "\n".join(text) def infer(input_text, **context) prompt = inject_context(input_text, **context) ... call model with prompt, return output def parse_plan(plan): method_map = { "DuckDuckGo": duckduckgo, "HyperlinkExtractor": link_extractor, "KnowledgeModel": infer, "TextScraper": scrape, } context = {} for line in plan.strip().splitlines(): if line.startswith("Plan:"): print(line) continue parts = re.match("^(:evidence[0-9]+:)\s*=\s*([^\[]+])(\[.*\])\s$", line, re.I) if not parts: if line.startswith("Answer: "): return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...") raise RuntimeError("bad format: " + line) context[parts.group(1)] = method_map[parts.group(2)](parts.group(3), **context) ``` ### Contribute If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details. To help me with the OpenAI/compute costs: - https://bmc.link/jondurbin - ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11 - BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf ### Licence and usage restrictions The airoboros 2.0/m2.0 models are built on top of either llama or llama-2. Any model with `-l2-` in the name uses llama2, `..-33b-...` and `...-65b-...` are based on the original llama. #### Llama (original) models If the model was based on the original llama (33b/65b), the license is __cc-by-nc-4.0__ and is for research/academic use only -- no commercial usage whatsoever! #### Llama-2 models Base model has a custom Meta license: - See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta. - See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta. The fine-tuning data was generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros) The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely idnemnify me.
TheBloke/Thespis-Mistral-7B-v0.6-GGUF
TheBloke
"2023-11-15T22:27:00Z"
2,104
4
transformers
[ "transformers", "gguf", "mistral", "not-for-all-audiences", "base_model:cgato/Thespis-Mistral-7b-v0.6", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
"2023-11-15T22:10:47Z"
--- base_model: cgato/Thespis-Mistral-7b-v0.6 inference: false license: apache-2.0 model_creator: c.gato model_name: Thespis Mistral 7B v0.6 model_type: mistral prompt_template: "{system_message}\n\nUsername: {prompt}\nBotName: \n" quantized_by: TheBloke tags: - not-for-all-audiences --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Thespis Mistral 7B v0.6 - GGUF - Model creator: [c.gato](https://huggingface.co/cgato) - Original model: [Thespis Mistral 7B v0.6](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.6) <!-- description start --> ## Description This repo contains GGUF format model files for [c.gato's Thespis Mistral 7B v0.6](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.6). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF) * [c.gato's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cgato/Thespis-Mistral-7b-v0.6) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Thespis ``` {system_message} Username: {prompt} BotName: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [thespis-mistral-7b-v0.6.Q2_K.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [thespis-mistral-7b-v0.6.Q3_K_S.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [thespis-mistral-7b-v0.6.Q3_K_M.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [thespis-mistral-7b-v0.6.Q3_K_L.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [thespis-mistral-7b-v0.6.Q4_0.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [thespis-mistral-7b-v0.6.Q4_K_S.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [thespis-mistral-7b-v0.6.Q4_K_M.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [thespis-mistral-7b-v0.6.Q5_0.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [thespis-mistral-7b-v0.6.Q5_K_S.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [thespis-mistral-7b-v0.6.Q5_K_M.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [thespis-mistral-7b-v0.6.Q6_K.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [thespis-mistral-7b-v0.6.Q8_0.gguf](https://huggingface.co/TheBloke/Thespis-Mistral-7B-v0.6-GGUF/blob/main/thespis-mistral-7b-v0.6.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Thespis-Mistral-7B-v0.6-GGUF and below it, a specific filename to download, such as: thespis-mistral-7b-v0.6.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Thespis-Mistral-7B-v0.6-GGUF thespis-mistral-7b-v0.6.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Thespis-Mistral-7B-v0.6-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Thespis-Mistral-7B-v0.6-GGUF thespis-mistral-7b-v0.6.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m thespis-mistral-7b-v0.6.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{system_message}\n\nUsername: {prompt}\nBotName:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Thespis-Mistral-7B-v0.6-GGUF", model_file="thespis-mistral-7b-v0.6.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: c.gato's Thespis Mistral 7B v0.6 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/ZXmxNKGaHUrqjdS1I3GkL.png) ## Mistral 7b Version of v0.6 ( with some extras ) This version has some additional data vs the llama2 13b version. Making it something of a 0.6 and a half or something. But that's confusing. The 7b version of my Thespis finetune. From my testing it seems to perform somewhere between llama 7b and llama 13b. This model is a personal project. It uses a vanilla chat template and is focused on providing multiturn sfw and nsfw RP experience. This model works best with internet style RP using standard markup with asterisks surrounding actions and no quotes around dialogue. It uses the following data: * 2200 samples from Pure-Dove Dataset ( 90 token length or greater. ) * 2200 samples from Claude Multiround 30k ( 90 token length or greater. ) * 700 samples from Airoboros 3.1 ( Writing samples longer than 1500 tokens only. ) * 900 samples from the Augmental Dataset ( 90 token length or greater ) * 6000 samples of hand curated RP conversation with various characters. Works with standard chat format for Ooba or SillyTavern. ## Prompt Format: Chat ( The default Ooba template and Silly Tavern Template ) ``` {System Prompt} Username: {Input} BotName: {Response} Username: {Input} BotName: {Response} ``` ## Ooba ( Set it to Chat, select a character and go. ) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/HTl7QlAZcqe2hV8rwh4DG.png) ## Silly Tavern Settings ( Default ) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64dd7cda3d6b954bf7cdd922/ajny8P0LdW0nFtghpPbfB.png) ## Turn Template (for Ooba Instruct if making a Discord bot or Some other Many to one Chat): You can either bake usernames into the prompt directly for ease of use or programatically add them if running through the API to use as a chatbot. ``` User string: ( Leave empty if populating username into prompt through a script. Put in your username if its a 1 on 1 convo.) Ex. "DiscordUser1: " Bot String: ( The bots name, followed by a colon and a space.) Ex. "Mayo: " Context: ( Your bots system prompt, follow by a newline. ) <|user|><|user-message|>\n<|bot|><|bot-message|>\n ``` <!-- original-model-card end -->
ibivibiv/multimaster-7b-v6
ibivibiv
"2024-04-19T23:26:11Z"
2,104
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "arxiv:1803.05457", "arxiv:1905.07830", "arxiv:2009.03300", "arxiv:2109.07958", "arxiv:1907.10641", "arxiv:2110.14168", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-24T05:49:50Z"
--- language: - en license: apache-2.0 library_name: transformers model-index: - name: multimaster-7b-v6 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.78 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.77 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 70.89 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 86.42 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 70.36 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibivibiv/multimaster-7b-v6 name: Open LLM Leaderboard --- # Multi Master 7Bx5 v6 ![img](./multimaster.png) A quick multi-disciplinary moe model. This is part of a series of models built to test the gate tuning for mixtral style moe models. # Prompting ## Prompt Template for alpaca style ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## Sample Code ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer torch.set_default_device("cuda") model = AutoModelForCausalLM.from_pretrained("ibivibiv/multimaster-7b-v6", torch_dtype="auto", device_config='auto') tokenizer = AutoTokenizer.from_pretrained("ibivibiv/multimaster-7b-v6") inputs = tokenizer("### Instruction: Who would when in an arm wrestling match between Abraham Lincoln and Chuck Norris?\nA. Abraham Lincoln \nB. Chuck Norris\n### Response:\n", return_tensors="pt", return_attention_mask=False) outputs = model.generate(**inputs, max_length=200) text = tokenizer.batch_decode(outputs)[0] print(text) ``` # Model Details * **Trained by**: [ibivibiv](https://huggingface.co/ibivibiv) * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers) * **Model type:** **multimaster-7b** is a lora tuned version of openchat/openchat-3.5-0106 with the adapter merged back into the main model * **Language(s)**: English * **Purpose**: This model is a focus on multi-disciplinary model tuning # Benchmark Scores coming soon ## Citations ``` @misc{open-llm-leaderboard, author = {Edward Beeching and Clémentine Fourrier and Nathan Habib and Sheon Han and Nathan Lambert and Nazneen Rajani and Omar Sanseviero and Lewis Tunstall and Thomas Wolf}, title = {Open LLM Leaderboard}, year = {2023}, publisher = {Hugging Face}, howpublished = "\url{https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard}" } ``` ``` @software{eval-harness, author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, title = {A framework for few-shot language model evaluation}, month = sep, year = 2021, publisher = {Zenodo}, version = {v0.0.1}, doi = {10.5281/zenodo.5371628}, url = {https://doi.org/10.5281/zenodo.5371628} } ``` ``` @misc{clark2018think, title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge}, author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord}, year={2018}, eprint={1803.05457}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` ``` @misc{zellers2019hellaswag, title={HellaSwag: Can a Machine Really Finish Your Sentence?}, author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi}, year={2019}, eprint={1905.07830}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{hendrycks2021measuring, title={Measuring Massive Multitask Language Understanding}, author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt}, year={2021}, eprint={2009.03300}, archivePrefix={arXiv}, primaryClass={cs.CY} } ``` ``` @misc{lin2022truthfulqa, title={TruthfulQA: Measuring How Models Mimic Human Falsehoods}, author={Stephanie Lin and Jacob Hilton and Owain Evans}, year={2022}, eprint={2109.07958}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{DBLP:journals/corr/abs-1907-10641, title={{WINOGRANDE:} An Adversarial Winograd Schema Challenge at Scale}, author={Keisuke Sakaguchi and Ronan Le Bras and Chandra Bhagavatula and Yejin Choi}, year={2019}, eprint={1907.10641}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @misc{DBLP:journals/corr/abs-2110-14168, title={Training Verifiers to Solve Math Word Problems}, author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Mark Chen and Heewoo Jun and Lukasz Kaiser and Matthias Plappert and Jerry Tworek and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman}, year={2021}, eprint={2110.14168}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibivibiv__multimaster-7b-v6) | Metric |Value| |---------------------------------|----:| |Avg. |75.66| |AI2 Reasoning Challenge (25-Shot)|72.78| |HellaSwag (10-Shot) |88.77| |MMLU (5-Shot) |64.74| |TruthfulQA (0-shot) |70.89| |Winogrande (5-shot) |86.42| |GSM8k (5-shot) |70.36|
cognitivecomputations/WizardLM-30B-Uncensored
cognitivecomputations
"2024-03-04T16:03:43Z"
2,102
138
transformers
[ "transformers", "pytorch", "llama", "text-generation", "uncensored", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "license:other", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-22T11:53:45Z"
--- license: other tags: - uncensored datasets: - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered model-index: - name: WizardLM-30B-Uncensored results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 60.24 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 82.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 56.8 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 51.57 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.35 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 12.89 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/WizardLM-30B-Uncensored name: Open LLM Leaderboard --- This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA. Shout out to the open source AI/ML community, and everyone who helped me out. Note: An uncensored model has no guardrails. You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-30B-Uncensored) | Metric | Value | |-----------------------|---------------------------| | Avg. | 52.32 | | ARC (25-shot) | 60.24 | | HellaSwag (10-shot) | 82.93 | | MMLU (5-shot) | 56.8 | | TruthfulQA (0-shot) | 51.57 | | Winogrande (5-shot) | 74.35 | | GSM8K (5-shot) | 12.89 | | DROP (3-shot) | 27.45 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__WizardLM-30B-Uncensored) | Metric |Value| |---------------------------------|----:| |Avg. |56.46| |AI2 Reasoning Challenge (25-Shot)|60.24| |HellaSwag (10-Shot) |82.93| |MMLU (5-Shot) |56.80| |TruthfulQA (0-shot) |51.57| |Winogrande (5-shot) |74.35| |GSM8k (5-shot) |12.89|
RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf
RichardErkhov
"2024-06-15T04:34:58Z"
2,102
0
null
[ "gguf", "region:us" ]
null
"2024-06-15T03:39:37Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Quantum-Citrus-9B - GGUF - Model creator: https://huggingface.co/ABX-AI/ - Original model: https://huggingface.co/ABX-AI/Quantum-Citrus-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Quantum-Citrus-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q2_K.gguf) | Q2_K | 3.13GB | | [Quantum-Citrus-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [Quantum-Citrus-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.IQ3_S.gguf) | IQ3_S | 3.67GB | | [Quantum-Citrus-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [Quantum-Citrus-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.IQ3_M.gguf) | IQ3_M | 3.79GB | | [Quantum-Citrus-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q3_K.gguf) | Q3_K | 4.05GB | | [Quantum-Citrus-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [Quantum-Citrus-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [Quantum-Citrus-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [Quantum-Citrus-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q4_0.gguf) | Q4_0 | 4.74GB | | [Quantum-Citrus-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [Quantum-Citrus-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [Quantum-Citrus-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q4_K.gguf) | Q4_K | 5.04GB | | [Quantum-Citrus-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [Quantum-Citrus-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q4_1.gguf) | Q4_1 | 5.26GB | | [Quantum-Citrus-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q5_0.gguf) | Q5_0 | 5.77GB | | [Quantum-Citrus-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [Quantum-Citrus-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q5_K.gguf) | Q5_K | 5.93GB | | [Quantum-Citrus-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [Quantum-Citrus-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q5_1.gguf) | Q5_1 | 6.29GB | | [Quantum-Citrus-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q6_K.gguf) | Q6_K | 6.87GB | | [Quantum-Citrus-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf/blob/main/Quantum-Citrus-9B.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- license: other library_name: transformers tags: - mergekit - merge - mistral - not-for-all-audiences base_model: - ABX-AI/Cerebral-Infinity-7B - ABX-AI/Starfinite-Laymospice-v2-7B model-index: - name: Quantum-Citrus-9B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 65.19 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.75 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.58 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 55.96 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 79.4 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 50.57 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B name: Open LLM Leaderboard --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/J0snW3yfLadLuMYERL6X5.png) # Quantum-Citrus-9B This merge is another attempt at making and intelligent, refined and unaligned model. Based on my tests so far, it has accomplished the goals, and I am continuing to experiment with my interactions with it. It includes previous merges of Starling, Cerebrum, LemonadeRP, InfinityRP, and deep down has a base of layla v0.1, as I am not that happy with the result form using v0.2. The model is intended for fictional storytelling and roleplaying and may not be intended for all audences. [GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Quantum-Citrus-9B-GGUF-IQ-Imatrix) ## Merge Details This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * ABX-AI/Starfinite-Laymospice-v2-7B * ABX-AI/Cerebral-Infinity-7B ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: ABX-AI/Cerebral-Infinity-7B layer_range: [0, 20] - sources: - model: ABX-AI/Starfinite-Laymospice-v2-7B layer_range: [12, 32] merge_method: passthrough dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Quantum-Citrus-9B) | Metric |Value| |---------------------------------|----:| |Avg. |66.74| |AI2 Reasoning Challenge (25-Shot)|65.19| |HellaSwag (10-Shot) |84.75| |MMLU (5-Shot) |64.58| |TruthfulQA (0-shot) |55.96| |Winogrande (5-shot) |79.40| |GSM8k (5-shot) |50.57|
xinsir/controlnet-depth-sdxl-1.0
xinsir
"2024-06-27T02:19:06Z"
2,102
15
diffusers
[ "diffusers", "safetensors", "license:apache-2.0", "region:us" ]
null
"2024-06-26T15:14:36Z"
--- license: apache-2.0 --- # ***ControlNet Depth SDXL, support zoe, midias*** ![images](./masonry.webp) # Example ![images_0)](./000000_depth_concat.webp) ![images_1)](./000003_depth_concat.webp) ![images_2)](./000005_depth_concat.webp) ![images_3)](./000006_depth_concat.webp) ![images_4)](./000014_depth_concat.webp) ![images_5)](./000015_depth_concat.webp) ![images_6)](./000022_depth_concat.webp) ![images_7)](./000023_depth_concat.webp) ![images_8)](./000026_depth_concat.webp) ![images_9)](./000031_depth_concat.webp) # How to use it ```python from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler from PIL import Image import torch import random import numpy as np import cv2 from controlnet_aux import MidasDetector, ZoeDetector processor_zoe = ZoeDetector.from_pretrained('lllyasviel/ControlNet') processor_midas = MidasDetector.from_pretrained('lllyasviel/ControlNet') controlnet_conditioning_scale = 1.0 prompt = "your prompt, the longer the better, you can describe it as detail as possible" negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler") controlnet = ControlNetModel.from_pretrained( "xinsir/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16 ) # when test with other base model, you need to change the vae also. vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16) pipe = StableDiffusionXLControlNetPipeline.from_pretrained( "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, safety_checker=None, torch_dtype=torch.float16, scheduler=eulera_scheduler, ) # need to resize the image resolution to 1024 * 1024 or same bucket resolution to get the best performance img = cv2.imread("your original image path") if random.random() > 0.5: controlnet_img = processor_zoe(img, output_type='cv2') else: controlnet_img = processor_midas(img, output_type='cv2') height, width, _ = controlnet_img.shape ratio = np.sqrt(1024. * 1024. / (width * height)) new_width, new_height = int(width * ratio), int(height * ratio) controlnet_img = cv2.resize(controlnet_img, (new_width, new_height)) controlnet_img = Image.fromarray(controlnet_img) images = pipe( prompt, negative_prompt=negative_prompt, image=controlnet_img, controlnet_conditioning_scale=controlnet_conditioning_scale, width=new_width, height=new_height, num_inference_steps=30, ).images images[0].save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger") ```
swl-models/toooajk-yagurumagiku-v3-dreambooth
swl-models
"2023-02-01T01:47:41Z"
2,101
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-02-01T01:26:25Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers ---
liuhaotian/LLaVA-Lightning-MPT-7B-preview
liuhaotian
"2023-11-05T02:04:28Z"
2,100
51
transformers
[ "transformers", "pytorch", "llava_mpt", "text-generation", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "region:us" ]
text-generation
"2023-05-06T15:36:58Z"
--- license: cc-by-nc-sa-4.0 inference: false --- **NOTE: This is a research preview of the LLaVA-Lightning based on MPT-7B-chat checkpoint. The usage of the model should comply with MPT-7B-chat license and agreements.** **NOTE: Unlike other LLaVA models, this model can (should) be used directly without delta weights conversion!** <br> <br> # LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna/MPT on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LLaVA-Lightning-MPT was trained in May 2023. **Paper or resources for more information:** https://llava-vl.github.io/ **License:** CC-BY-NC-SA 4.0 **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. 80K GPT-generated multimodal instruction-following data. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs. We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset. See https://llava-vl.github.io/ for more details.
RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf
RichardErkhov
"2024-06-17T08:16:17Z"
2,100
0
null
[ "gguf", "region:us" ]
null
"2024-06-17T06:48:29Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Copium-Cola-9B - GGUF - Model creator: https://huggingface.co/Nitral-AI/ - Original model: https://huggingface.co/Nitral-AI/Copium-Cola-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Copium-Cola-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q2_K.gguf) | Q2_K | 3.13GB | | [Copium-Cola-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [Copium-Cola-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.IQ3_S.gguf) | IQ3_S | 3.67GB | | [Copium-Cola-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [Copium-Cola-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.IQ3_M.gguf) | IQ3_M | 3.79GB | | [Copium-Cola-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q3_K.gguf) | Q3_K | 4.05GB | | [Copium-Cola-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [Copium-Cola-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [Copium-Cola-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [Copium-Cola-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q4_0.gguf) | Q4_0 | 4.74GB | | [Copium-Cola-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [Copium-Cola-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [Copium-Cola-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q4_K.gguf) | Q4_K | 5.04GB | | [Copium-Cola-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [Copium-Cola-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q4_1.gguf) | Q4_1 | 5.26GB | | [Copium-Cola-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q5_0.gguf) | Q5_0 | 5.77GB | | [Copium-Cola-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [Copium-Cola-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q5_K.gguf) | Q5_K | 5.93GB | | [Copium-Cola-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [Copium-Cola-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q5_1.gguf) | Q5_1 | 6.29GB | | [Copium-Cola-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q6_K.gguf) | Q6_K | 6.87GB | | [Copium-Cola-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Nitral-AI_-_Copium-Cola-9B-gguf/blob/main/Copium-Cola-9B.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- base_model: - ChaoticNeutrals/Eris_7B library_name: transformers tags: - mergekit - merge license: other --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/rEj8wf7Vkq_Lf8H30uE-J.png) This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [ChaoticNeutrals/Eris_7B](https://huggingface.co/ChaoticNeutrals/Eris_7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: ChaoticNeutrals/Eris_7B layer_range: [0, 20] - sources: - model: ChaoticNeutrals/Eris_7B layer_range: [12, 32] merge_method: passthrough dtype: float16 ```
legraphista/K2-IMat-GGUF
legraphista
"2024-05-31T15:34:17Z"
2,099
2
gguf
[ "gguf", "nlp", "llm", "quantized", "GGUF", "imatrix", "quantization", "imat", "static", "16bit", "8bit", "6bit", "5bit", "4bit", "3bit", "2bit", "1bit", "text-generation", "en", "base_model:LLM360/K2", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-31T08:59:15Z"
--- base_model: LLM360/K2 inference: false language: - en library_name: gguf license: apache-2.0 pipeline_tag: text-generation quantized_by: legraphista tags: - nlp - llm - quantized - GGUF - imatrix - quantization - imat - imatrix - static - 16bit - 8bit - 6bit - 5bit - 4bit - 3bit - 2bit - 1bit --- # K2-IMat-GGUF _Llama.cpp imatrix quantization of LLM360/K2_ Original Model: [LLM360/K2](https://huggingface.co/LLM360/K2) Original dtype: `FP16` (`float16`) Quantized by: llama.cpp [b3051](https://github.com/ggerganov/llama.cpp/releases/tag/b3051) IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt) - [Files](#files) - [IMatrix](#imatrix) - [Common Quants](#common-quants) - [All Quants](#all-quants) - [Downloading using huggingface-cli](#downloading-using-huggingface-cli) - [Inference](#inference) - [Llama.cpp](#llama-cpp) - [FAQ](#faq) - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere) - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf) --- ## Files ### IMatrix Status: ✅ Available Link: [here](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/imatrix.dat) ### Common Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [K2.Q8_0/*](https://huggingface.co/legraphista/K2-IMat-GGUF/tree/main/K2.Q8_0) | Q8_0 | 69.37GB | ✅ Available | ⚪ Static | ✂ Yes | [K2.Q6_K/*](https://huggingface.co/legraphista/K2-IMat-GGUF/tree/main/K2.Q6_K) | Q6_K | 53.56GB | ✅ Available | ⚪ Static | ✂ Yes | [K2.Q4_K.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q4_K.gguf) | Q4_K | 39.35GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q3_K.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q3_K.gguf) | Q3_K | 31.63GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q2_K.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q2_K.gguf) | Q2_K | 24.11GB | ✅ Available | 🟢 IMatrix | 📦 No ### All Quants | Filename | Quant type | File Size | Status | Uses IMatrix | Is Split | | -------- | ---------- | --------- | ------ | ------------ | -------- | | [K2.FP16/*](https://huggingface.co/legraphista/K2-IMat-GGUF/tree/main/K2.FP16) | F16 | 130.58GB | ✅ Available | ⚪ Static | ✂ Yes | [K2.Q8_0/*](https://huggingface.co/legraphista/K2-IMat-GGUF/tree/main/K2.Q8_0) | Q8_0 | 69.37GB | ✅ Available | ⚪ Static | ✂ Yes | [K2.Q6_K/*](https://huggingface.co/legraphista/K2-IMat-GGUF/tree/main/K2.Q6_K) | Q6_K | 53.56GB | ✅ Available | ⚪ Static | ✂ Yes | [K2.Q5_K/*](https://huggingface.co/legraphista/K2-IMat-GGUF/tree/main/K2.Q5_K) | Q5_K | 46.24GB | ✅ Available | ⚪ Static | ✂ Yes | [K2.Q5_K_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q5_K_S.gguf) | Q5_K_S | 44.92GB | ✅ Available | ⚪ Static | 📦 No | [K2.Q4_K.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q4_K.gguf) | Q4_K | 39.35GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q4_K_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q4_K_S.gguf) | Q4_K_S | 37.06GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ4_NL.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ4_NL.gguf) | IQ4_NL | 36.80GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ4_XS.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ4_XS.gguf) | IQ4_XS | 34.76GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q3_K.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q3_K.gguf) | Q3_K | 31.63GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q3_K_L.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q3_K_L.gguf) | Q3_K_L | 34.65GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q3_K_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q3_K_S.gguf) | Q3_K_S | 28.16GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ3_M.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ3_M.gguf) | IQ3_M | 29.83GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ3_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ3_S.gguf) | IQ3_S | 28.16GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ3_XS.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ3_XS.gguf) | IQ3_XS | 26.64GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ3_XXS.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ3_XXS.gguf) | IQ3_XXS | 24.67GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q2_K.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q2_K.gguf) | Q2_K | 24.11GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.Q2_K_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.Q2_K_S.gguf) | Q2_K_S | 21.98GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ2_M.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ2_M.gguf) | IQ2_M | 22.41GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ2_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ2_S.gguf) | IQ2_S | 20.78GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ2_XS.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ2_XS.gguf) | IQ2_XS | 19.27GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ2_XXS.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ2_XXS.gguf) | IQ2_XXS | 17.47GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ1_M.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ1_M.gguf) | IQ1_M | 15.43GB | ✅ Available | 🟢 IMatrix | 📦 No | [K2.IQ1_S.gguf](https://huggingface.co/legraphista/K2-IMat-GGUF/blob/main/K2.IQ1_S.gguf) | IQ1_S | 14.21GB | ✅ Available | 🟢 IMatrix | 📦 No ## Downloading using huggingface-cli If you do not have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Download the specific file you want: ``` huggingface-cli download legraphista/K2-IMat-GGUF --include "K2.Q8_0.gguf" --local-dir ./ ``` If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download legraphista/K2-IMat-GGUF --include "K2.Q8_0/*" --local-dir ./ # see FAQ for merging GGUF's ``` --- ## Inference ### Llama.cpp ``` llama.cpp/main -m K2.Q8_0.gguf --color -i -p "prompt here" ``` --- ## FAQ ### Why is the IMatrix not applied everywhere? According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results). ### How do I merge a split GGUF? 1. Make sure you have `gguf-split` available - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases - Download the appropriate zip for your system from the latest release - Unzip the archive and you should be able to find `gguf-split` 2. Locate your GGUF chunks folder (ex: `K2.Q8_0`) 3. Run `gguf-split --merge K2.Q8_0/K2.Q8_0-00001-of-XXXXX.gguf K2.Q8_0.gguf` - Make sure to point `gguf-split` to the first chunk of the split. --- Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!
digiplay/ValMix2-byHemlok
digiplay
"2024-05-15T17:54:24Z"
2,098
4
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-05-04T05:38:27Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: Beautiful anime model by Hemlok... more detailed : https://huggingface.co/Hemlok/VaLMix Sample image generated by AUTOMATIC1111 : ![tmpb_iej4j5 (1).png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/6-pbk7dL_PG5ff9Z6EWf9.png)
Gameselo/STS-multilingual-mpnet-base-v2
Gameselo
"2024-06-12T15:04:59Z"
2,098
1
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "mteb", "sentence-similarity", "feature-extraction", "dataset_size:100K<n<1M", "loss:AnglELoss", "arxiv:1908.10084", "arxiv:2309.12871", "base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2024-06-07T15:06:34Z"
--- language: [] library_name: sentence-transformers tags: - mteb - sentence-transformers - sentence-similarity - feature-extraction - dataset_size:100K<n<1M - loss:AnglELoss base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2 widget: - source_sentence: 有些人在路上溜达。 sentences: - Folk går - Otururken gitar çalan adam. - ארה"ב קבעה שסוריה השתמשה בנשק כימי - source_sentence: 緬甸以前稱為緬甸。 sentences: - 缅甸以前叫缅甸。 - This is very contradictory. - 한 남자가 아기를 안고 의자에 앉아 잠들어 있다. - source_sentence: אדם כותב. sentences: - האדם כותב. - questa non è una risposta. - 7 שוטרים נהרגו ו-4 שוטרים נפצעו. - source_sentence: הם מפחדים. sentences: - liên quan đến rủi ro đáng kể; - A man is playing a guitar. - A man is playing a piano. - source_sentence: 一个女人正在洗澡。 sentences: - A woman is taking a bath. - En jente børster håret sitt - אדם מחלק תפוח אדמה. pipeline_tag: sentence-similarity model-index: - name: Gameselo/STS-multilingual-mpnet-base-v2 results: - dataset: config: it name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6847049462613332 task: type: STS - dataset: config: es name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6620948502618977 task: type: STS - dataset: config: fr name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.7875616631597785 task: type: STS - dataset: config: pl-en name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.7510805416538202 task: type: STS - dataset: config: ar name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6265329479575293 task: type: STS - dataset: config: pl name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.4335552432730643 task: type: STS - dataset: config: de name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.5774252131250034 task: type: STS - dataset: config: tr name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6383757017928495 task: type: STS - dataset: config: es-it name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6624635951676386 task: type: STS - dataset: config: ru name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.5866853707548388 task: type: STS - dataset: config: en name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6385354535483773 task: type: STS - dataset: config: zh-en name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6537294853166558 task: type: STS - dataset: config: zh name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.6319430830291571 task: type: STS - dataset: config: fr-pl name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.8451542547285167 task: type: STS - dataset: config: de-fr name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.5798716781400349 task: type: STS - dataset: config: es-en name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.7518021273920814 task: type: STS - dataset: config: de-en name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.5749790581441845 task: type: STS - dataset: config: de-pl name: MTEB STS22 revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 split: test type: mteb/sts22-crosslingual-sts metrics: - type: cosine_spearman value: 0.44220332625465214 task: type: STS - dataset: config: default name: MTEB STSBenchmark revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831 split: test type: mteb/stsbenchmark-sts metrics: - type: cosine_spearman value: 0.9762486352335524 task: type: STS - dataset: config: en-tr name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.7987027653005363 task: type: STS - dataset: config: ko-ko name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.9766336939338607 task: type: STS - dataset: config: fr-en name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.9067607122592818 task: type: STS - dataset: config: en-ar name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.7703365842088069 task: type: STS - dataset: config: nl-en name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.9114826394926738 task: type: STS - dataset: config: it-en name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.9246785886944904 task: type: STS - dataset: config: ar-ar name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.8124393788492182 task: type: STS - dataset: config: es-es name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.872701191632785 task: type: STS - dataset: config: en-de name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.9109414091487618 task: type: STS - dataset: config: es-en name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.8553203530552356 task: type: STS - dataset: config: en-en name: MTEB STS17 revision: faeb762787bd10488a50c8b5be4a3b82e411949c split: test type: mteb/sts17-crosslingual-sts metrics: - type: cosine_spearman value: 0.9378741534997558 task: type: STS --- ## State-of-the-Art Results Comparison (MTEB STS Multilingual Leaderboard) | Dataset | State-of-the-art (Multi) | STSb-XLM-RoBERTa-base | STS Multilingual MPNet base v2 | |-------------------|--------------------------|-----------------------|--------------------------------------| | Average | 73.17 | 71.68 | **73.89** | | STS17 (ar-ar) | **81.87** | 80.43 | 81.24 | | STS17 (en-ar) | **81.22** | 76.3 | 77.03 | | STS17 (en-de) | 87.3 | 91.06 | **91.09** | | STS17 (en-tr) | 77.18 | **80.74** | 79.87 | | STS17 (es-en) | **88.24** | 83.09 | 85.53 | | STS17 (es-es) | **88.25** | 84.16 | 87.27 | | STS17 (fr-en) | 88.06 | **91.33** | 90.68 | | STS17 (it-en) | 89.68 | **92.87** | 92.47 | | STS17 (ko-ko) | 83.69 | **97.67** | 97.66 | | STS17 (nl-en) | 88.25 | **92.13** | 91.15 | | STS22 (ar) | 58.67 | 58.67 | **62.66** | | STS22 (de) | **60.12** | 52.17 | 57.74 | | STS22 (de-en) | **60.92** | 58.5 | 57.5 | | STS22 (de-fr) | **67.79** | 51.28 | 57.99 | | STS22 (de-pl) | **58.69** | 44.56 | 44.22 | | STS22 (es) | **68.57** | 63.68 | 66.21 | | STS22 (es-en) | **78.8** | 70.65 | 75.18 | | STS22 (es-it) | **75.04** | 60.88 | 66.25 | | STS22 (fr) | **83.75** | 76.46 | 78.76 | | STS22 (fr-pl) | 84.52 | 84.52 | **84.52** | | STS22 (it) | **79.28** | 66.73 | 68.47 | | STS22 (pl) | 42.08 | 41.18 | **43.36** | | STS22 (pl-en) | **77.5** | 64.35 | 75.11 | | STS22 (ru) | **61.71** | 58.59 | 58.67 | | STS22 (tr) | **68.72** | 57.52 | 63.84 | | STS22 (zh-en) | **71.88** | 60.69 | 65.37 | | STSb | 89.86 | 95.05 | **95.15** | **Bold** indicates the best result in each row. # SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 79f2382ceacceacdf38563d7c5d16b9ff8d725d6 --> - **Maximum Sequence Length:** 128 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("Gameselo/STS-multilingual-mpnet-base-v2") # Run inference sentences = [ '一个女人正在洗澡。', 'A woman is taking a bath.', 'En jente børster håret sitt', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `sts-dev` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.9551 | | **spearman_cosine** | **0.9593** | | pearson_manhattan | 0.927 | | spearman_manhattan | 0.9383 | | pearson_euclidean | 0.9278 | | spearman_euclidean | 0.9394 | | pearson_dot | 0.876 | | spearman_dot | 0.8865 | | pearson_max | 0.9551 | | spearman_max | 0.9593 | #### Evalutation results vs SOTA results * Dataset: `sts-test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:--------------------|:-----------| | pearson_cosine | 0.948 | | **spearman_cosine** | **0.9515** | | pearson_manhattan | 0.9252 | | spearman_manhattan | 0.9352 | | pearson_euclidean | 0.9258 | | spearman_euclidean | 0.9364 | | pearson_dot | 0.8443 | | spearman_dot | 0.8435 | | pearson_max | 0.948 | | spearman_max | 0.9515 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 226,547 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | label | |:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 3 tokens</li><li>mean: 20.05 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.94 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 1.92</li><li>max: 398.6</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:-------------------------------------------------------------------|:----------------------------------------------------------------|:---------------------------------| | <code>Bir kadın makineye dikiş dikiyor.</code> | <code>Bir kadın biraz et ekiyor.</code> | <code>0.12</code> | | <code>Snowden 'gegeven vluchtelingendocument door Ecuador'.</code> | <code>Snowden staat op het punt om uit Moskou te vliegen</code> | <code>0.24000000953674316</code> | | <code>Czarny pies idzie mostem przez wodę</code> | <code>Czarny pies nie idzie mostem przez wodę</code> | <code>0.74000000954</code> | * Loss: [<code>AnglELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#angleloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "pairwise_angle_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `prediction_loss_only`: True - `per_device_train_batch_size`: 256 - `per_device_eval_batch_size`: 256 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | sts-dev_spearman_cosine | sts-test_spearman_cosine | |:------:|:----:|:-------------:|:-----------------------:|:------------------------:| | 0.5650 | 500 | 10.9426 | - | - | | 1.0 | 885 | - | 0.9202 | - | | 1.1299 | 1000 | 9.7184 | - | - | | 1.6949 | 1500 | 9.5348 | - | - | | 2.0 | 1770 | - | 0.9400 | - | | 2.2599 | 2000 | 9.4412 | - | - | | 2.8249 | 2500 | 9.3097 | - | - | | 3.0 | 2655 | - | 0.9489 | - | | 3.3898 | 3000 | 9.2357 | - | - | | 3.9548 | 3500 | 9.1594 | - | - | | 4.0 | 3540 | - | 0.9528 | - | | 4.5198 | 4000 | 9.0963 | - | - | | 5.0 | 4425 | - | 0.9553 | - | | 5.0847 | 4500 | 9.0382 | - | - | | 5.6497 | 5000 | 8.9837 | - | - | | 6.0 | 5310 | - | 0.9567 | - | | 6.2147 | 5500 | 8.9403 | - | - | | 6.7797 | 6000 | 8.8841 | - | - | | 7.0 | 6195 | - | 0.9581 | - | | 7.3446 | 6500 | 8.8513 | - | - | | 7.9096 | 7000 | 8.81 | - | - | | 8.0 | 7080 | - | 0.9582 | - | | 8.4746 | 7500 | 8.8069 | - | - | | 9.0 | 7965 | - | 0.9589 | - | | 9.0395 | 8000 | 8.7616 | - | - | | 9.6045 | 8500 | 8.7521 | - | - | | 10.0 | 8850 | - | 0.9593 | 0.6266 | ### Framework Versions - Python: 3.9.7 - Sentence Transformers: 3.0.0 - Transformers: 4.40.1 - PyTorch: 2.3.0+cu121 - Accelerate: 0.29.3 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### AnglELoss ```bibtex @misc{li2023angleoptimized, title={AnglE-optimized Text Embeddings}, author={Xianming Li and Jing Li}, year={2023}, eprint={2309.12871}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
KoboldAI/OPT-2.7B-Nerybus-Mix
KoboldAI
"2023-02-10T05:38:20Z"
2,097
11
transformers
[ "transformers", "pytorch", "opt", "text-generation", "en", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-02-09T10:45:38Z"
--- license: other language: - en inference: false --- # OPT-2.7B-Nerybus-Mix This is an experimental model containing a ***parameter-wise 50/50 blend (weighted average)*** of the weights of *NerysV2-2.7B* and *ErebusV1-2.7B* Preliminary testing produces pretty coherent outputs, it appears to retain the NSFWness of Erebus but with a Nerys-esque twist in terms of prose. # License The two models used for this blend, *NerysV2-2.7B* and *ErebusV1-2.7B* are made by **Mr. Seeker**. - https://huggingface.co/KoboldAI/OPT-2.7B-Erebus - https://huggingface.co/KoboldAI/OPT-2.7B-Nerys-v2 The base OPT-2.7B model is licensed under the OPT-175B license, Copyright (c) Meta Platforms, Inc. All Rights Reserved. # Evaluation Results As the original datasets used for the source models are not publically available, I use my own datasets for this evaluation, which may not provide accurate comparison. Eval parameters: 32000 characters extracted from the middle of the corpus, tested in blocks of 1024 tokens each, same dataset used for each test batch. ``` Literotica Dataset Eval (Randomly selected stories) {'eval_loss': 2.571258306503296, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'} {'eval_loss': 2.5491442680358887, 'name': 'KoboldAI_OPT-2.7B-Erebus'} {'eval_loss': 2.6158597469329834, 'name': 'KoboldAI_OPT-2.7B-Nerys'} {'eval_loss': 2.614469051361084, 'name': 'facebook_opt-2.7b'} {'eval_loss': 2.4960227012634277, 'name': '(Unreleased 2.7B ModronAI Model)'} ASSTR Dataset Eval (Randomly selected stories) {'eval_loss': 2.664412498474121, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'} {'eval_loss': 2.6451029777526855, 'name': 'KoboldAI_OPT-2.7B-Erebus'} {'eval_loss': 2.7259647846221924, 'name': 'KoboldAI_OPT-2.7B-Nerys'} {'eval_loss': 2.6675195693969727, 'name': 'facebook_opt-2.7b'} {'eval_loss': 2.962111473083496, 'name': '(Unreleased 2.7B ModronAI Model)'} Sexstories Dataset Eval (Random highly rated stories) {'eval_loss': 2.2352423667907715, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'} {'eval_loss': 2.194378137588501, 'name': 'KoboldAI_OPT-2.7B-Erebus'} {'eval_loss': 2.307469129562378, 'name': 'KoboldAI_OPT-2.7B-Nerys'} {'eval_loss': 2.293961763381958, 'name': 'facebook_opt-2.7b'} {'eval_loss': 2.0103421211242676, 'name': '(Unreleased 2.7B ModronAI Model)'} Harry Potter Dataset Eval (Canon books) {'eval_loss': 2.473742961883545, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'} {'eval_loss': 2.480600357055664, 'name': 'KoboldAI_OPT-2.7B-Erebus'} {'eval_loss': 2.506237506866455, 'name': 'KoboldAI_OPT-2.7B-Nerys'} {'eval_loss': 2.5074169635772705, 'name': 'facebook_opt-2.7b'} {'eval_loss': 2.273703098297119, 'name': '(Unreleased 2.7B ModronAI Model)'} Star Wars Dataset Eval (Rogue One Novel) {'eval_loss': 2.5031676292419434, 'name': 'Concedo_OPT-2.7B-Nerybus-Mix'} {'eval_loss': 2.5239150524139404, 'name': 'KoboldAI_OPT-2.7B-Erebus'} {'eval_loss': 2.526801586151123, 'name': 'KoboldAI_OPT-2.7B-Nerys'} {'eval_loss': 2.473283529281616, 'name': 'facebook_opt-2.7b'} {'eval_loss': 2.955465793609619, 'name': '(Unreleased 2.7B ModronAI Model)'} ``` It is recommend to use this model with the KoboldAI software. All feedback and comments can be directed to Concedo on the KoboldAI discord.
TheBloke/Nous-Capybara-34B-GGUF
TheBloke
"2023-11-18T12:38:30Z"
2,097
160
transformers
[ "transformers", "gguf", "yi", "sft", "Yi-34B-200K", "eng", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "base_model:NousResearch/Nous-Capybara-34B", "license:mit", "region:us" ]
null
"2023-11-13T18:35:48Z"
--- base_model: NousResearch/Nous-Capybara-34B datasets: - LDJnr/LessWrong-Amplify-Instruct - LDJnr/Pure-Dove - LDJnr/Verified-Camel inference: false language: - eng license: - mit model_creator: NousResearch model_name: Nous Capybara 34B model_type: yi prompt_template: 'USER: {prompt} ASSISTANT: ' quantized_by: TheBloke tags: - sft - Yi-34B-200K --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Nous Capybara 34B - GGUF - Model creator: [NousResearch](https://huggingface.co/NousResearch) - Original model: [Nous Capybara 34B](https://huggingface.co/NousResearch/Nous-Capybara-34B) <!-- description start --> ## Description This repo contains GGUF format model files for [NousResearch's Nous Capybara 34B](https://huggingface.co/NousResearch/Nous-Capybara-34B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Capybara-34B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Capybara-34B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF) * [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Capybara-34B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant ``` USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [nous-capybara-34b.Q2_K.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q2_K.gguf) | Q2_K | 2 | 14.56 GB| 17.06 GB | smallest, significant quality loss - not recommended for most purposes | | [nous-capybara-34b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.96 GB| 17.46 GB | very small, high quality loss | | [nous-capybara-34b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q3_K_M.gguf) | Q3_K_M | 3 | 16.64 GB| 19.14 GB | very small, high quality loss | | [nous-capybara-34b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q3_K_L.gguf) | Q3_K_L | 3 | 18.14 GB| 20.64 GB | small, substantial quality loss | | [nous-capybara-34b.Q4_0.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q4_0.gguf) | Q4_0 | 4 | 19.47 GB| 21.97 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [nous-capybara-34b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q4_K_S.gguf) | Q4_K_S | 4 | 19.54 GB| 22.04 GB | small, greater quality loss | | [nous-capybara-34b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q4_K_M.gguf) | Q4_K_M | 4 | 20.66 GB| 23.16 GB | medium, balanced quality - recommended | | [nous-capybara-34b.Q5_0.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q5_0.gguf) | Q5_0 | 5 | 23.71 GB| 26.21 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [nous-capybara-34b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q5_K_S.gguf) | Q5_K_S | 5 | 23.71 GB| 26.21 GB | large, low quality loss - recommended | | [nous-capybara-34b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q5_K_M.gguf) | Q5_K_M | 5 | 24.32 GB| 26.82 GB | large, very low quality loss - recommended | | [nous-capybara-34b.Q6_K.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q6_K.gguf) | Q6_K | 6 | 28.21 GB| 30.71 GB | very large, extremely low quality loss | | [nous-capybara-34b.Q8_0.gguf](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF/blob/main/nous-capybara-34b.Q8_0.gguf) | Q8_0 | 8 | 36.54 GB| 39.04 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Nous-Capybara-34B-GGUF and below it, a specific filename to download, such as: nous-capybara-34b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Nous-Capybara-34B-GGUF nous-capybara-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Nous-Capybara-34B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Nous-Capybara-34B-GGUF nous-capybara-34b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m nous-capybara-34b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Nous-Capybara-34B-GGUF", model_file="nous-capybara-34b.Q4_K_M.gguf", model_type="yi", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: NousResearch's Nous Capybara 34B ## **Nous-Capybara-34B V1.9** **This is trained on the Yi-34B model with 200K context length, for 3 epochs on the Capybara dataset!** **First 34B Nous model and first 200K context length Nous model!** The Capybara series is the first Nous collection of models made by fine-tuning mostly on data created by Nous in-house. We leverage our novel data synthesis technique called Amplify-instruct (Paper coming soon), the seed distribution and synthesis method are comprised of a synergistic combination of top performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct(WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others, all into one lean holistically formed methodology for the dataset and model. The seed instructions used for the start of synthesized conversations are largely based on highly regarded datasets like Airoboros, Know logic, EverythingLM, GPTeacher and even entirely new seed instructions derived from posts on the website LessWrong, as well as being supplemented with certain in-house multi-turn datasets like Dove(A successor to Puffin). While performing great in it's current state, the current dataset used for fine-tuning is entirely contained within 20K training examples, this is 10 times smaller than many similar performing current models, this is signficant when it comes to scaling implications for our next generation of models once we scale our novel syntheiss methods to significantly more examples. ## Process of creation and special thank yous! This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D.(LDJ) (Paper coming soon), as well as significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations. Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project. ## Thank you to those of you that have indirectly contributed! While most of the tokens within Capybara are newly synthsized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds that are used to generate the multi-turn data as part of the Amplify-Instruct synthesis. The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project. Datasets in Blue are in-house curations that previously existed prior to Capybara. ![Capybara](https://i.imgur.com/yB58OoD.jpeg) ## Prompt Format The reccomended model usage is: Prefix: ``USER:`` Suffix: ``ASSISTANT:`` Stop token: ``</s>`` ## Mutli-Modality! - We currently have a Multi-modal model based on Capybara V1.9! https://huggingface.co/NousResearch/Obsidian-3B-V0.5 it is currently only available as a 3B sized model but larger versions coming! ## Notable Features: - Uses Yi-34B model as the base which is trained for 200K context length! - Over 60% of the dataset is comprised of multi-turn conversations.(Most models are still only trained for single-turn conversations and no back and forths!) - Over 1,000 tokens average per conversation example! (Most models are trained on conversation data that is less than 300 tokens per example.) - Able to effectively do complex summaries of advanced topics and studies. (trained on hundreds of advanced difficult summary tasks developed in-house) - Ability to recall information upto late 2022 without internet. - Includes a portion of conversational data synthesized from less wrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts. ## Example Outputs from Capybara V1.9 7B version! (examples from 34B coming soon): ![Capybara](https://img001.prntscr.com/file/img001/T9yYxR1xQSaK_UGdy3t2Cw.png) ![Capybara](https://img001.prntscr.com/file/img001/DQXqmKbsQQOIcgny1eoGNA.png) ![Capybara](https://img001.prntscr.com/file/img001/85X3L9ZxTsOKo3fUQ7GRVA.png) ## Benchmarks! (Coming soon!) ## Future model sizes Capybara V1.9 now currently has a 3B, 7B and 34B size, and we plan to eventually have a 13B and 70B version in the future, as well as a potential 1B version based on phi-1.5 or Tiny Llama. ## How you can help! In the near future we plan on leveraging the help of domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations. If you have at-least a bachelors in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! ## Dataset contamination. We have checked the capybara dataset for contamination for several of the most popular datasets and can confirm that there is no contaminaton found. We leveraged minhash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks, we found no exact matches, nor did we find any matches down to the 97% similarity level. The following are benchmarks we checked for contamination against our dataset: - HumanEval - AGIEval - TruthfulQA - MMLU - GPT4All <!-- original-model-card end -->
pszemraj/gibberish_detector_onnx-quant-avx2
pszemraj
"2024-02-25T09:41:37Z"
2,097
0
transformers
[ "transformers", "onnx", "distilbert", "text-classification", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-02-21T23:52:26Z"
--- license: apache-2.0 --- # gibberish_detector_onnx: quantized avx2 ```py # pip install 'optimum[onnxruntime,exporters]' from optimum.pipelines import pipeline classifier = pipeline( "text-classification", model="pszemraj/gibberish_detector_onnx-quant-avx2", accelerator="ort", ) classifier("ayy waddup") # [{'label': 'noise', 'score': 0.38642483949661255}] ``` ## differences between quant params the one with `-pc` suffix means `per_channel=True` ```py >>> src = 'quant_onnx_gibberish_detector' # avx2 >>> classifier = pipeline('text-classification', model=src, accelerator='ort') >>> classifier('ayy waddup') [{'label': 'noise', 'score': 0.34829846024513245}] >>> src = 'quant_onnx_gibberish_detector-pc' # avx2 per channel (this model) >>> classifier = pipeline('text-classification', model=src, accelerator='ort') >>> classifier('ayy waddup') [{'label': 'noise', 'score': 0.38642483949661255}] >>> src = 'onnx_gibberish_detector' # unquantized onnx >>> classifier = pipeline('text-classification', model=src, accelerator='ort') >>> classifier('ayy waddup') [{'label': 'noise', 'score': 0.6847617626190186}] ```
digiplay/KawaiiRealisticAnimeMix_A0.3
digiplay
"2023-07-29T19:14:25Z"
2,095
3
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-29T18:46:32Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/104100?modelVersionId=128610 Sample image and prompt : 1girl, anime key visual, outdoor,vibrant color,very close-up,tiny smile ![b916088a-20fc-4c50-a7f0-e35defe59834.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/EPWfFfFVsFFlBIvIkVszU.jpeg)
Yntec/Looking-Glass
Yntec
"2024-01-17T06:12:43Z"
2,095
5
diffusers
[ "diffusers", "safetensors", "Art", "RPG", "General Purpose", "Animation", "Fantasy", "Zovya", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-16T04:16:22Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - Art - RPG - General Purpose - Animation - Fantasy - Zovya - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Looking-Glass A mix of A-Zovya RPG Artist Tools V2 Art and Wonderland to get the best of these models together! Comparison: ![Looking Glass Free Text To Image Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/GHzNl-H40ySDyoebNcl07.png) (Click for larger) Samples and prompts: ![Looking Glass free ai generater samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/Lu8_Q6b7UkwVLyxyCziNJ.png) (Click for larger) Top left: Pretty CUTE girls sitting, CHIBI EYES, gorgeous detailed hair, playing chess, Magazine ad, iconic, 1949, sharp focus. acrylic art ​on canvas by ROSSDRAWS and Clay Mann and Hayao Miyazaki and artgerm Top right: cartoon pretty cute little girls, 16k ultra realistic, color high quality, guitar, tending on artstation, front lighting, focused, extreme details, unreal engine 5, cinematic, masterpiece, art by Peter Mohrbacher, Hajime Sorayama, Moebius Bottom left: Father with daughter holding Coca Cola. Santa Claus sitting with a pretty cute little girl, Art Christmas Theme by Haddon_Sundblom and Gil_Elvgren Bottom right: an illustration of a baby boar with headphones holding a fire umbrella in the rain Original pages: https://civitai.com/models/8124?modelVersionId=42992 (A-Zovya RPG Artist Tools V2 Art) https://huggingface.co/Yntec/Wonderland # Recipe: - SuperMerger Weight sum Use MBW 1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1 Model A: a-ZovyaRPGV2Art Model B: Wonderland Output Model: Looking-Glass
MRAIRR/7emotion_cls_in_context
MRAIRR
"2024-05-02T13:42:17Z"
2,095
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:klue/roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-05-02T07:01:15Z"
--- base_model: klue/roberta-base tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: emotion_classification_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_classification_model This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2954 - Accuracy: 0.9079 - F1: 0.9074 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4771 | 1.0 | 1829 | 0.3789 | 0.8669 | 0.8650 | | 0.2378 | 2.0 | 3658 | 0.2954 | 0.9079 | 0.9074 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
Xwin-LM/Xwin-LM-13B-V0.1
Xwin-LM
"2023-09-21T05:42:20Z"
2,094
62
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-15T14:03:35Z"
--- license: llama2 --- <h3 align="center"> Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment </h3> <p align="center"> <a href="https://github.com/Xwin-LM/Xwin-LM"> <img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"> </a> <a href="https://huggingface.co/Xwin-LM"> <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"> </a> </p> **Step up your LLM alignment with Xwin-LM!** Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), reject sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built-upon on the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it's **the first to surpass GPT-4** on this benchmark. The project will be continuously updated. ## News - 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate against Davinci-003 of **95.57%** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking as **TOP-1** on AlpacaEval. **It was the FIRST model surpassing GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note its winrate v.s. GPT-4 is **60.61**. - 🔍 [Sep, 2023] RLHF plays crucial role in the strong performance of Xwin-LM-V0.1 release! - 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 13B models. - 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking as **top-1** among all 7B models. ## Model Card | Model | Checkpoint | Report | License | |------------|------------|-------------|------------------| |Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| |Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License| ## Benchmarks ### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), where evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well. Our Xwin-LM model family establish a new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** to Text-Davinci-003 and **60.61%** to GPT-4. | **Model** | **AlpacaEval (winrate %)** | **AlpacaEval (winrate %)** |**AlpacaEval (winrate %)** | |----------------------------------|------------|----------|-------------| | | **v.s. Text-Davinci-003** | **v.s. ChatGPT** | **v.s. GPT4**| | **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** | | GPT-4 | 95.28 | 84.66 | 50.00 | | WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 | | Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 | | **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** | | ChatGPT | 89.37 | 50.00 | 16.60 | | WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 | | **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** | | Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 | ### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) | Prompts | GTP4 response | Xwin-LM response | |----------------|------------|--------| |I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.| | Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." | ### Xwin-LM performance on NLP foundation tasks. The following table provides a comparison of Xwin-LMs with other LLMs on NLP foundation tasks in [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average | |------------------|-------------|-------------|-------------------|-------------------|------------| | Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 | |Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 | |Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 | | WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 | | WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 | | WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2| | Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 | | Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 | | Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 | | **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 | | **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 | | **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** | ## Inference ### Conversation templates To obtain desired results, please strictly follow the conversation templates when utilizing our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and is equipped to support **multi-turn** conversations. ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>...... ``` ### HuggingFace Example ```python from transformers import AutoTokenizer, AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1") ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) inputs = tokenizer(prompt, return_tensors="pt") samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7) output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True) print(output) # Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you. ``` ### vllm Example Because Xwin-LM is based on Llama2, it also offers support for rapid inference using [vllm](https://github.com/vllm-project/vllm). Please refer to [vllm](https://github.com/vllm-project/vllm) for detailed installation instructions. ```python from vllm import LLM, SamplingParams ( prompt := "A chat between a curious user and an artificial intelligence assistant. " "The assistant gives helpful, detailed, and polite answers to the user's questions. " "USER: Hello, can you help me? " "ASSISTANT:" ) sampling_params = SamplingParams(temperature=0.7, max_tokens=4096) llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1") outputs = llm.generate([prompt,], sampling_params) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(generated_text) ``` ## TODO - [ ] Release the source code - [ ] Release more capabilities, such as math, reasoning, and etc. ## Citation Please consider citing our work if you use the data or code in this repo. ``` @software{xwin-lm, title = {Xwin-LM}, author = {Xwin-LM Team}, url = {https://github.com/Xwin-LM/Xwin-LM}, version = {pre-release}, year = {2023}, month = {9}, } ``` ## Acknowledgements Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vllm](https://github.com/vllm-project/vllm).
OpenBuddy/openbuddy-mistral2-7b-v20.3-32k
OpenBuddy
"2024-04-08T07:05:06Z"
2,094
9
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-27T13:35:38Z"
--- language: - zh - en - fr - de - ja - ko - it - ru pipeline_tag: text-generation inference: false library_name: transformers license: apache-2.0 --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/mistralai/Mistral-7B-v0.2 License: Apache 2.0 ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
CoprolaliacPress/Autolycus-Q6_K-GGUF
CoprolaliacPress
"2024-07-01T08:05:13Z"
2,094
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:CoprolaliacPress/Autolycus", "endpoints_compatible", "region:us" ]
null
"2024-07-01T08:04:47Z"
--- base_model: CoprolaliacPress/Autolycus library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # CoprolaliacPress/Autolycus-Q6_K-GGUF This model was converted to GGUF format from [`CoprolaliacPress/Autolycus`](https://huggingface.co/CoprolaliacPress/Autolycus) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/CoprolaliacPress/Autolycus) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CoprolaliacPress/Autolycus-Q6_K-GGUF --hf-file autolycus-q6_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CoprolaliacPress/Autolycus-Q6_K-GGUF --hf-file autolycus-q6_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CoprolaliacPress/Autolycus-Q6_K-GGUF --hf-file autolycus-q6_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CoprolaliacPress/Autolycus-Q6_K-GGUF --hf-file autolycus-q6_k.gguf -c 2048 ```
ChaoticNeutrals/mlm-filter-llava-llama-3-8b-gpt4v-mmproj-outdated
ChaoticNeutrals
"2024-04-28T20:10:25Z"
2,093
4
null
[ "gguf", "region:us" ]
null
"2024-04-24T03:43:23Z"
This file does work, but since it is based on the Llava 1.5 data it is often not as accurate as one would hope. This is the first GGUF mmproj file created for Llama 3 models. Chaotic Neutrals is dedicated to bringing the best multimodal experience to our users and will keep trying to provide the best methods available. I recommend using KoboldCPP in the same way you would have applied the Mistral mmproj in the "Model Files" tab. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
ILKT/2024-06-15_10-09-42
ILKT
"2024-06-15T17:52:03Z"
2,093
0
transformers
[ "transformers", "safetensors", "ILKT", "feature-extraction", "mteb", "custom_code", "model-index", "region:us" ]
feature-extraction
"2024-06-15T10:34:05Z"
--- tags: - mteb model-index: - name: 2024-06-15_10-09-42 results: - task: type: Classification dataset: name: MTEB MassiveIntentClassification type: mteb/amazon_massive_intent config: pl split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 0.09270342972427706 - task: type: Classification dataset: name: MTEB MassiveIntentClassification type: mteb/amazon_massive_intent config: pl split: validation revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 0.09493359567142154 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification type: mteb/amazon_massive_scenario config: pl split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 0.13839946200403497 - task: type: Classification dataset: name: MTEB MassiveScenarioClassification type: mteb/amazon_massive_scenario config: pl split: validation revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 0.13300541072306934 - task: type: Classification dataset: name: MTEB CBD type: PL-MTEB/cbd config: default split: test revision: 36ddb419bcffe6a5374c3891957912892916f28d metrics: - type: accuracy value: 0.49749999999999994 - task: type: Classification dataset: name: MTEB PolEmo2.0-IN type: PL-MTEB/polemo2_in config: default split: test revision: d90724373c70959f17d2331ad51fb60c71176b03 metrics: - type: accuracy value: 0.3736842105263158 - task: type: Classification dataset: name: MTEB PolEmo2.0-OUT type: PL-MTEB/polemo2_out config: default split: test revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4 metrics: - type: accuracy value: 0.2955465587044534 - task: type: Classification dataset: name: MTEB AllegroReviews type: PL-MTEB/allegro-reviews config: default split: test revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6 metrics: - type: accuracy value: 0.22713717693836974 - task: type: Classification dataset: name: MTEB PAC type: laugustyniak/abusive-clauses-pl config: default split: test revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543 metrics: - type: accuracy value: 0.5633651896901245 - task: type: Clustering dataset: name: MTEB EightTagsClustering type: PL-MTEB/8tags-clustering config: default split: test revision: 78b962b130c6690659c65abf67bf1c2f030606b6 metrics: - type: v_measure value: 0.011462589437473216 - task: type: PairClassification dataset: name: MTEB SICK-E-PL type: PL-MTEB/sicke-pl-pairclassification config: default split: test revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9 metrics: - type: ap value: 0.5523380294645331 - task: type: PairClassification dataset: name: MTEB CDSC-E type: PL-MTEB/cdsce-pairclassification config: default split: test revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d metrics: - type: ap value: 0.43095490774008566 - task: type: PairClassification dataset: name: MTEB PSC type: PL-MTEB/psc-pairclassification config: default split: test revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669 metrics: - type: ap value: 0.4035899331798161 - task: type: STS dataset: name: MTEB STS22 type: mteb/sts22-crosslingual-sts config: pl split: test revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3 metrics: - type: cosine_spearman value: -0.1003112640780357 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS type: mteb/stsb_multi_mt config: pl split: dev revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_spearman value: 0.3621828967981027 - task: type: STS dataset: name: MTEB STSBenchmarkMultilingualSTS type: mteb/stsb_multi_mt config: pl split: test revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c metrics: - type: cosine_spearman value: 0.36999667618778675 - task: type: STS dataset: name: MTEB SICK-R-PL type: PL-MTEB/sickr-pl-sts config: default split: test revision: fd5c2441b7eeff8676768036142af4cfa42c1339 metrics: - type: cosine_spearman value: 0.43158198285416055 - task: type: STS dataset: name: MTEB CDSC-R type: PL-MTEB/cdscr-sts config: default split: test revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd metrics: - type: cosine_spearman value: 0.4616750744735728 - task: type: Clustering dataset: name: MTEB PlscClusteringS2S type: PL-MTEB/plsc-clustering-s2s config: default split: test revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a metrics: - type: v_measure value: 0.17960532151287045 - task: type: Clustering dataset: name: MTEB PlscClusteringP2P type: PL-MTEB/plsc-clustering-p2p config: default split: test revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b metrics: - type: v_measure value: 0.1815461918111827 ---
dccuchile/distilbert-base-spanish-uncased
dccuchile
"2022-04-28T19:56:51Z"
2,092
9
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "spanish", "OpenCENIA", "es", "dataset:large_spanish_corpus", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
--- language: - es tags: - distilbert - spanish - OpenCENIA datasets: - large_spanish_corpus ---
RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf
RichardErkhov
"2024-06-15T10:26:13Z"
2,092
1
null
[ "gguf", "region:us" ]
null
"2024-06-15T08:22:02Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) kukulemon-spiked-9B - GGUF - Model creator: https://huggingface.co/grimjim/ - Original model: https://huggingface.co/grimjim/kukulemon-spiked-9B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [kukulemon-spiked-9B.Q2_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q2_K.gguf) | Q2_K | 3.13GB | | [kukulemon-spiked-9B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.IQ3_XS.gguf) | IQ3_XS | 3.48GB | | [kukulemon-spiked-9B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.IQ3_S.gguf) | IQ3_S | 3.67GB | | [kukulemon-spiked-9B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q3_K_S.gguf) | Q3_K_S | 3.65GB | | [kukulemon-spiked-9B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.IQ3_M.gguf) | IQ3_M | 3.79GB | | [kukulemon-spiked-9B.Q3_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q3_K.gguf) | Q3_K | 4.05GB | | [kukulemon-spiked-9B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q3_K_M.gguf) | Q3_K_M | 4.05GB | | [kukulemon-spiked-9B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q3_K_L.gguf) | Q3_K_L | 4.41GB | | [kukulemon-spiked-9B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.IQ4_XS.gguf) | IQ4_XS | 4.55GB | | [kukulemon-spiked-9B.Q4_0.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q4_0.gguf) | Q4_0 | 4.74GB | | [kukulemon-spiked-9B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.IQ4_NL.gguf) | IQ4_NL | 4.79GB | | [kukulemon-spiked-9B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q4_K_S.gguf) | Q4_K_S | 4.78GB | | [kukulemon-spiked-9B.Q4_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q4_K.gguf) | Q4_K | 5.04GB | | [kukulemon-spiked-9B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q4_K_M.gguf) | Q4_K_M | 5.04GB | | [kukulemon-spiked-9B.Q4_1.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q4_1.gguf) | Q4_1 | 5.26GB | | [kukulemon-spiked-9B.Q5_0.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q5_0.gguf) | Q5_0 | 5.77GB | | [kukulemon-spiked-9B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q5_K_S.gguf) | Q5_K_S | 5.77GB | | [kukulemon-spiked-9B.Q5_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q5_K.gguf) | Q5_K | 5.93GB | | [kukulemon-spiked-9B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q5_K_M.gguf) | Q5_K_M | 5.93GB | | [kukulemon-spiked-9B.Q5_1.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q5_1.gguf) | Q5_1 | 6.29GB | | [kukulemon-spiked-9B.Q6_K.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q6_K.gguf) | Q6_K | 6.87GB | | [kukulemon-spiked-9B.Q8_0.gguf](https://huggingface.co/RichardErkhov/grimjim_-_kukulemon-spiked-9B-gguf/blob/main/kukulemon-spiked-9B.Q8_0.gguf) | Q8_0 | 8.89GB | Original model description: --- base_model: - grimjim/kukulemon-7B library_name: transformers tags: - mergekit - merge license: cc-by-nc-4.0 pipeline_tag: text-generation --- # kululemon-spiked-9B This is a frankenmerge of a pre-trained language model created using [mergekit](https://github.com/cg123/mergekit). As an experiment, this appears to be a partial success. Lightly tested with temperature 1-1.2 and minP 0.01 with ChatML prompts; the model supports Alpaca prompts and has 8K context length, a result of its Mistral v0.1 provenance. The model's output has been coherent and stable during aforementioned testing. The merge formula for this frankenmerge is below. It is conjectured that the shorter first section is not key to variation, the middle segment is key to balancing reasoning and variation, and that the lengthy final section is required for convergence and eventual stability. The internal instability is probably better suited for narrative involving unstable and/or unhinged characters and situations. Quants available: - [GGUF](https://huggingface.co/grimjim/kukulemon-spiked-9B-GGUF) - [8.0bpw h8 exl2](https://huggingface.co/grimjim/kukulemon-spiked-9B-8.0bpw_h8_exl2) ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: grimjim/kukulemon-7B layer_range: [0, 12] - sources: - model: grimjim/kukulemon-7B layer_range: [8, 16] - sources: - model: grimjim/kukulemon-7B layer_range: [12, 32] merge_method: passthrough dtype: float16 ```