| Column | Type | Range |
| --- | --- | --- |
| modelId | stringlengths | 5–122 |
| author | stringlengths | 2–42 |
| last_modified | unknown | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | stringclasses | 245 values |
| tags | sequencelengths | 1–4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | unknown | |
| card | stringlengths | 1–901k |
QuantFactory/Hathor-L3-8B-v.02-GGUF
QuantFactory
"2024-06-12T01:02:30Z"
1,491
0
null
[ "gguf", "text-generation", "en", "base_model:Nitral-AI/Hathor-L3-8B-v.02", "license:other", "region:us" ]
text-generation
"2024-06-11T05:21:01Z"
--- license: other language: - en base_model: Nitral-AI/Hathor-L3-8B-v.02 pipeline_tag: text-generation --- # QuantFactory/Hathor-L3-8B-v.02-GGUF This is a quantized version of [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02) created using llama.cpp # Model Description ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/kJF-ER-uPDH6O2m6qB9wg.jpeg) # "Hathor-v0.2 is a model based on the LLaMA 3 architecture: Designed to seamlessly integrate the qualities of creativity, intelligence, and robust performance. Making it an ideal tool for a wide range of applications; such as creative writing, educational support and human/computer interaction." # Recommended ST Presets: [Hathor Presets](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.01/tree/main/Hathor%20Presets) --- # Notes: Hathor is trained on 3 epochs of private data, synthetic Opus instructions, a mix of light/classical novel data, and roleplaying chat pairs over Llama 3 8B Instruct. (expanded) --- - If you want to use vision functionality: * You must use the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp). - To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16) * You can load the **mmproj** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png) ---
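A minimal, unofficial loading sketch for these GGUF files with `llama-cpp-python` (not part of the original card). The quant filename pattern, context size, and prompt are illustrative assumptions; check the repository's file list for the exact quant names.

```python
# Sketch only: assumes llama-cpp-python (with huggingface_hub) is installed and that
# the repo contains a Q4_K_M quant; adjust the filename pattern to a file that exists.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Hathor-L3-8B-v.02-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern resolved against the repo's file list
    n_ctx=8192,               # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene-setting paragraph."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```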
procit001/mms-tts-nl-v6
procit001
"2024-06-18T23:51:53Z"
1,491
0
transformers
[ "transformers", "safetensors", "vits", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
"2024-06-18T23:49:34Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🀗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
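The "How to Get Started" section above is still a placeholder. Since the repository is tagged `vits` / `text-to-audio`, a minimal sketch using the standard `transformers` VITS API might look like the following; this is an assumption about the checkpoint, not documentation from the author.

```python
# Sketch only: assumes the checkpoint loads with the standard transformers VITS classes.
import torch
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("procit001/mms-tts-nl-v6")
tokenizer = AutoTokenizer.from_pretrained("procit001/mms-tts-nl-v6")

inputs = tokenizer("Hallo, dit is een voorbeeldzin.", return_tensors="pt")  # Dutch example sentence
with torch.no_grad():
    waveform = model(**inputs).waveform  # (batch, num_samples) at model.config.sampling_rate

print(waveform.shape, model.config.sampling_rate)
```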
flaubert/flaubert_large_cased
flaubert
"2024-05-14T12:38:43Z"
1,490
2
transformers
[ "transformers", "pytorch", "safetensors", "flaubert", "fill-mask", "bert", "language-model", "flue", "french", "bert-large", "flaubert-large", "cased", "fr", "dataset:flaubert", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: fr license: mit datasets: - flaubert metrics: - flue tags: - bert - language-model - flaubert - flue - french - bert-large - flaubert-large - cased --- # FlauBERT: Unsupervised Language Model Pre-training for French **FlauBERT** is a French BERT trained on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) [Jean Zay](http://www.idris.fr/eng/jean-zay/) supercomputer. Along with FlauBERT comes [**FLUE**](https://github.com/getalp/Flaubert/tree/master/flue): an evaluation setup for French NLP systems similar to the popular GLUE benchmark. The goal is to enable further reproducible experiments in the future and to share models and progress on the French language. For more details please refer to the [official website](https://github.com/getalp/Flaubert). ## FlauBERT models | Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters | | :------: | :---: | :---: | :---: | :---: | | `flaubert-small-cased` | 6 | 8 | 512 | 54 M | | `flaubert-base-uncased` | 12 | 12 | 768 | 137 M | | `flaubert-base-cased` | 12 | 12 | 768 | 138 M | | `flaubert-large-cased` | 24 | 16 | 1024 | 373 M | **Note:** `flaubert-small-cased` is partially trained, so performance is not guaranteed. Consider using it for debugging purposes only. ## Using FlauBERT with Hugging Face's Transformers ```python import torch from transformers import FlaubertModel, FlaubertTokenizer # Choose among ['flaubert/flaubert_small_cased', 'flaubert/flaubert_base_uncased', # 'flaubert/flaubert_base_cased', 'flaubert/flaubert_large_cased'] modelname = 'flaubert/flaubert_base_cased' # Load pretrained model and tokenizer flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True) flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False) # do_lowercase=False if using cased models, True if using uncased ones sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)]) last_layer = flaubert(token_ids)[0] print(last_layer.shape) # torch.Size([1, 8, 768]) -> (batch size x number of tokens x embedding dimension) # The BERT [CLS] token correspond to the first hidden state of the last layer cls_embedding = last_layer[:, 0, :] ``` **Notes:** if your `transformers` version is <=2.10.0, `modelname` should take one of the following values: ``` ['flaubert-small-cased', 'flaubert-base-uncased', 'flaubert-base-cased', 'flaubert-large-cased'] ``` ## References If you use FlauBERT or the FLUE Benchmark for your scientific publication, or if you find the resources in this repository useful, please cite one of the following papers: [LREC paper](http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.302.pdf) ``` @InProceedings{le2020flaubert, author = {Le, Hang and Vial, Lo\"{i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb\'{e}, Beno\^{i}t and Besacier, Laurent and Schwab, Didier}, title = {FlauBERT: Unsupervised Language Model Pre-training for French}, booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference}, month = {May}, year = {2020}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2479--2490}, url = {https://www.aclweb.org/anthology/2020.lrec-1.302} } ``` [TALN paper](https://hal.archives-ouvertes.fr/hal-02784776/) ``` @inproceedings{le2020flaubert, title = {FlauBERT: des mod{\`e}les de langue contextualis{\'e}s pr{\'e}-entra{\^\i}n{\'e}s pour le fran{\c{c}}ais}, author = {Le, Hang and Vial, Lo{\"\i}c and Frej, Jibril and Segonne, Vincent and Coavoux, Maximin and Lecouteux, Benjamin and Allauzen, Alexandre and Crabb{\'e}, Beno{\^\i}t and Besacier, Laurent and Schwab, Didier}, booktitle = {Actes de la 6e conf{\'e}rence conjointe Journ{\'e}es d'{\'E}tudes sur la Parole (JEP, 31e {\'e}dition), Traitement Automatique des Langues Naturelles (TALN, 27e {\'e}dition), Rencontre des {\'E}tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R{\'E}CITAL, 22e {\'e}dition). Volume 2: Traitement Automatique des Langues Naturelles}, pages = {268--278}, year = {2020}, organization = {ATALA} } ```
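As a complement to the feature-extraction snippet above, the checkpoint is tagged `fill-mask`, so masked-token prediction can also be run through the pipeline API. A small sketch follows; the example sentence is illustrative, and the mask token is read from the tokenizer rather than hardcoded.

```python
# Sketch: fill-mask inference; the mask token differs between checkpoints, so it is
# taken from the loaded tokenizer instead of being written out by hand.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="flaubert/flaubert_large_cased")
sentence = f"Le chat mange une {unmasker.tokenizer.mask_token}."
for prediction in unmasker(sentence)[:3]:
    print(prediction["token_str"], round(prediction["score"], 3))
```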
readerbench/RoBERT-base
readerbench
"2023-06-21T10:08:18Z"
1,490
8
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
Model card for RoBERT-base --- language: - ro --- # RoBERT-base ## Pretrained BERT model for Romanian Pretrained model on Romanian language using a masked language modeling (MLM) and next sentence prediction (NSP) objective. It was introduced in this [paper](https://www.aclweb.org/anthology/2020.coling-main.581/). Three BERT models were released: RoBERT-small, **RoBERT-base** and RoBERT-large, all versions uncased. | Model | Weights | L | H | A | MLM accuracy | NSP accuracy | |----------------|:---------:|:------:|:------:|:------:|:--------------:|:--------------:| | RoBERT-small | 19M | 12 | 256 | 8 | 0.5363 | 0.9687 | | *RoBERT-base* | *114M* | *12* | *768* | *12* | *0.6511* | *0.9802* | | RoBERT-large | 341M | 24 | 1024 | 24 | 0.6929 | 0.9843 | All models are available: * [RoBERT-small](https://huggingface.co/readerbench/RoBERT-small) * [RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) * [RoBERT-large](https://huggingface.co/readerbench/RoBERT-large) #### How to use ```python # tensorflow from transformers import AutoModel, AutoTokenizer, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base") model = TFAutoModel.from_pretrained("readerbench/RoBERT-base") inputs = tokenizer("exemplu de propoziție", return_tensors="tf") outputs = model(inputs) # pytorch from transformers import AutoModel, AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("readerbench/RoBERT-base") model = AutoModel.from_pretrained("readerbench/RoBERT-base") inputs = tokenizer("exemplu de propoziție", return_tensors="pt") outputs = model(**inputs) ``` ## Training data The model is trained on the following compilation of corpora. Note that we present the statistics after the cleaning process. | Corpus | Words | Sentences | Size (GB)| |-----------|:---------:|:---------:|:--------:| | Oscar | 1.78B | 87M | 10.8 | | RoTex | 240M | 14M | 1.5 | | RoWiki | 50M | 2M | 0.3 | | **Total** | **2.07B** | **103M** | **12.6** | ## Downstream performance ### Sentiment analysis We report Macro-averaged F1 score (in %) | Model | Dev | Test | |------------------|:--------:|:--------:| | multilingual-BERT| 68.96 | 69.57 | | XLM-R-base | 71.26 | 71.71 | | BERT-base-ro | 70.49 | 71.02 | | RoBERT-small | 66.32 | 66.37 | | *RoBERT-base* | *70.89* | *71.61* | | RoBERT-large | **72.48**| **72.11**| ### Moldavian vs. Romanian Dialect and Cross-dialect Topic identification We report results on [VarDial 2019](https://sites.google.com/view/vardial2019/campaign) Moldavian vs. Romanian Cross-dialect Topic identification Challenge, as Macro-averaged F1 score (in %). | Model | Dialect Classification | MD to RO | RO to MD | |-------------------|:----------------------:|:--------:|:--------:| | 2-CNN + SVM | 93.40 | 65.09 | 75.21 | | Char+Word SVM | 96.20 | 69.08 | 81.93 | | BiGRU | 93.30 | **70.10**| 80.30 | | multilingual-BERT | 95.34 | 68.76 | 78.24 | | XLM-R-base | 96.28 | 69.93 | 82.28 | | BERT-base-ro | 96.20 | 69.93 | 78.79 | | RoBERT-small | 95.67 | 69.01 | 80.40 | | *RoBERT-base* | *97.39* | *68.30* | *81.09* | | RoBERT-large | **97.78** | 69.91 | **83.65**| ### Diacritics Restoration Challenge can be found [here](https://diacritics-challenge.speed.pub.ro/). We report results on the official test set, as accuracies in %. 
| Model | word level | char level | |-----------------------------|:----------:|:----------:| | BiLSTM | 99.42 | - | | CharCNN | 98.40 | 99.65 | | CharCNN + multilingual-BERT | 99.72 | 99.94 | | CharCNN + XLM-R-base | 99.76 | **99.95** | | CharCNN + BERT-base-ro | **99.79** | **99.95** | | CharCNN + RoBERT-small | 99.73 | 99.94 | | *CharCNN + RoBERT-base* | *99.78* | **99.95** | | CharCNN + RoBERT-large | 99.76 | **99.95** | ### BibTeX entry and citation info ```bibtex @inproceedings{masala2020robert, title={RoBERT--A Romanian BERT Model}, author={Masala, Mihai and Ruseti, Stefan and Dascalu, Mihai}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6626--6637}, year={2020} } ```
digiplay/NextGenMix_R2.8VAE
digiplay
"2023-08-20T18:18:18Z"
1,490
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-19T07:59:13Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/77751?modelVersionId=112876 Sample image I made through Hugging Face's API: ![1000068306.jpg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/rrdCp_Ej8edxN99kZV15n.jpeg) Original author's demo images: ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/97d06de2-4028-4869-ba66-c664496b0f37/width=960/00040-%20.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/348a6f9d-45e8-4726-b367-cec181bca440/width=960/00051-%20.jpeg)
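The card itself gives no usage snippet. Since the repo is tagged `diffusers:StableDiffusionPipeline`, a minimal text-to-image sketch would look like this; the prompt and dtype are illustrative assumptions.

```python
# Sketch only: assumes the repo loads with the standard StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/NextGenMix_R2.8VAE", torch_dtype=torch.float16
).to("cuda")

image = pipe("a scenic mountain lake at sunrise, highly detailed").images[0]
image.save("nextgenmix_sample.png")
```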
SciPhi/SciPhi-Self-RAG-Mistral-7B-32k
SciPhi
"2023-11-01T12:25:44Z"
1,490
84
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-27T13:18:51Z"
--- license: mit --- # SciPhi-Self-RAG-Mistral-7B-32k Model Card SciPhi-Self-RAG-Mistral-7B-32k is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1. This model underwent the fine-tuning process described in the [SciPhi-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k) model card. It then underwent further fine-tuning on the recently released [self-rag](https://arxiv.org/abs//2310.11511) dataset. Other RAG-related instruct datasets were mixed in during this process in an effort to keep the tone of the current model. This model benchmarks well, but it needs further tuning to be an excellent conversationalist. Benchmark Results: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c806dc4515835c4d7b0b6d/_KV_hXZ0SPkmJUnHudoFz.png) SciPhi-AI is available via a free hosted API, though the exposed model can vary. Currently, SciPhi-Self-RAG-Mistral-7B-32k is available. More details can be found in the docs [here](https://sciphi.readthedocs.io/en/latest/setup/quickstart.html). ## Recommended Chat Formatting ``` We recommend mapping such that messages = [ { "role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate", }, {"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, ] goes to ---> ### System: You are a friendly chatbot who always responds in the style of a pirate ### Instruction: How many helicopters can a human eat in one sitting? ### Response: ... Here is a sample implementation that does this and combines with RAG context retrieval. def get_chat_completion( self, conversation: list[dict], generation_config: GenerationConfig ) -> str: self._check_stop_token(generation_config.stop_token) prompt = "" added_system_prompt = False for message in conversation: if message["role"] == "system": prompt += f"### System:\n{SciPhiLLMInterface.ALPACA_CHAT_SYSTEM_PROMPT}. Further, the assistant is given the following additional instructions - {message['content']}\n\n" added_system_prompt = True elif message["role"] == "user": last_user_message = message["content"] prompt += f"### Instruction:\n{last_user_message}\n\n" elif message["role"] == "assistant": prompt += f"### Response:\n{message['content']}\n\n" if not added_system_prompt: prompt = f"### System:\n{SciPhiLLMInterface.ALPACA_CHAT_SYSTEM_PROMPT}.\n\n{prompt}" context = self.rag_interface.get_contexts([last_user_message])[0] prompt += f"### Response:\n{SciPhiFormatter.RETRIEVAL_TOKEN} {SciPhiFormatter.INIT_PARAGRAPH_TOKEN}{context}{SciPhiFormatter.END_PARAGRAPH_TOKEN}" latest_completion = self.model.get_instruct_completion( prompt, generation_config ).strip() return SciPhiFormatter.remove_cruft(latest_completion) ``` ## Model Architecture Base Model: Mistral-7B-v0.1 **Architecture Features:** - Transformer-based model - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) ## References 1. Asai, A., Wu, Z., Wang, Y., Sil, A., & Hajishirzi, H. (2023). Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. arXiv preprint arXiv:2310.11511. 2. Lian, W., Goodson, B., Wang, G., Pentland, E., Cook, A., Vong, C., & Teknium. (2023). MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset. *HuggingFace repository*. 
[Link](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) 3. Mukherjee, S., Mitra, A., Jawahar, G., Agarwal, S., Palangi, H., & Awadallah, A. (2023). Orca: Progressive Learning from Complex Explanation Traces of GPT-4. *arXiv preprint arXiv:2306.02707*. 4. Longpre, S., Hou, L., Vu, T., Webson, A., Chung, H. W., Tay, Y., Zhou, D., Le, Q. V., Zoph, B., Wei, J., & Roberts, A. (2023). The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. *arXiv preprint arXiv:2301.13688*. 5. Mistral AI. (2023). Model Card for Mistral-7B-v0.1. The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks tested. For full details, please refer to the paper and release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/mistralai/Mistral-7B-v0.1) ## Acknowledgements Thank you to the [AI Alignment Lab](https://huggingface.co/Alignment-Lab-AI), [vikp](https://huggingface.co/vikp), [jph00](https://huggingface.co/jph00) and others who contributed to this work.
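For readers who want to try the prompt layout described above outside the SciPhi interface, a plain `transformers` generation sketch is shown below; the sampling settings and the example instruction are illustrative assumptions, not official recommendations.

```python
# Sketch: builds the "### System / ### Instruction / ### Response" prompt described above
# and generates with plain transformers; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SciPhi/SciPhi-Self-RAG-Mistral-7B-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "### System:\nYou are a helpful scientific assistant.\n\n"
    "### Instruction:\nExplain retrieval-augmented generation in one paragraph.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```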
QuantFactory/UltimateANJIR-8B-L3-Blackroot-GGUF
QuantFactory
"2024-06-04T09:25:54Z"
1,490
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "base_model:Hastagaras/UltimateANJIR-8B-L3-Blackroot", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-02T10:18:45Z"
--- base_model: Hastagaras/UltimateANJIR-8B-L3-Blackroot library_name: transformers tags: - mergekit - merge license: llama3 pipeline_tag: text-generation --- # UltimateANJIR-8B-L3-Blackroot-GGUF This is quantized version of [Hastagaras/UltimateANJIR-8B-L3-Blackroot](https://huggingface.co/Hastagaras/UltimateANJIR-8B-L3-Blackroot) created using llama.cpp # Model Description So this is a merge of merges. Based on this introverted model: [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) and this extroverted model: [Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1](https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1) I merged both with my model to add some RP ability, and it turns out the extroverted one is winning. And also, please don't mind the unusual tokenizer config and the template for all of my models, it's just my preference, you know...for easier testing. And the quantized GGUF might not work with Ollama Chat or Backyard AI... so you have to write the template manually, as it is still the llama 3 instruct template.
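Since the card notes that the Llama 3 instruct template may have to be written out by hand for some frontends, here is a small helper that assembles that standard template. The special tokens follow the upstream Llama 3 instruct format and should be checked against this repo's tokenizer/GGUF metadata.

```python
# Sketch: manual assembly of the standard Llama 3 instruct prompt mentioned above.
def build_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful roleplay assistant.", "Introduce yourself."))
```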
RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf
RichardErkhov
"2024-06-15T01:57:01Z"
1,490
0
null
[ "gguf", "region:us" ]
null
"2024-06-15T00:53:50Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Pelican-9b-v0.1 - GGUF - Model creator: https://huggingface.co/ConvexAI/ - Original model: https://huggingface.co/ConvexAI/Pelican-9b-v0.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Pelican-9b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q2_K.gguf) | Q2_K | 3.43GB | | [Pelican-9b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.81GB | | [Pelican-9b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.IQ3_S.gguf) | IQ3_S | 4.02GB | | [Pelican-9b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.99GB | | [Pelican-9b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.IQ3_M.gguf) | IQ3_M | 4.15GB | | [Pelican-9b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q3_K.gguf) | Q3_K | 4.44GB | | [Pelican-9b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.44GB | | [Pelican-9b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.84GB | | [Pelican-9b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.98GB | | [Pelican-9b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q4_0.gguf) | Q4_0 | 5.2GB | | [Pelican-9b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.IQ4_NL.gguf) | IQ4_NL | 5.25GB | | [Pelican-9b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q4_K_S.gguf) | Q4_K_S | 5.23GB | | [Pelican-9b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q4_K.gguf) | Q4_K | 5.53GB | | [Pelican-9b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.53GB | | [Pelican-9b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q4_1.gguf) | Q4_1 | 5.76GB | | [Pelican-9b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q5_0.gguf) | Q5_0 | 6.33GB | | [Pelican-9b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q5_K_S.gguf) | Q5_K_S | 6.33GB | | [Pelican-9b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q5_K.gguf) | Q5_K | 6.5GB | | [Pelican-9b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q5_K_M.gguf) | Q5_K_M | 6.5GB | | [Pelican-9b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q5_1.gguf) | Q5_1 | 6.9GB | | 
[Pelican-9b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q6_K.gguf) | Q6_K | 7.53GB | | [Pelican-9b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf/blob/main/Pelican-9b-v0.1.Q8_0.gguf) | Q8_0 | 9.76GB | Original model description: --- license: apache-2.0 tags: - mergekit - merge base_model: - flemmingmiguel/MBX-7B - flemmingmiguel/MBX-7B-v3 model-index: - name: Pelican-9b-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 47.95 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 66.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 62.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 50.61 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 74.66 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 0.0 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ConvexAI/Pelican-9b-v0.1 name: Open LLM Leaderboard --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ⚠**Warning** ⚠ Model is broken and outputs only broken german. Possibly obsessed with Fußball. âšœ ### Merge Method This model was merged using the passthrough merge method and only speaks german, somewhat obsessed with football. 
### Models Merged The following models were included in the merge: * [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B) * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: flemmingmiguel/MBX-7B-v3 layer_range: [0, 32] - sources: - model: flemmingmiguel/MBX-7B layer_range: [20, 32] merge_method: passthrough dtype: float16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ConvexAI__Pelican-9b-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |50.38| |AI2 Reasoning Challenge (25-Shot)|47.95| |HellaSwag (10-Shot) |66.22| |MMLU (5-Shot) |62.85| |TruthfulQA (0-shot) |50.61| |Winogrande (5-shot) |74.66| |GSM8k (5-shot) | 0.00|
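The quant files listed in the table at the top of this card can also be fetched programmatically with `huggingface_hub`; a short sketch follows. The filename matches one row of that table; pick whichever quant fits your hardware.

```python
# Sketch: download a single quant listed in the table above with huggingface_hub.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/ConvexAI_-_Pelican-9b-v0.1-gguf",
    filename="Pelican-9b-v0.1.Q4_K_M.gguf",
)
print(gguf_path)  # local path, ready to pass to llama.cpp or llama-cpp-python
```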
microsoft/resnet-26
microsoft
"2023-04-24T09:51:07Z"
1,489
1
transformers
[ "transformers", "pytorch", "tf", "safetensors", "resnet", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-16T15:41:05Z"
Entry not found
Norm/ERNIE-Layout-Pytorch
Norm
"2023-11-14T13:34:59Z"
1,489
13
transformers
[ "transformers", "pytorch", "xlnet", "arxiv:2210.06155", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-12-12T02:21:36Z"
--- license: mit --- # ERNIE-Layout_Pytorch - **Model type:** [ERNIE-Layout](https://arxiv.org/abs/2210.06155) - **Repository:** [source code](https://github.com/NormXU/ERNIE-Layout-Pytorch): an unofficial ERNIE-Layout implementation in PyTorch - **Converted from:** [PaddlePaddle/ernie-layoutx-base-uncased](https://huggingface.co/PaddlePaddle/ernie-layoutx-base-uncased) The ERNIE-Layout model was initially released by [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP). To make it easy for PyTorch users, the model has been converted into PyTorch format with the [tools/convert2torch.py](https://github.com/NormXU/ERNIE-Layout-Pytorch/blob/main/tools/convert2torch.py) script. Please feel free to make any changes you need. For more details and use cases, please check the repo. **A Quick Example** ```python import torch from PIL import Image import torch.nn.functional as F from networks import ErnieLayoutConfig, ErnieLayoutForQuestionAnswering, \ ErnieLayoutProcessor, ErnieLayoutTokenizerFast from transformers.models.layoutlmv3 import LayoutLMv3ImageProcessor pretrain_torch_model_or_path = "Norm/ERNIE-Layout-Pytorch" doc_imag_path = "./dummy_input.jpeg" context = ['This is an example sequence', 'All ocr boxes are inserted into this list'] layout = [[381, 91, 505, 115], [738, 96, 804, 122]] # make sure all boxes are normalized between 0 - 1000 pil_image = Image.open(doc_imag_path).convert("RGB") # initialize tokenizer tokenizer = ErnieLayoutTokenizerFast.from_pretrained(pretrained_model_name_or_path=pretrain_torch_model_or_path) # initialize feature extractor feature_extractor = LayoutLMv3ImageProcessor(apply_ocr=False) processor = ErnieLayoutProcessor(image_processor=feature_extractor, tokenizer=tokenizer) # Tokenize context & questions question = "what is it?" encoding = processor(pil_image, question, context, boxes=layout, return_tensors="pt") # dummy answer start && end index start_positions = torch.tensor([6]) end_positions = torch.tensor([12]) # initialize config config = ErnieLayoutConfig.from_pretrained(pretrained_model_name_or_path=pretrain_torch_model_or_path) config.num_classes = 2 # start and end # initialize ERNIE for VQA model = ErnieLayoutForQuestionAnswering.from_pretrained( pretrained_model_name_or_path=pretrain_torch_model_or_path, config=config, ) output = model(**encoding, start_positions=start_positions, end_positions=end_positions) # decode output start_max = torch.argmax(F.softmax(output.start_logits, dim=-1)) end_max = torch.argmax(F.softmax(output.end_logits, dim=-1)) + 1 # add one because of Python slice indexing answer = tokenizer.decode(encoding.input_ids[0][start_max: end_max]) print(answer) ```
TheBloke/Noromaid-20B-v0.1.1-GGUF
TheBloke
"2023-11-23T08:58:48Z"
1,489
20
transformers
[ "transformers", "gguf", "llama", "base_model:NeverSleep/Noromaid-20b-v0.1.1", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
"2023-11-23T08:47:36Z"
--- base_model: NeverSleep/Noromaid-20b-v0.1.1 inference: false license: cc-by-nc-4.0 model_creator: IkariDev and Undi model_name: Noromaid 20B v0.1.1 model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Noromaid 20B v0.1.1 - GGUF - Model creator: [IkariDev and Undi](https://huggingface.co/NeverSleep) - Original model: [Noromaid 20B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1) <!-- description start --> ## Description This repo contains GGUF format model files for [IkariDev and Undi's Noromaid 20B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF) * [IkariDev and Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [IkariDev and Undi's Noromaid 20B v0.1.1](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [noromaid-20b-v0.1.1.Q2_K.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes | | [noromaid-20b-v0.1.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss | | [noromaid-20b-v0.1.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss | | [noromaid-20b-v0.1.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss | | [noromaid-20b-v0.1.1.Q4_0.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [noromaid-20b-v0.1.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss | | [noromaid-20b-v0.1.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended | | [noromaid-20b-v0.1.1.Q5_0.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [noromaid-20b-v0.1.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended | | [noromaid-20b-v0.1.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended | | [noromaid-20b-v0.1.1.Q6_K.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss | | [noromaid-20b-v0.1.1.Q8_0.gguf](https://huggingface.co/TheBloke/Noromaid-20B-v0.1.1-GGUF/blob/main/noromaid-20b-v0.1.1.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Noromaid-20B-v0.1.1-GGUF and below it, a specific filename to download, such as: noromaid-20b-v0.1.1.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Noromaid-20B-v0.1.1-GGUF noromaid-20b-v0.1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Noromaid-20B-v0.1.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Noromaid-20B-v0.1.1-GGUF noromaid-20b-v0.1.1.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m noromaid-20b-v0.1.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Noromaid-20B-v0.1.1-GGUF", model_file="noromaid-20b-v0.1.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. 
Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik BjÀreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: IkariDev and Undi's Noromaid 20B v0.1.1 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Disclaimer: ## This is a ***TEST*** version, don't expect everything to work!!! You may use our custom **prompting format**(scroll down to download them!), or simple alpaca. **(Choose which fits best for you!)** --- # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Tired of the same merges everytime? Here it is, the Noromaid-20b-v0.1.1 model. Suitable for RP, ERP and general stuff. [Recommended settings - No settings yet(Please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains fp16 files of Noromaid-20b-v0.1.1. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) let the model have more human behavior, enhances the output. - [Aesir Private RP dataset] New data from a new and never used before dataset, add fresh data, no LimaRP spam, this is 100% new. 
Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek <!-- original-model-card end -->
jisukim8873/falcon-7B-case-5
jisukim8873
"2024-03-04T01:49:37Z"
1,489
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-04T00:52:31Z"
--- license: apache-2.0 datasets: - Open-Orca/SlimOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Jisu Kim * Model Type: Large Language Model # Model Architecture This model is based on falcon-7B. We fine-tune this model for the data ordering task. falcon-7B is a transformer model, with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sample the Open-Orca dataset. (We fine-tune on a 100,000-example subset.) # GitHub https://github.com/trailerAI # License Apache License 2.0
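The card does not include a usage snippet; a minimal text-generation sketch with `transformers` is given below. The `trust_remote_code=True` flag is an assumption based on the repo's `custom_code` tag, and the prompt is illustrative.

```python
# Sketch only: standard causal-LM loading; trust_remote_code is assumed to be needed
# because the repository is tagged custom_code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jisukim8873/falcon-7B-case-5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer("List the steps for brewing tea, in order:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```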
stefan-it/wav2vec2-large-xlsr-53-basque
stefan-it
"2021-03-29T15:54:40Z"
1,488
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "eu", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: eu datasets: - common_voice tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Basque Stefan Schweter results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice eu type: common_voice args: eu metrics: - name: Test WER type: wer value: 18.272625 --- # Wav2Vec2-Large-XLSR-53-Basque Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "eu", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("stefan-it/wav2vec2-large-xlsr-53-basque") model = Wav2Vec2ForCTC.from_pretrained("stefan-it/wav2vec2-large-xlsr-53-basque") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Basque test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "eu", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("stefan-it/wav2vec2-large-xlsr-53-basque") model = Wav2Vec2ForCTC.from_pretrained("stefan-it/wav2vec2-large-xlsr-53-basque") model.to("cuda") chars_to_ignore_regex = '[\\\\,\\\\?\\\\.\\\\!\\\\-\\\\;\\\\:\\\\"\\\\“\\\\%\\\\‘\\\\”\\\\ᅵ]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets.
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 18.272625% ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found here, hopefully very soon! ## Acknowledgements Many thanks to the [OVH team](https://www.ovhcloud.com) for providing access to a V-100 instance. Without their help, fine-tuning would not be possible! I would also thank [Manuel Romero](https://github.com/mrm8488) (mrm8488) for helping with the fine-tuning script!
TheBloke/Pygmalion-2-7B-GGUF
TheBloke
"2023-09-27T12:48:01Z"
1,488
25
transformers
[ "transformers", "gguf", "llama", "text generation", "instruct", "text-generation", "en", "dataset:PygmalionAI/PIPPA", "dataset:Open-Orca/OpenOrca", "dataset:Norquinal/claude_multiround_chat_30k", "dataset:jondurbin/airoboros-gpt4-1.4.1", "dataset:databricks/databricks-dolly-15k", "base_model:PygmalionAI/pygmalion-2-7b", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-09-05T22:02:07Z"
--- language: - en license: llama2 tags: - text generation - instruct datasets: - PygmalionAI/PIPPA - Open-Orca/OpenOrca - Norquinal/claude_multiround_chat_30k - jondurbin/airoboros-gpt4-1.4.1 - databricks/databricks-dolly-15k model_name: Pygmalion 2 7B base_model: PygmalionAI/pygmalion-2-7b inference: false model_creator: PygmalionAI model_type: llama pipeline_tag: text-generation prompt_template: 'The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history. The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here''s an example: ``` <|system|>Enter RP mode. Pretend to be {{char}} whose persona follows: {{persona}} You shall reply to the user while staying in character, and generate long responses. ``` ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Pygmalion 2 7B - GGUF - Model creator: [PygmalionAI](https://huggingface.co/PygmalionAI) - Original model: [Pygmalion 2 7B](https://huggingface.co/PygmalionAI/pygmalion-2-7b) <!-- description start --> ## Description This repo contains GGUF format model files for [PygmalionAI's Pygmalion 2 7B](https://huggingface.co/PygmalionAI/pygmalion-2-7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. 
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Pygmalion-2-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF) * [PygmalionAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/PygmalionAI/pygmalion-2-7b) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Custom The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history. The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example: ``` <|system|>Enter RP mode. Pretend to be {{char}} whose persona follows: {{persona}} You shall reply to the user while staying in character, and generate long responses. ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. 
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [pygmalion-2-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [pygmalion-2-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [pygmalion-2-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [pygmalion-2-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [pygmalion-2-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [pygmalion-2-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [pygmalion-2-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [pygmalion-2-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [pygmalion-2-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [pygmalion-2-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [pygmalion-2-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [pygmalion-2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Pygmalion-2-7B-GGUF/blob/main/pygmalion-2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! 
Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Pygmalion-2-7B-GGUF and below it, a specific filename to download, such as: pygmalion-2-7b.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Pygmalion-2-7B-GGUF pygmalion-2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Pygmalion-2-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Pygmalion-2-7B-GGUF pygmalion-2-7b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m pygmalion-2-7b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|system|>Enter RP mode. Pretend to be {{char}} whose persona follows:\n{{persona}}\n\nYou shall reply to the user while staying in character, and generate long responses." ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Pygmalion-2-7B-GGUF", model_file="pygmalion-2-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik BjÀreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 쀀교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: PygmalionAI's Pygmalion 2 7B <h1 style="text-align: center">Pygmalion-2 7B</h1> <h2 style="text-align: center">An instruction-tuned Llama-2 biased towards fiction writing and conversation.</h2> ## Model Details The long-awaited release of our new models based on Llama-2 is finally here. Pygmalion-2 7B (formerly known as Metharme) is based on [Llama-2 7B](https://huggingface.co/meta-llama/llama-2-7b-hf) released by Meta AI. The Metharme models were an experiment to try and get a model that is usable for conversation, roleplaying and storywriting, but which can be guided using natural language like other instruct models. After much deliberation, we reached the conclusion that the Metharme prompting format is superior (and easier to use) compared to the classic Pygmalion. This model was trained by doing supervised fine-tuning over a mixture of regular instruction data alongside roleplay, fictional stories and conversations with synthetically generated instructions attached. This model is freely available for both commercial and non-commercial use, as per the Llama-2 license. ## Prompting The model has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can happen multiple times and be chained up to form a conversation history. ### Prompting example The system prompt has been designed to allow the model to "enter" various modes and dictate the reply length. Here's an example: ``` <|system|>Enter RP mode. Pretend to be {{char}} whose persona follows: {{persona}} You shall reply to the user while staying in character, and generate long responses. ``` ## Dataset The dataset used to fine-tune this model includes our own [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA), along with several other instruction datasets, and datasets acquired from various RP forums. ## Limitations and biases The intended use-case for this model is fictional writing for entertainment purposes. Any other sort of usage is out of scope. As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading. 
## Acknowledgements We would like to thank [SpicyChat](https://spicychat.ai/) for sponsoring the training for this model. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <!-- original-model-card end -->
DILAB-HYU/koquality-polyglot-12.8b
DILAB-HYU
"2023-11-17T16:35:16Z"
1,488
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "polyglot-ko", "gpt-neox", "KoQuality", "ko", "dataset:DILAB-HYU/KoQuality", "base_model:EleutherAI/polyglot-ko-12.8b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-12T16:30:20Z"
---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
- gpt-neox
- KoQuality
base_model: EleutherAI/polyglot-ko-12.8b
---

This model is an instruction-tuned polyglot-ko-12.8b model, using KoQuality. -> 18step

## Training hyperparameters
- learning_rate: 5e-5
- seed: 42
- distributed_type: multi-GPU (A100 80G)
- num_devices: 6
- train_batch_size: 4
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0

## Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.6
- deepspeed 0.11.1
- accelerate 0.24.1
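The card documents training settings but no inference snippet; a minimal generation sketch (an editorial addition, not from the card) could look like the following, assuming the usual transformers causal-LM interface for this GPT-NeoX checkpoint and an illustrative prompt:

```python
# Minimal sketch, assuming standard transformers causal-LM loading works for
# this GPT-NeoX checkpoint; the prompt and sampling settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DILAB-HYU/koquality-polyglot-12.8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # 12.8B parameters: a large GPU or multi-GPU setup is needed
)

prompt = "한국의 수도는 어디읞가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```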
Yntec/C-.-_-.-Aravaggio
Yntec
"2023-12-21T01:37:47Z"
1,488
3
diffusers
[ "diffusers", "safetensors", "Base model", "General", "Everything", "Redigleb_Doppler2482", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-12-21T01:09:00Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Base model - General - Everything - Redigleb_Doppler2482 - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # C++AravaggioV0.9 - an Answer to both Dall-E and Kandinsky 2.1 Original page: https://civitai.com/models/93155/caravaggiov09-an-answer-to-both-dall-e-and-kandinsky-21?modelVersionId=99323 Samples and prompts: ![Caravaggio Samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/z4ITnI-xOnS3Lo6SuK3qe.png) Top left: Anime fine details portrait of joyful cute little girl lay school class room, bokeh. anime masterpiece by studio ghibli. 8k, sharp high quality classic anime from 1990 in style of hayao miyazaki. Wikipedia. hugging. OIL PAINTING. DOCTOR with short hair in coat BEAUTIFUL girl eyes. she has pigtails Top right: House with a waterwheel built into the roots of a giant tree, next to games, a colorful river landscape painting from a fantasy point and click 2 d graphic adventure game, art inspired by ROSSDRAWS and larry elmore and john shroades, king's quest, sierra entertainment Bottom left: An underwater world with vibrant coral reefs and schools of colorful fish. The artistic style is pop art, with bold and bright colors and graphic shapes. The light setting is filtered through the water, creating a surreal and dreamy effect. The mood of the image is energetic and lively, capturing the movement and vitality of the underwater environment. Bottom right: pretty young girl riding bike down the ocean streets of japan, teddy bear hour
Yntec/Toonify2
Yntec
"2023-08-10T14:06:21Z"
1,487
6
diffusers
[ "diffusers", "safetensors", "anime", "comic", "art", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "BetterThanNothing", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-08-10T13:46:20Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - anime - comic - art - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - BetterThanNothing --- # Toonify Preview and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/JDxweoQGlpRmcLZuRLIns.png) ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/XfGlgXedKBdmta899YCWn.png) sitting elementary girl, Pretty CUTE, gorgeous hair, DETAILED EYES, Futuristic city of tokyo japan, Magazine ad, iconic, 1943, sharp focus, 4k. (Sweaty). visible comic art by ROSSDRAWS and Clay Mann and kyoani Original page: https://civitai.com/models/36281
mtgv/MobileLLaMA-1.4B-Base
mtgv
"2023-12-29T06:11:55Z"
1,487
15
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:togethercomputer/RedPajama-Data-1T", "arxiv:2312.16886", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-28T09:52:27Z"
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
tags:
- llama
---

# Model Summary

MobileLLaMA-1.4B-Base is a Transformer with 1.4 billion parameters. We downscale LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all the models are trained on 1.3T tokens from the [RedPajama v1](https://www.together.ai/blog/redpajama) dataset only. This benefits further research by enabling controlled experiments.

We extensively assess our models on two standard natural language benchmarks, for language understanding and common sense reasoning respectively. Experimental results show that our MobileLLaMA 1.4B is on par with the most recent open-source models.

# Model Sources

- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: https://arxiv.org/abs/2312.16886

# How to Get Started with the Model

Model weights can be loaded with Hugging Face Transformers. Examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).

# Training Details

Please refer to section 4.1 of our paper: [MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices](https://arxiv.org/pdf/2312.16886.pdf).
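The card defers concrete examples to the GitHub repository; as a quick orientation, here is a minimal loading sketch (an editorial addition, not from the card), assuming the checkpoint exposes the standard LLaMA-style causal-LM interface:

```python
# Minimal sketch, assuming the checkpoint follows the standard transformers
# causal-LM layout; see the MobileVLM repository for the official examples.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtgv/MobileLLaMA-1.4B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```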
mrm8488/spanbert-large-finetuned-squadv2
mrm8488
"2021-05-20T00:59:58Z"
1,486
1
transformers
[ "transformers", "pytorch", "jax", "bert", "en", "arxiv:1907.10529", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: en thumbnail: --- # SpanBERT large fine-tuned on SQuAD v2 [SpanBERT](https://github.com/facebookresearch/SpanBERT) created by [Facebook Research](https://github.com/facebookresearch) and fine-tuned on [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) for **Q&A** downstream task ([by them](https://github.com/facebookresearch/SpanBERT#finetuned-models-squad-1120-relation-extraction-coreference-resolution)). ## Details of SpanBERT [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/abs/1907.10529) ## Details of the downstream task (Q&A) - Dataset 📚 🧐 ❓ [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. | Dataset | Split | # samples | | -------- | ----- | --------- | | SQuAD2.0 | train | 130k | | SQuAD2.0 | eval | 12.3k | ## Model fine-tuning 🏋‍ You can get the fine-tuning script [here](https://github.com/facebookresearch/SpanBERT) ```bash python code/run_squad.py \ --do_train \ --do_eval \ --model spanbert-large-cased \ --train_file train-v2.0.json \ --dev_file dev-v2.0.json \ --train_batch_size 32 \ --eval_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 4 \ --max_seq_length 512 \ --doc_stride 128 \ --eval_metric best_f1 \ --output_dir squad2_output \ --version_2_with_negative \ --fp16 ``` ## Results Comparison 📝 | | SQuAD 1.1 | SQuAD 2.0 | Coref | TACRED | | ---------------------- | ------------- | --------- | ------- | ------ | | | F1 | F1 | avg. F1 | F1 | | BERT (base) | 88.5* | 76.5* | 73.1 | 67.7 | | SpanBERT (base) | [92.4*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv1) | [83.6*](https://huggingface.co/mrm8488/spanbert-base-finetuned-squadv2) | 77.4 | [68.2](https://huggingface.co/mrm8488/spanbert-base-finetuned-tacred) | | BERT (large) | 91.3 | 83.3 | 77.1 | 66.4 | | SpanBERT (large) | [94.6](https://huggingface.co/mrm8488/spanbert-large-finetuned-squadv1) | **88.7** (this) | 79.6 | [70.8](https://huggingface.co/mrm8488/spanbert-large-finetuned-tacred) | Note: The numbers marked as * are evaluated on the development sets because those models were not submitted to the official SQuAD leaderboard. All the other numbers are test numbers. ## Model in action Fast usage with **pipelines**: ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="mrm8488/spanbert-large-finetuned-squadv2", tokenizer="SpanBERT/spanbert-large-cased" ) qa_pipeline({ 'context': "Manuel Romero has been working very hard in the repository hugginface/transformers lately", 'question': "How has been working Manuel Romero lately?" }) # Output: {'answer': 'very hard', 'end': 40, 'score': 0.9052708846768347, 'start': 31} ``` > Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) > Made with <span style="color: #e25555;">&hearts;</span> in Spain
levimorin/5GnEaJaYDbuM2RGCPNA26cbEu7o3Bj7PAeqpDQhVeRYWDNEz_vgg
levimorin
"2024-03-08T19:10:10Z"
1,486
0
keras
[ "keras", "region:us" ]
null
"2024-03-03T04:59:33Z"
Entry not found
TheDrummer/free-use
TheDrummer
"2024-04-28T06:20:59Z"
1,486
3
null
[ "gguf", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-03-17T19:01:26Z"
--- license: cc-by-nc-4.0 license_link: LICENSE ---
dbmdz/distilbert-base-german-europeana-cased
dbmdz
"2022-06-09T07:27:27Z"
1,485
7
transformers
[ "transformers", "pytorch", "tf", "distilbert", "historic german", "de", "license:mit", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: de license: mit tags: - "historic german" --- # 🀗 + 📚 dbmdz DistilBERT model In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources a German Europeana DistilBERT model 🎉 # German Europeana DistilBERT We use the open source [Europeana newspapers](http://www.europeana-newspapers.eu/) that were provided by *The European Library*. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens. Detailed information about the data and pretraining steps can be found in [this repository](https://github.com/stefan-it/europeana-bert). ## Results For results on Historic NER, please refer to [this repository](https://github.com/stefan-it/europeana-bert). ## Usage With Transformers >= 4.3 our German Europeana DistilBERT model can be loaded like: ```python from transformers import AutoModel, AutoTokenizer model_name = "dbmdz/distilbert-base-german-europeana-cased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) ``` # Huggingface model hub All other German Europeana models are available on the [Huggingface model hub](https://huggingface.co/dbmdz). # Contact (Bugs, Feedback, Contribution and more) For questions about our Europeana BERT, ELECTRA and ConvBERT models just open a new discussion [here](https://github.com/stefan-it/europeana-bert/discussions) 🀗 # Acknowledgments Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❀ Thanks to the generous support from the [Hugging Face](https://huggingface.co/) team, it is possible to download both cased and uncased models from their S3 storage 🀗
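As a complement to the loading snippet in the card above, here is a quick fill-mask sketch (an editorial addition, not part of the original card). It assumes the checkpoint ships the masked-language-modelling head, which is typical for base DistilBERT repositories:

```python
# Quick fill-mask sketch; assumes the repo includes the masked-LM head.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="dbmdz/distilbert-base-german-europeana-cased",
)

# Illustrative German sentence; [MASK] is the DistilBERT mask token.
for prediction in fill_mask("Berlin ist die [MASK] des Deutschen Reiches."):
    print(prediction["token_str"], round(prediction["score"], 3))
```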
digiplay/BeenYouLiteL11_diffusers
digiplay
"2024-06-02T23:15:10Z"
1,485
14
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-05-29T12:59:47Z"
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Cute, realistic anime-style model.

Model info: https://civitai.com/models/34440/beenyou-lite

Sample image I made:

![䞋茉 - 2023-05-30T013227.975.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/-GwmzLRAN1H4AKAwJM4sG.png)
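Since the card only links the Civitai page, here is a minimal text-to-image sketch (an editorial addition, not from the model author). It assumes the repo holds a standard diffusers `StableDiffusionPipeline`, as the repo tags suggest, and uses an illustrative prompt:

```python
# Minimal sketch, assuming a standard StableDiffusionPipeline layout;
# the prompt is illustrative, not taken from the model author.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/BeenYouLiteL11_diffusers",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "cute girl, realistic anime style, detailed eyes, soft natural lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("beenyou_lite_sample.png")
```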
osiria/distilbert-italian-cased-ner
osiria
"2023-06-11T11:07:45Z"
1,485
2
transformers
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "it", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-06-01T16:12:27Z"
--- license: apache-2.0 language: - it widget: - text: "Mi chiamo Marco Rossi, vivo a Roma e lavoro per l'Agenzia Spaziale Italiana" example_title: "Example 1" --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">    Task: Named Entity Recognition</span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DistilBERT</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>DistilBERT</b> <b>[1]</b> cased model for the <b>Italian</b> language, fine-tuned for <b>Named Entity Recognition</b> (<b>Person</b>, <b>Location</b>, <b>Organization</b> and <b>Miscellanea</b> classes) on the [WikiNER](https://figshare.com/articles/dataset/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) dataset <b>[2]</b>, using <b>DistilBERT-ITALIAN</b> ([distilbert-base-italian-cased](https://huggingface.co/osiria/distilbert-base-italian-cased)) as a pre-trained model. This is a cased DistilBERT model. If you are looking for a more accurate (but heavier) cased model, you can refer to: https://huggingface.co/osiria/bert-italian-cased-ner If you are looking for an uncased model, you can refer to: https://huggingface.co/osiria/bert-italian-uncased-ner <h3>Training and Performances</h3> The model is trained to perform entity recognition over 4 classes: <b>PER</b> (persons), <b>LOC</b> (locations), <b>ORG</b> (organizations), <b>MISC</b> (miscellanea, mainly events, products and services). It has been fine-tuned for Named Entity Recognition, using the WikiNER Italian dataset plus an additional custom dataset of manually annotated Wikipedia paragraphs. The WikiNER dataset has been splitted in 102.352 training instances and 25.588 test instances. The performances on the test set are reported in the following table: | Recall | Precision | F1 | | ------ | ------ | ------ | | 91.49 | 91.62 | 91.54 | The metrics have been computed at the token level and then macro-averaged over the 4 classes. Then, since WikiNER is an automatically annotated (silver standard) dataset, which sometimes contains imperfect annotations, an additional fine-tuning on ~3.500 manually annotated paragraphs has been performed. 
<h3>Quick usage</h3> ```python from transformers import DistilBertTokenizerFast, DistilBertForTokenClassification tokenizer = DistilBertTokenizerFast.from_pretrained("osiria/distilbert-italian-cased-ner") model = DistilBertForTokenClassification.from_pretrained("osiria/distilbert-italian-cased-ner") from transformers import pipeline ner = pipeline("ner", model = model, tokenizer = tokenizer, aggregation_strategy="first") ner("Mi chiamo Marco Rossi, vivo a Roma e lavoro per l'Agenzia Spaziale Italiana nella missione Prisma") [{'entity_group': 'PER', 'score': 0.9958186, 'word': 'Marco Rossi', 'start': 10, 'end': 21}, {'entity_group': 'LOC', 'score': 0.9933206, 'word': 'Roma', 'start': 30, 'end': 34}, {'entity_group': 'ORG', 'score': 0.99841493, 'word': 'Agenzia Spaziale Italiana', 'start': 50, 'end': 75}, {'entity_group': 'MISC', 'score': 0.6150316, 'word': 'Prisma', 'start': 91, 'end': 97}] ``` You can also try the model online using this web app: https://huggingface.co/spaces/osiria/distilbert-italian-cased-ner <h3>References</h3> [1] https://arxiv.org/abs/1910.01108 [2] https://www.sciencedirect.com/science/article/pii/S0004370212000276 <h3>Limitations</h3> This model is mainly trained on Wikipedia, so it's particularly suitable for natively digital text from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). However, it may show limitations when it comes to chaotic text, containing errors and slang expressions (like social media posts) or when it comes to domain-specific text (like medical, financial or legal content). <h3>License</h3> The model is released under <b>Apache-2.0</b> license
TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF
TheBloke
"2023-09-27T12:46:30Z"
1,485
14
transformers
[ "transformers", "gguf", "llama", "en", "dataset:OpenAssistant/oasst1", "dataset:shahules786/orca-best", "base_model:OpenAssistant/codellama-13b-oasst-sft-v10", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-08-27T14:17:26Z"
--- language: - en license: llama2 datasets: - OpenAssistant/oasst1 - shahules786/orca-best model_name: CodeLlama 13B SFT v10 base_model: OpenAssistant/codellama-13b-oasst-sft-v10 inference: false model_creator: OpenAssistant model_type: llama prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # CodeLlama 13B SFT v10 - GGUF - Model creator: [OpenAssistant](https://huggingface.co/OpenAssistant) - Original model: [CodeLlama 13B SFT v10](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10) <!-- description start --> ## Description This repo contains GGUF format model files for [OpenAssistant's CodeLlama 13B SFT v10](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF) * [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [codellama-13b-oasst-sft-v10.Q2_K.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [codellama-13b-oasst-sft-v10.Q3_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [codellama-13b-oasst-sft-v10.Q3_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [codellama-13b-oasst-sft-v10.Q3_K_L.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [codellama-13b-oasst-sft-v10.Q4_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [codellama-13b-oasst-sft-v10.Q4_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [codellama-13b-oasst-sft-v10.Q4_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [codellama-13b-oasst-sft-v10.Q5_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [codellama-13b-oasst-sft-v10.Q5_K_S.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [codellama-13b-oasst-sft-v10.Q5_K_M.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [codellama-13b-oasst-sft-v10.Q6_K.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [codellama-13b-oasst-sft-v10.Q8_0.gguf](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF/blob/main/codellama-13b-oasst-sft-v10.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF and below it, a specific filename to download, such as: codellama-13b-oasst-sft-v10.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF codellama-13b-oasst-sft-v10.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF codellama-13b-oasst-sft-v10.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m codellama-13b-oasst-sft-v10.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF", model_file="codellama-13b-oasst-sft-v10.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik BjÀreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 쀀교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: OpenAssistant's CodeLlama 13B SFT v10 # Open-Assistant CodeLlama 13B SFT v10 This model is an Open-Assistant fine-tuning of Meta's CodeLlama 13B LLM. **Note**: Due to the new RoPE Theta value (1e6 instead of 1e4), for correct results you must load this model with `trust_remote_code=True` or use the latest main branch of Huggingface transformers (until version 4.33 is released). ## Model Details - **Finetuned from:** [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) via [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) - **Model type:** Causal decoder-only transformer language model - **Language:** English - **Weights & Biases training logs:** 6123 steps, BS 64 [run56_oa_llamacode](https://wandb.ai/open-assistant/public-sft/runs/run56_oa_llamacode) - **Demo:** [Continuations for 250 random prompts (without system message)](https://open-assistant.github.io/oasst-model-eval/?f=https%3A%2F%2Fraw.githubusercontent.com%2FOpen-Assistant%2Foasst-model-eval%2Fmain%2Fsampling_reports%2Foasst-sft%2F2023-08-26_OpenAssistant_codellama-13b-oasst-sft-v10_sampling_noprefix2.json) - **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Contact:** [Open-Assistant Discord](https://ykilcher.com/open-assistant-discord) ## Prompting / Prompt Template Due to public demand (see [survey](https://twitter.com/erhartford/status/1682403597525430272)) we changed the prompt-template for this model from custom prompter/assistant tokens to OpenAI's [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) standard prompt format. We hope that this leads to greater compatibility with chat inference/frontend applications. Prompt dialogue template: ``` """ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant """ ``` The model input can contain multiple conversation turns between user and assistant, e.g. ``` <|im_start|>user {prompt 1}<|im_end|> <|im_start|>assistant {reply 1}<|im_end|> <|im_start|>user {prompt 2}<|im_end|> <|im_start|>assistant (...) ``` The model was partly trained with orca system messages. For inference we recommend to use the official [Llama2 system message](https://github.com/facebookresearch/llama/blob/ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7/example_chat_completion.py#L57-L61): ``` <|im_start|>system You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. 
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <|im_end|> ``` ### Credits & Special Thanks - Thanks to [Meta AI](https://ai.meta.com/) for training and releasing the CodeLLlama model. - Distributed training support was provided by EPFL's [Machine Learning and Optimization Laboratory](https://www.epfl.ch/labs/mlo/), and [Natural Language Processing Lab](https://nlp.epfl.ch/). - The open-source [epfLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) trainer was used for fine-tuning. - [rombodawg](https://huggingface.co/rombodawg) curated the [LosslessMegaCodeTrainingV2_1m_Evol_Uncensored](https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored) dataset. - [ehartford](https://huggingface.co/ehartford) generated and published the [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin). - [shahules786](https://github.com/shahules786) de-duped and filtered the Dolphin and Megacode dataset with a clustering/controid approach and generated orca-best & bestofmegacode. - [andreaskoepf](https://github.com/andreaskoepf/) prepared & orchestrated the training. ## Ethical Considerations and Limitations Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, the potential outputs of codellama-13b-oasst-sft-v10 cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of codellama-13b-oasst-sft-v10, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/). ## Configuration Details The "pretokenizer" utility used to tokenize the datamix is part of the Open-Assistant github repository and can be found here: [model/pretokenizer](https://github.com/LAION-AI/Open-Assistant/tree/main/model/pretokenizer). ### Pretokenizer Configuration ``` orca_megacode_oasst_best: datasets: - orca-chat: val_split: 0.01 max_val_set: 1000 - bestofmegacode: val_split: 0.01 max_val_set: 1000 - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" #hf_dataset_name: OpenAssistant/oasst1 input_file_path: 2023-08-25_oasst_ready.jsonl.gz top_k: 1 val_split: 0.025 output_dir: "output/orca_megacode_oasst_best" filename_prefix: "orca_megacode_oasst_best" min_assistant_tokens: 1 ``` <!-- original-model-card end -->
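The chatml prompt template above is described only as a string pattern; the following is a minimal sketch of combining it with the ctransformers loader shown earlier in this card. The system message, example prompt, and generation keyword arguments (`max_new_tokens`, `stop`) are illustrative assumptions rather than values taken from the original card.

```python
from ctransformers import AutoModelForCausalLM

# Illustrative sketch: the ctransformers loader from this card plus the chatml
# prompt format from the original model card. Settings below are placeholders.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF",
    model_file="codellama-13b-oasst-sft-v10.q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

system_message = "You are a helpful, respectful and honest assistant."
user_prompt = "Write a Python function that reverses a string."

# chatml-style prompt as described in the prompt template section above
prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

print(llm(prompt, max_new_tokens=256, stop=["<|im_end|>"]))
```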
capricornstone/ChiMed-GPT-1.0-Q4_K_M-GGUF
capricornstone
"2024-06-28T06:48:42Z"
1,485
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:SYNLP/ChiMed-GPT-1.0", "license:mit", "region:us" ]
null
"2024-06-28T06:48:05Z"
--- base_model: SYNLP/ChiMed-GPT-1.0 license: mit tags: - llama-cpp - gguf-my-repo --- # capricornstone/ChiMed-GPT-1.0-Q4_K_M-GGUF This model was converted to GGUF format from [`SYNLP/ChiMed-GPT-1.0`](https://huggingface.co/SYNLP/ChiMed-GPT-1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/SYNLP/ChiMed-GPT-1.0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo capricornstone/ChiMed-GPT-1.0-Q4_K_M-GGUF --hf-file chimed-gpt-1.0-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo capricornstone/ChiMed-GPT-1.0-Q4_K_M-GGUF --hf-file chimed-gpt-1.0-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo capricornstone/ChiMed-GPT-1.0-Q4_K_M-GGUF --hf-file chimed-gpt-1.0-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo capricornstone/ChiMed-GPT-1.0-Q4_K_M-GGUF --hf-file chimed-gpt-1.0-q4_k_m.gguf -c 2048 ```
timm/vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k
timm
"2023-05-06T00:08:24Z"
1,484
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:laion-2b", "dataset:imagenet-12k", "arxiv:2212.07143", "arxiv:2210.08402", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-11-01T23:02:08Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - laion-2b - imagenet-12k --- # Model card for vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k A Vision Transformer (ViT) image classification model. Pretrained on LAION-2B image-text pairs using OpenCLIP. Fine-tuned on ImageNet-12k and then ImageNet-1k in `timm`. See recipes in [Reproducible scaling laws](https://arxiv.org/abs/2212.07143). ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 632.0 - GMACs: 162.0 - Activations (M): 95.1 - Image size: 224 x 224 - **Papers:** - OpenCLIP: https://github.com/mlfoundations/open_clip - Reproducible scaling laws for contrastive language-image learning: https://arxiv.org/abs/2212.07143 - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** - LAION-2B - ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_huge_patch14_clip_224.laion2b_ft_in12k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 257, 1280) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @software{ilharco_gabriel_2021_5143773, author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig}, title = {OpenCLIP}, month = jul, year = 2021, note = {If you use this software, please cite it as below.}, publisher = {Zenodo}, version = {0.1}, doi = {10.5281/zenodo.5143773}, url = {https://doi.org/10.5281/zenodo.5143773} } ``` ```bibtex @article{cherti2022reproducible, title={Reproducible scaling laws for contrastive language-image learning}, author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia}, journal={arXiv preprint arXiv:2212.07143}, year={2022} } ``` ```bibtex @inproceedings{schuhmann2022laionb, title={{LAION}-5B: An open large-scale dataset for training next generation image-text models}, author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev}, booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track}, year={2022}, url={https://openreview.net/forum?id=M3Y74vmsMcY} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
defog/sqlcoder2
defog
"2023-10-13T16:43:20Z"
1,484
105
transformers
[ "transformers", "pytorch", "gpt_bigcode", "text-generation", "code", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-02T12:13:43Z"
--- license: other language: - en pipeline_tag: text-generation tags: - code --- # Defog SQLCoder Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries. [Interactive Demo](https://defog.ai/sqlcoder-demo/) | [🀗 HF Repo](https://huggingface.co/defog/sqlcoder2) | [♟ Colab](https://colab.research.google.com/drive/1z4rmOEiFkxkMiecAWeTUlPl0OmKgfEu7?usp=sharing) | [🐊 Twitter](https://twitter.com/defogdata) ## TL;DR SQLCoder is a 15B parameter model that outperforms `gpt-3.5-turbo` for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models. When fine-tuned on a given schema, it also outperforms `gpt-4` SQLCoder is fine-tuned on a base StarCoder model. ## Results on novel datasets not seen in training | model | perc_correct | |-|-| | gpt4-2023-10-04 | 82.0 | | defog-sqlcoder2 | 77.5 | | gpt4-2023-08-28 | 74.0 | | defog-sqlcoder-7b | 71.0 | | gpt-3.5-2023-10-04 | 66.0 | | claude-2 | 64.5 | | gpt-3.5-2023-08-28 | 61.0 | | claude_instant_1 | 61.0 | | text-davinci-003 | 52.5 | ## License The code in this repo (what little there is of it) is Apache-2 licensed. The model weights have a `CC BY-SA 4.0` license, with additional responsible use restrictions added. The TL;DR is that you can use and modify the model for any purpose – including commercial use. However, if you modify the weights (for example, by fine-tuning), you must open-source your modified weights under the same license terms. ## Training Defog was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework. You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/). ## Results by question category We classified each generated question into one of 5 categories. The table displays the percentage of questions answered correctly by each model, broken down by category. | query_category | gpt-4 | sqlcoder2-15b | sqlcoder-7b | gpt-3.5 | claude-2 | claude-instant | gpt-3 | |:-----------------|--------:|----------------:|--------------:|----------:|-----------:|-----------------:|--------:| | date | 72 | 80 | 64 | 68 | 52 | 48 | 32 | | group_by | 91.4 | 82.9 | 82.9 | 77.1 | 71.4 | 71.4 | 71.4 | | order_by | 82.9 | 77.1 | 74.3 | 68.6 | 74.3 | 74.3 | 68.6 | | ratio | 80 | 74.3 | 54.3 | 37.1 | 57.1 | 45.7 | 25.7 | | join | 82.9 | 74.3 | 74.3 | 71.4 | 65.7 | 62.9 | 57.1 | | where | 80 | 77.1 | 74.3 | 74.3 | 62.9 | 60 | 54.3 | ## Using SQLCoder You can use SQLCoder via the `transformers` library by downloading our model weights from the Hugging Face repo. We have added sample code for [inference](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) on a [sample database schema](https://github.com/defog-ai/sqlcoder/blob/main/metadata.sql). ```bash python inference.py -q "Question about the sample database goes here" # Sample question: # Do we get more revenue from customers in New York compared to customers in San Francisco? Give me the total revenue for each city, and the difference between the two. 
``` You can also use a demo on our website [here](https://defog.ai/sqlcoder-demo), or run SQLCoder in Colab [here](https://colab.research.google.com/drive/13BIKsqHnPOBcQ-ba2p77L5saiepTIwu0#scrollTo=ZpbVgVHMkJvC). ## Hardware Requirements SQLCoder has been tested on an A100 40GB GPU with `bfloat16` weights. You can also load 8-bit and 4-bit quantized versions of the model on consumer GPUs with 20GB or more of memory, such as the RTX 4090, RTX 3090, and Apple M2 Pro, M2 Max, or M2 Ultra chips. ## Todo - [x] Open-source the v1 model weights - [x] Train the model on more data, with higher data variance - [ ] Tune the model further with Reward Modelling and RLHF - [ ] Pretrain a model from scratch that specializes in SQL analysis
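As a hedged complement to the `inference.py` script referenced above, the sketch below only shows how the weights can be loaded with Hugging Face `transformers`. The prompt string is a placeholder; the real prompt format (database schema plus question) is defined in the linked `inference.py` and `metadata.sql`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "defog/sqlcoder2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # the card reports testing with bfloat16 weights
    device_map="auto",           # requires the accelerate package
)

# Placeholder prompt: the schema-plus-question format lives in inference.py / metadata.sql.
prompt = "### Task\nGenerate a SQL query to answer the following question: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```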
FreedomIntelligence/Apollo-7B
FreedomIntelligence
"2024-04-26T11:13:29Z"
1,484
21
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:2403.03640", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-06T13:06:29Z"
--- license: apache-2.0 --- # Multilingual Medicine: Model, Dataset, Benchmark, Code Covering English, Chinese, French, Hindi, Spanish, Hindi, Arabic So far <p align="center"> 👚🏻‍💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> •📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🌐 <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> • 🀗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🀗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <br> <a href="./README_zh.md"> äž­æ–‡ </a> | <a href="./README.md"> English </p> ![Apollo](assets/apollo_medium_final.png) ## 🌈 Update * **[2024.04.25]** [MedJamba](https://huggingface.co/FreedomIntelligence/Apollo-MedJamba) released, train and evaluation code refer to [repo](https://github.com/FreedomIntelligence/MedJamba). * **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released. * **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> is published🎉 * **[2024.01.23]** Apollo repo is published🎉 ## Results 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-34B" target="_blank">Apollo-34B</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-72B" target="_blank">Apollo-72B</a> 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-MedJamba" target="_blank">MedJamba</a> 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> • 🀗 <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a> ![Apollo](assets/result.png) ## Usage Format User:{query}\nAssistant:{response}<|endoftext|> ## Dataset & Evaluation - Dataset 🀗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> <details><summary>Click to expand</summary> ![Apollo](assets/dataset.png) - [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip) - [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train) - Pretrain: - data item: - json_name: {data_source}_{language}_{data_type}.json - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki - language: en(English), zh(chinese), es(spanish), fr(french), hi(Hindi) - data_type: qa(generated qa from text) - data_type==text: list of string ``` [ "string1", "string2", ... ] ``` - data_type==qa: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... 
] ``` - SFT: - json_name: {data_source}_{language}.json - data_type: code, general, math, medicalExam, medicalPatient - data item: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... ] ``` </details> - Evaluation 🀗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <details><summary>Click to expand</summary> - EN: - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test) - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper. - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - ZH: - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test) - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper - Randomly sample 2,000 multiple-choice questions with single answer. - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu) - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper - Randomly sample 2,000 multiple-choice questions - ES: [Head_qa](https://huggingface.co/datasets/head_qa) - FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine </details> ## Results reproduction <details><summary>Click to expand</summary> **Waiting for Update** </details> ## Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{wang2024apollo, title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People}, author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang}, year={2024}, eprint={2403.03640}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
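The usage format above is given only as a string pattern; a minimal sketch of applying it with `transformers` follows. The example question and generation settings are illustrative assumptions, not values from the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of the documented "User:{query}\nAssistant:{response}<|endoftext|>" format.
model_name = "FreedomIntelligence/Apollo-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

query = "What are the common symptoms of iron-deficiency anemia?"
prompt = f"User:{query}\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```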
BELLE-2/Belle-whisper-large-v3-zh
BELLE-2
"2024-06-11T03:13:26Z"
1,484
71
transformers
[ "transformers", "pytorch", "whisper", "automatic-speech-recognition", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-03-11T01:40:25Z"
--- license: apache-2.0 metrics: - cer --- ## Welcome If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE and https://github.com/shuaijiang/Whisper-Finetune # Belle-whisper-large-v3-zh Fine-tuned from whisper-large-v3 to enhance Chinese speech recognition capabilities, Belle-whisper-large-v3-zh demonstrates a **24-65%** relative improvement in performance on Chinese ASR benchmarks, including AISHELL1, AISHELL2, WENETSPEECH, and HKUST. ## Usage ```python from transformers import pipeline transcriber = pipeline( "automatic-speech-recognition", model="BELLE-2/Belle-whisper-large-v3-zh" ) transcriber.model.config.forced_decoder_ids = ( transcriber.tokenizer.get_decoder_prompt_ids( language="zh", task="transcribe" ) ) transcription = transcriber("my_audio.wav") ``` ## Fine-tuning | Model | (Re)Sample Rate | Train Datasets | Fine-tuning (full or peft) | |:----------------:|:-------:|:----------------------------------------------------------:|:-----------:| | Belle-whisper-large-v3-zh | 16KHz | [AISHELL-1](https://openslr.magicdatatech.com/resources/33/) [AISHELL-2](https://www.aishelltech.com/aishell_2) [WenetSpeech](https://wenet.org.cn/WenetSpeech/) [HKUST](https://catalog.ldc.upenn.edu/LDC2005S15) | [full fine-tuning](https://github.com/shuaijiang/Whisper-Finetune) | If you want to fine-tune the model on your own datasets, please refer to the [github repo](https://github.com/shuaijiang/Whisper-Finetune) ## CER(%) ↓ | Model | Language Tag | aishell_1_test(↓) |aishell_2_test(↓)| wenetspeech_net(↓) | wenetspeech_meeting(↓) | HKUST_dev(↓)| |:----------------:|:-------:|:-----------:|:-----------:|:--------:|:-----------:|:-------:| | whisper-large-v3 | Chinese | 8.085 | 5.475 | 11.72 | 20.15 | 28.597 | | Belle-whisper-large-v2-zh | Chinese | 2.549 | 3.746 | 8.503 | 14.598 | 16.289 | | Belle-whisper-large-v3-zh | Chinese | 2.781 | 3.786 | 8.865 | **11.246** | 16.440 | It is worth mentioning that compared to Belle-whisper-large-v2-zh, Belle-whisper-large-v3-zh shows a significant improvement in complex acoustic scenes (such as wenetspeech_meeting). ## Citation Please cite our paper and github when using our code, data or model. ``` @misc{BELLE, author = {BELLEGroup}, title = {BELLE: Be Everyone's Large Language model Engine}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/LianjiaTech/BELLE}}, } ```
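The usage example above transcribes a single short clip; for recordings longer than about 30 seconds, a chunked pipeline is a common pattern, sketched below. The file name, chunk length, and batch size are illustrative assumptions rather than settings recommended by the authors.

```python
from transformers import pipeline

# Chunked transcription sketch for long recordings; values below are illustrative.
transcriber = pipeline(
    "automatic-speech-recognition",
    model="BELLE-2/Belle-whisper-large-v3-zh",
    chunk_length_s=30,
    batch_size=8,
)
transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(language="zh", task="transcribe")
)
print(transcriber("my_long_audio.wav")["text"])
```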
Kukedlc/NeuralLLaMa-3-8b-ORPO-v0.4
Kukedlc
"2024-06-15T07:02:25Z"
1,484
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-16T18:40:01Z"
--- library_name: transformers license: other --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🀗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Koleshjr/mistral_7b_v2_q4_k_m
Koleshjr
"2024-06-26T21:19:35Z"
1,484
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-26T21:05:16Z"
--- base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf --- # Uploaded model - **Developed by:** Koleshjr - **License:** apache-2.0 - **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nlpai-lab/kullm-polyglot-12.8b-v2
nlpai-lab
"2023-05-31T15:48:09Z"
1,483
50
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-31T06:28:28Z"
--- license: apache-2.0 language: - ko --- # KULLM-Polyglot-12.8B-v2 This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on the KULLM v2 dataset. Detailed code is available at the [KULLM Github Repository](https://github.com/nlpai-lab/KULLM) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-4 - train_batch_size: 6 - seed: 42 - distributed_type: multi-GPU (A100 80G) - num_devices: 4 - gradient_accumulation_steps: 21.3 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8.0 ### Framework versions - Transformers 4.28.1 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
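A minimal, hedged loading sketch follows; the official prompt template and inference utilities are in the KULLM Github Repository linked above, and the Korean prompt below is only a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal loading sketch; see the KULLM repository for the official prompt format.
model_name = "nlpai-lab/kullm-polyglot-12.8b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("안녕하섞요, 자Ʞ소개륌 핎 죌섞요.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```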
Xenova/paraphrase-multilingual-mpnet-base-v2
Xenova
"2024-03-21T12:05:28Z"
1,482
2
transformers.js
[ "transformers.js", "onnx", "xlm-roberta", "feature-extraction", "region:us" ]
feature-extraction
"2023-05-23T14:31:51Z"
--- library_name: "transformers.js" --- https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2 with ONNX weights to be compatible with Transformers.js. Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🀗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
moreh/MoMo-72B-lora-1.8.6-DPO
moreh
"2024-01-22T00:09:36Z"
1,482
32
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "arxiv:2305.18290", "arxiv:2106.09685", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-16T02:11:50Z"
--- license: mit language: - en --- # **Introduction** MoMo-72B-lora-1.8.6-DPO is trained via Direct Preference Optimization ([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters. [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model. Note that we did not exploit any form of weight merge. For leaderboard submission, the trained weight is realigned for compatibility with llama. MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU. ## Details ### Used Libraries - torch - peft ### Used Datasets - [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca) - [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1) - [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) - No other dataset was used - No benchmark test set or training set was used - [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result | Model | ARC | MMLU | TruthfulQA | GSM8K | |------------------------------|-------|-------|-------|-------| | **V1.8.6 (result < 0.1, %)** | TBU | TBU | 0.73 | TBU | ### Used Environments - AMD MI250 & MoAI platform - Please visit https://moreh.io/product for more information about the MoAI platform - Or, contact us directly at [[email protected]](mailto:[email protected]) ## How to use ```python # pip install transformers==4.35.2 import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.6-DPO") model = AutoModelForCausalLM.from_pretrained( "moreh/MoMo-72B-lora-1.8.6-DPO" ) ```
QuantFactory/dolphin-2.9-llama3-8b-256k-GGUF
QuantFactory
"2024-05-06T09:32:49Z"
1,481
3
transformers
[ "transformers", "gguf", "text-generation", "base_model:cognitivecomputations/dolphin-2.9-llama3-8b-256k", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-05T14:41:06Z"
--- license: llama3 library_name: transformers pipeline_tag: text-generation base_model: cognitivecomputations/dolphin-2.9-llama3-8b-256k --- # Dolphin-2.9-llama3-8b-256k-GGUF This is a quantized version of [cognitivecomputations/dolphin-2.9-llama3-8b-256k](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b-256k) created using llama.cpp.
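A hedged loading sketch with `ctransformers` is shown below. The GGUF file name is an assumption; check the repository file list for the exact name, and see the base model card for the recommended prompt format.

```python
from ctransformers import AutoModelForCausalLM

# Hypothetical loading sketch; the model_file name below is an assumption.
llm = AutoModelForCausalLM.from_pretrained(
    "QuantFactory/dolphin-2.9-llama3-8b-256k-GGUF",
    model_file="dolphin-2.9-llama3-8b-256k.Q4_K_M.gguf",  # assumed file name
    model_type="llama",
    gpu_layers=0,  # raise this to offload layers to a GPU
)
print(llm("Write a haiku about the ocean.", max_new_tokens=64))
```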
HooshvareLab/bert-base-parsbert-peymaner-uncased
HooshvareLab
"2021-05-18T20:45:45Z"
1,480
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "token-classification", "fa", "arxiv:2005.12515", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:04Z"
--- language: fa license: apache-2.0 --- ## ParsBERT: Transformer-based Model for Persian Language Understanding ParsBERT is a monolingual language model based on Google’s BERT architecture with the same configurations as BERT-Base. Paper presenting ParsBERT: [arXiv:2005.12515](https://arxiv.org/abs/2005.12515) All the models (downstream tasks) are uncased and trained with whole word masking. (coming soon stay tuned) ## Persian NER [ARMAN, PEYMA, ARMAN+PEYMA] This task aims to extract named entities in the text, such as names and label with appropriate `NER` classes such as locations, organizations, etc. The datasets used for this task contain sentences that are marked with `IOB` format. In this format, tokens that are not part of an entity are tagged as `”O”` the `”B”`tag corresponds to the first word of an object, and the `”I”` tag corresponds to the rest of the terms of the same entity. Both `”B”` and `”I”` tags are followed by a hyphen (or underscore), followed by the entity category. Therefore, the NER task is a multi-class token classification problem that labels the tokens upon being fed a raw text. There are two primary datasets used in Persian NER, `ARMAN`, and `PEYMA`. In ParsBERT, we prepared ner for both datasets as well as a combination of both datasets. ### PEYMA PEYMA dataset includes 7,145 sentences with a total of 302,530 tokens from which 41,148 tokens are tagged with seven different classes. 1. Organization 2. Money 3. Location 4. Date 5. Time 6. Person 7. Percent | Label | # | |:------------:|:-----:| | Organization | 16964 | | Money | 2037 | | Location | 8782 | | Date | 4259 | | Time | 732 | | Person | 7675 | | Percent | 699 | **Download** You can download the dataset from [here](http://nsurl.org/tasks/task-7-named-entity-recognition-ner-for-farsi/) ## Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |---------|----------|------------|--------------|----------|----------------|------------| | PEYMA | 98.79* | - | 90.59 | - | 84.00 | - | ## How to use :hugs: | Notebook | Description | | |:----------|:-------------|------:| | [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | ## Cite Please cite the following paper in your publication if you are using [ParsBERT](https://arxiv.org/abs/2005.12515) in your research: ```markdown @article{ParsBERT, title={ParsBERT: Transformer-based Model for Persian Language Understanding}, author={Mehrdad Farahani, Mohammad Gharachorloo, Marzieh Farahani, Mohammad Manthouri}, journal={ArXiv}, year={2020}, volume={abs/2005.12515} } ``` ## Acknowledgments We hereby, express our gratitude to the [Tensorflow Research Cloud (TFRC) program](https://tensorflow.org/tfrc) for providing us with the necessary computation resources. We also thank [Hooshvare](https://hooshvare.com) Research Group for facilitating dataset gathering and scraping online text resources. 
## Contributors - Mehrdad Farahani: [Linkedin](https://www.linkedin.com/in/m3hrdadfi/), [Twitter](https://twitter.com/m3hrdadfi), [Github](https://github.com/m3hrdadfi) - Mohammad Gharachorloo: [Linkedin](https://www.linkedin.com/in/mohammad-gharachorloo/), [Twitter](https://twitter.com/MGharachorloo), [Github](https://github.com/baarsaam) - Marzieh Farahani: [Linkedin](https://www.linkedin.com/in/marziehphi/), [Twitter](https://twitter.com/marziehphi), [Github](https://github.com/marziehphi) - Mohammad Manthouri: [Linkedin](https://www.linkedin.com/in/mohammad-manthouri-aka-mansouri-07030766/), [Twitter](https://twitter.com/mmanthouri), [Github](https://github.com/mmanthouri) - Hooshvare Team: [Official Website](https://hooshvare.com/), [Linkedin](https://www.linkedin.com/company/hooshvare), [Twitter](https://twitter.com/hooshvare), [Github](https://github.com/hooshvare), [Instagram](https://www.instagram.com/hooshvare/) + And a special thanks to Sara Tabrizi for her fantastic poster design. Follow her on: [Linkedin](https://www.linkedin.com/in/sara-tabrizi-64548b79/), [Behance](https://www.behance.net/saratabrizi), [Instagram](https://www.instagram.com/sara_b_tabrizi/) ## Releases ### Release v0.1 (May 29, 2019) This is the first version of our ParsBERT NER!
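For a quick start outside the linked notebook, a minimal token-classification pipeline sketch is shown below; the Persian sample sentence and the aggregation setting are illustrative choices, not part of the original card.

```python
from transformers import pipeline

# Minimal NER sketch; sample sentence and aggregation strategy are illustrative.
ner = pipeline(
    "token-classification",
    model="HooshvareLab/bert-base-parsbert-peymaner-uncased",
    aggregation_strategy="simple",
)
print(ner("علی روز ؎نبه به تهران رفت."))
```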
OwenArli/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF
OwenArli
"2024-06-05T03:15:15Z"
1,480
0
null
[ "gguf", "license:llama3", "region:us" ]
null
"2024-06-03T11:56:09Z"
--- license: llama3 --- Based on Meta-Llama-3-8b-Instruct, and is governed by Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct Base model: https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 SFT fine tune of Meta Llama 3 8B Instruct Abliterated v3 by Failspy using an improved Dolphin and WizardLM dataset intended to remove GPT-isms and make the model follow instructions more exactly while paying attention to details better. Since it is based on the Abliterated version of Llama 3 8B Instruct it should naturally not refuse to answer in the first place and this fine tuning should make it comply even better. We also have it up on our site https://awanllm.com for anyone to try! Best practices: - Be precise and explain what you want the model to do. It has less base "personality" than the OG model but it will act however you tell it to. - This model works best with system prompts that tells it that it is the character, instead of telling it to act as a character. Training: - Full 8192 sequence length - Training duration is around 2.5 days on an RTX 4090 - 1 epoch training with a massive dataset for minimized repetition sickness. - Using 4-bit loading and Qlora 64-rank 64-alpha resulting in ~2% trainable weights. Llama 3 Instruct format: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` Quants: FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0 GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF
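A hedged sketch of the Llama 3 Instruct format above, applied through the tokenizer's chat template, is shown below for the FP16 repository linked in the Quants section. The system prompt, user message, and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of the Llama 3 Instruct format via apply_chat_template; values are placeholders.
model_name = "AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0"  # FP16 repo linked above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a concise, detail-oriented assistant."},
    {"role": "user", "content": "Explain the difference between a list and a tuple in Python."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```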
sapienzanlp/modello-italia-9b-bf16
sapienzanlp
"2024-06-08T09:46:56Z"
1,480
9
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "conversational", "it", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-06T17:25:22Z"
--- license: mit language: - it --- # Model Card for Modello Italia 9B This is our UNOFFICIAL conversion of the OFFICIAL model checkpoint of *"Modello Italia 9B"*, the first Large Language Model (LLM) developed by [iGenius](https://it.igenius.ai/) in collaboration [CINECA](https://www.cineca.it/). * More information about Modello Italia: [click here](https://it.igenius.ai/language-models). ## 🚚 Disclaimers * This is an UNOFFICIAL conversion of the OFFICIAL model checkpoint released by iGenius. * The original model was developed using LitGPT, therefore, the weights need to be converted before they can be used with Hugging Face transformers. * **Note:** By using this model, you accept the iGenius' [**terms and conditions**](https://secure.igenius.ai/legal/italia_terms_and_conditions.pdf). ## 🚚 Biases and Risks From the terms and conditions of iGenius for Modello Italia: > Modello Italia Ú concepito per essere utilizzato da tutti e per adattarsi a una vasta gamma di casi d'uso. È stato progettato con l'obiettivo di essere accessibile a persone provenienti da background, esperienze e prospettive diverse. Modello Italia si rivolge agli utenti e alle loro esigenze senza inserire giudizi superflui o normative, riconoscendo al contempo che anche contenuti potenzialmente problematici in determinati contesti possono avere scopi validi in altri. Il rispetto per la dignità e l'autonomia di tutti gli utenti, specialmente in termini di libertà di pensiero ed espressione, Ú un pilastro fondamentale del suo design. Tuttavia, essendo una nuova tecnologia, Modello Italia comporta rischi legati al suo utilizzo. I test condotti finora sono stati eseguiti in italiano e non hanno potuto coprire tutte le possibili situazioni. Pertanto, come per tutti gli LLM, non Ú possibile prevedere in anticipo gli output di Modello Italia e il modello potrebbe in alcuni casi generare risposte imprecise, tendenziose o altre risposte discutibili. Prima di utilizzare Modello Italia in qualsiasi contesto, gli sviluppatori sono fortemente incoraggiati a eseguire test di sicurezza e adattamento specifici per le loro applicazioni. We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data. For more information about this issue, please refer to our survey paper: * [Biases in Large Language Models: Origins, Inventory, and Discussion](https://dl.acm.org/doi/full/10.1145/3597307) ## Training dataset The following information is based on the information we could gather, that is, it is NOT official. Please take it with a pinch of salt as we continue to study Modello Italia. * **The training data of Modello Italia is unknown;** * Modello Italia is probably trained on around 1T tokens of Italian text; * We know that the training data is mostly Italian text and source code; * We know that the training data includes text from Editoria Nazionale. ## Tokenizer The following information is based on the information we could gather, that is, it is NOT official. Please take it with a pinch of salt as we continue to study Modello Italia. * The tokenizer is **vanilla SentencePiece**, probably trained from scratch on Italian data; * Vocabulary size is 50000. ## Model architecture The following information is based on the information we could gather, that is, it is NOT official. Please take it with a pinch of salt as we continue to study Modello Italia. 
* The model architecture is **based on GPT-NeoX**. ## Results Modello Italia 9B has not been evaluated on standard benchmarks yet. We will update this model card with the results soon. * **Want to contribute to the evaluation?** Submit a pull request! ## How to use Modello Italia with Hugging Face transformers ```python import torch import transformers as tr device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = tr.AutoTokenizer.from_pretrained("sapienzanlp/modello-italia-9b-bf16") model = tr.AutoModelForCausalLM.from_pretrained( "sapienzanlp/modello-italia-9b-bf16", device_map=device, torch_dtype=torch.bfloat16 ) MY_SYSTEM_PROMPT_SHORT = ( "Tu sei Modello Italia, un modello di linguaggio naturale addestrato da iGenius." ) prompt = "Ciao, chi sei?" messages = [ {"role": "system", "content": MY_SYSTEM_PROMPT_SHORT}, {"role": "user", "content": prompt}, ] tokenized_chat = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ).to(device) out = model.generate( tokenized_chat, max_new_tokens=200, do_sample=False ) ```
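The snippet above stops at `model.generate` without decoding; a small follow-up sketch, reusing the `out`, `tokenized_chat`, and `tokenizer` variables defined there, is:

```python
# Decoding step (assumed continuation of the example above): drop the prompt
# tokens and print only the model's reply.
generated = out[0][tokenized_chat.shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```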
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
AlexKay
"2022-07-19T15:33:20Z"
1,479
41
transformers
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "en", "ru", "multilingual", "arxiv:1912.09723", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-03-02T23:29:04Z"
--- language: - en - ru - multilingual license: apache-2.0 --- # XLM-RoBERTa large model whole word masking finetuned on SQuAD Pretrained model using a masked language modeling (MLM) objective, fine-tuned on English and Russian QA datasets. ## Used QA Datasets SQuAD + SberQuAD The [SberQuAD original paper](https://arxiv.org/pdf/1912.09723.pdf) is recommended reading! ## Evaluation results The results obtained are the following (SberQuAD): ``` f1 = 84.3 exact_match = 65.3 ```
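A minimal usage sketch with the `transformers` question-answering pipeline follows; the question and context are arbitrary examples, not taken from the evaluation data.

```python
from transformers import pipeline

# Usage sketch; question and context are arbitrary examples.
qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)
result = qa(
    question="Where was Yuri Gagarin born?",
    context="Yuri Gagarin was born in the village of Klushino, near Gzhatsk.",
)
print(result["answer"], result["score"])
```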
Langboat/mengzi-t5-base
Langboat
"2023-05-08T08:43:21Z"
1,479
46
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "zh", "arxiv:2110.06696", "doi:10.57967/hf/0025", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-03-02T23:29:04Z"
--- language: - zh license: apache-2.0 --- # Mengzi-T5 model (Chinese) Pretrained model on 300G Chinese corpus. [Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696) ## Usage ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("Langboat/mengzi-t5-base") model = T5ForConditionalGeneration.from_pretrained("Langboat/mengzi-t5-base") ``` ## Citation If you find the technical report or resource is useful, please cite the following technical report in your paper. ``` @misc{zhang2021mengzi, title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese}, author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou}, year={2021}, eprint={2110.06696}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
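As a hedged continuation of the loading example above (reusing `tokenizer` and `model`), a span-filling generation sketch might look like this; the input sentence and generation settings are illustrative assumptions.

```python
# Continuation sketch of the loading example above; the sentinel-filling input
# and generation settings are illustrative assumptions.
inputs = tokenizer("äž­å›œçš„éŠ–éƒœäœäºŽ<extra_id_0>。", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```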
tuman/vit-rugpt2-image-captioning
tuman
"2023-01-26T08:15:02Z"
1,479
12
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "image-to-text", "image-captioning", "ru", "endpoints_compatible", "region:us" ]
image-to-text
"2023-01-18T14:27:20Z"
--- tags: - image-to-text - image-captioning language: - ru metrics: - bleu library_name: transformers --- # First image captioning model for russian language vit-rugpt2-image-captioning This is an image captioning model trained on translated version (en-ru) of dataset COCO2014. # Model Details Model was initialized `google/vit-base-patch16-224-in21k` for encoder and `sberbank-ai/rugpt3large_based_on_gpt2` for decoder. # Metrics on test data * Bleu: 8.672 * Bleu precision 1: 30.567 * Bleu precision 2: 7.895 * Bleu precision 3: 3.261 # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("vit-rugpt2-image-captioning") feature_extractor = ViTFeatureExtractor.from_pretrained("vit-rugpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("vit-rugpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_caption(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_caption(['train2014/COCO_train2014_000000295442.jpg']) # ['СаЌПлет Ма взлетМП-пПсаЎПчМПй пПлПсе аэрПпПрта.'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="vit-rugpt2-image-captioning") image_to_text("train2014/COCO_train2014_000000296754.jpg") # [{'generated_text': 'ЧелПвек ОЎет пП улОце с зПМтПЌ.'}] ``` # Contact for any help * https://huggingface.co/tuman * https://github.com/tumanov-a * https://t.me/tumanov_av
bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF
bartowski
"2024-06-05T22:10:08Z"
1,479
8
transformers
[ "transformers", "gguf", "code", "text-generation", "license:other", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-05T19:09:00Z"
--- library_name: transformers license: other license_name: mnpl license_link: https://mistral.ai/licences/MNPL-0.1.md tags: - code language: - code quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of Codestral-22B-v0.1-abliterated-v3 Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3086">b3086</a> for quantization. Original model: https://huggingface.co/failspy/Codestral-22B-v0.1-abliterated-v3 All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format No chat template specified so default is used. This may be incorrect, check original model card for details. ``` <s>[INST] <<SYS>> {system_prompt} <</SYS>> {prompt}[/INST] </s> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [Codestral-22B-v0.1-abliterated-v3-Q8_0.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q8_0.gguf) | Q8_0 | 23.64GB | Extremely high quality, generally unneeded but max available quant. | | [Codestral-22B-v0.1-abliterated-v3-Q6_K.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q6_K.gguf) | Q6_K | 18.25GB | Very high quality, near perfect, *recommended*. | | [Codestral-22B-v0.1-abliterated-v3-Q5_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q5_K_M.gguf) | Q5_K_M | 15.72GB | High quality, *recommended*. | | [Codestral-22B-v0.1-abliterated-v3-Q5_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q5_K_S.gguf) | Q5_K_S | 15.32GB | High quality, *recommended*. | | [Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf) | Q4_K_M | 13.34GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [Codestral-22B-v0.1-abliterated-v3-Q4_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q4_K_S.gguf) | Q4_K_S | 12.66GB | Slightly lower quality with more space savings, *recommended*. | | [Codestral-22B-v0.1-abliterated-v3-IQ4_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ4_XS.gguf) | IQ4_XS | 11.93GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [Codestral-22B-v0.1-abliterated-v3-Q3_K_L.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q3_K_L.gguf) | Q3_K_L | 11.73GB | Lower quality but usable, good for low RAM availability. | | [Codestral-22B-v0.1-abliterated-v3-Q3_K_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q3_K_M.gguf) | Q3_K_M | 10.75GB | Even lower quality. 
| | [Codestral-22B-v0.1-abliterated-v3-IQ3_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ3_M.gguf) | IQ3_M | 10.06GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [Codestral-22B-v0.1-abliterated-v3-Q3_K_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q3_K_S.gguf) | Q3_K_S | 9.64GB | Low quality, not recommended. | | [Codestral-22B-v0.1-abliterated-v3-IQ3_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ3_XS.gguf) | IQ3_XS | 9.17GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [Codestral-22B-v0.1-abliterated-v3-IQ3_XXS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ3_XXS.gguf) | IQ3_XXS | 8.59GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [Codestral-22B-v0.1-abliterated-v3-Q2_K.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-Q2_K.gguf) | Q2_K | 8.27GB | Very low quality but surprisingly usable. | | [Codestral-22B-v0.1-abliterated-v3-IQ2_M.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ2_M.gguf) | IQ2_M | 7.61GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [Codestral-22B-v0.1-abliterated-v3-IQ2_S.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ2_S.gguf) | IQ2_S | 7.03GB | Very low quality, uses SOTA techniques to be usable. | | [Codestral-22B-v0.1-abliterated-v3-IQ2_XS.gguf](https://huggingface.co/bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF/blob/main/Codestral-22B-v0.1-abliterated-v3-IQ2_XS.gguf) | IQ2_XS | 6.64GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF --include "Codestral-22B-v0.1-abliterated-v3-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/Codestral-22B-v0.1-abliterated-v3-GGUF --include "Codestral-22B-v0.1-abliterated-v3-Q8_0.gguf/*" --local-dir Codestral-22B-v0.1-abliterated-v3-Q8_0 ``` You can either specify a new local-dir (Codestral-22B-v0.1-abliterated-v3-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. 
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also targets AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
John6666/mala-smooth-v1-sdxl
John6666
"2024-06-26T13:16:12Z"
1,479
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-26T13:10:16Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/515829/mala-smooth).
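A minimal `diffusers` sketch for loading this checkpoint is below; the prompt, dtype and device choices are assumptions, not recommendations from the original author:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Minimal sketch: load the checkpoint with the standard SDXL pipeline.
# The prompt, dtype and device below are assumptions - adjust them for your setup.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/mala-smooth-v1-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, smooth shading, masterpiece, best quality", num_inference_steps=28).images[0]
image.save("sample.png")
```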
timm/eva_large_patch14_196.in22k_ft_in22k_in1k
timm
"2024-02-10T23:27:54Z"
1,478
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2211.07636", "license:mit", "region:us" ]
image-classification
"2022-12-22T07:08:20Z"
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-22k - imagenet-22k --- # Model card for eva_large_patch14_196.in22k_ft_in22k_in1k An EVA image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-22k then on ImageNet-1k by paper authors. NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 304.1 - GMACs: 61.6 - Activations (M): 63.5 - Image size: 196 x 196 - **Papers:** - EVA: Exploring the Limits of Masked Visual Representation Learning at Scale: https://arxiv.org/abs/2211.07636 - **Pretrain Dataset:** - ImageNet-22k - ImageNet-22k - **Dataset:** ImageNet-1k - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/BAAI/EVA ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva_large_patch14_196.in22k_ft_in22k_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva_large_patch14_196.in22k_ft_in22k_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 197, 1024) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA, title={EVA: Exploring the Limits of Masked Visual Representation Learning at Scale}, author={Fang, Yuxin and Wang, Wen and Xie, Binhui and Sun, Quan and Wu, Ledell and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2211.07636}, year={2022} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
Dabitron/Qwen2-7B-Instruct-abliterated-Q8_0-GGUF
Dabitron
"2024-07-02T01:33:56Z"
1,478
0
null
[ "gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:natong19/Qwen2-7B-Instruct-abliterated", "license:apache-2.0", "region:us" ]
text-generation
"2024-07-02T01:33:21Z"
--- base_model: natong19/Qwen2-7B-Instruct-abliterated language: - en license: apache-2.0 pipeline_tag: text-generation tags: - chat - llama-cpp - gguf-my-repo --- # Dabitron/Qwen2-7B-Instruct-abliterated-Q8_0-GGUF This model was converted to GGUF format from [`natong19/Qwen2-7B-Instruct-abliterated`](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-7b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-7b-instruct-abliterated-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-7b-instruct-abliterated-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Dabitron/Qwen2-7B-Instruct-abliterated-Q8_0-GGUF --hf-file qwen2-7b-instruct-abliterated-q8_0.gguf -c 2048 ```
Davlan/bert-base-multilingual-cased-finetuned-yoruba
Davlan
"2022-06-27T11:50:30Z"
1,477
0
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:04Z"
Hugging Face's logo --- language: yo datasets: --- # bert-base-multilingual-cased-finetuned-yoruba ## Model description **bert-base-multilingual-cased-finetuned-yoruba** is a **Yoruba BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Yorùbá language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets. Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Yorùbá corpus. ## Intended uses & limitations #### How to use You can use this model with Transformers *pipeline* for masked token prediction. ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba') >>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun") [{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Mary Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.1738305538892746, 'token': 12176, 'token_str': 'Mary'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.16382873058319092, 'token': 13704, 'token_str': 'Queen'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.13272495567798615, 'token': 14382, 'token_str': 'ti'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ King Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.12823280692100525, 'token': 11515, 'token_str': 'King'}, {'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Lady Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.07841219753026962, 'token': 14005, 'token_str': 'Lady'}] ``` #### Limitations and bias This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. ## Training data This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends. ## Training procedure This model was trained on a single NVIDIA V100 GPU ## Eval results on Test set (F-score, average over 5 runs) Dataset| mBERT F1 | yo_bert F1 -|-|- [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 78.97 | 82.58 [BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | 75.13 | 79.11 ### BibTeX entry and citation info By David Adelani ``` ```
projecte-aina/aguila-7b
projecte-aina
"2024-01-31T11:02:43Z"
1,477
53
transformers
[ "transformers", "pytorch", "safetensors", "RefinedWebModel", "text-generation", "aguila", "falcon", "spanish", "catalan", "custom_code", "en", "es", "ca", "model-index", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-05T13:29:04Z"
--- language: - en - es - ca licence: - apache-2.0 tags: - aguila - falcon - spanish - catalan metrics: - ppl model-index: - name: aguila_7b results: - task: name: Causal Language Modeling type: text-generation metrics: - name: Perplexity type: ppl value: 8.59 pipeline_tag: text-generation widget: - text: |- Respon a la pregunta segÃŒent. Pregunta: "Quina és la capital de SuÚcia?" Resposta: "La capital de SuÚcia és Estocolm." ---- Respon a la pregunta segÃŒent. Pregunta: "Quina beguda es consumeix als matins per despertar-se?" Resposta: "La majoria de gent consumeix cafÚ per despertar-se." ---- Respon a la pregunta segÃŒent. Pregunta: "Explica com funciona un motor de combustió" Resposta: example_title: Pregunta-Resposta - text: |- Extrae las entidades nombradas del siguiente texto: Texto: "Me llamo Wolfgang y vivo en Berlin" Entidades: Wolfgang:PER, Berlin:LOC ---- Extrae las entidades nombradas del siguiente texto: Texto: "Hoy voy a visitar el parc gÃŒell tras salir del barcelona supercomputing center" Entidades: parc gÃŒell:LOC, barcelona supercomputing center:LOC ---- Extrae las entidades nombradas del siguiente texto: Texto: "Maria y Miguel no tienen ningún problema contigo" Entidades: Maria:PER, Miguel:PER ---- Extrae las entidades nombradas del siguiente texto: Texto: "Damián se cortó el pelo" Entidades: Damián:PER ---- Extrae las entidades nombradas del siguiente texto: Texto: "Lo mejor de Barcelona és el bar de mi amigo Pablo" Entidades: Pablo:PER, Barcelona:LOC ---- Extrae las entidades nombradas del siguiente texto: Texto: "Carlos comparte piso con Marc" Entidades: example_title: Entidades-Nombradas --- # Ǎguila-7B ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Language adaptation](#language-adaptation) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Additional information](#additional-information) - [Author](#author) - [Contact](#contact) - [Copyright](#copyright) - [License](#license) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Model description **Ǎguila-7B** is a transformer-based causal language model for Catalan, Spanish, and English. It is based on the [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model and has been trained on a 26B token trilingual corpus collected from publicly available corpora and crawlers. More information available in the following post from Medium.com: Introducing Ǎguila, a new open-source LLM for Spanish and Catalan (https://medium.com/@mpamies247/introducing-a%CC%8Cguila-a-new-open-source-llm-for-spanish-and-catalan-ee1ebc70bc79) ## Intended uses and limitations The **Ǎguila-7B** model is ready-to-use only for causal language modeling to perform text-generation tasks. However, it is intended to be fine-tuned for downstream tasks. 
## How to use Here is how to use this model: ```python import torch from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM input_text = "El mercat del barri és fantàstic, hi pots trobar" model_id = "projecte-aina/aguila-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) generator = pipeline( "text-generation", model=model_id, tokenizer=tokenizer, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", ) generation = generator( input_text, do_sample=True, top_k=10, eos_token_id=tokenizer.eos_token_id, ) print(f"Result: {generation[0]['generated_text']}") ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias and toxicity embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Language adaptation We adapted the original [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model to Spanish and Catalan by swapping the tokenizer and adjusting the embedding layer. The adaptation procedure is explained in [this blog post](https://medium.com/@mpamies247/ee1ebc70bc79). ## Training ### Training data The training corpus consists of 26B tokens of several corpora gathered from web crawlings and public domain data. | Dataset | Language | Words (per-epoch) | Epochs | |---------------------|----------|--------------------|--------------| | Wikipedia | en | 2169.97M | 1.428144485 | | C4_es | es | 53709.80M | 0.1049686196 | | Biomedical | es | 455.03M | 0.7140722425 | | Legal | es | 995.70M | 0.7140722425 | | Wikipedia | es | 693.60M | 1.428144485 | | Gutenberg | es | 53.18M | 0.7140722425 | | C4_ca | ca | 2826.00M | 2.142216727 | | Biomedical | ca | 11.80M | 1.428144485 | | RacoCatalà Noticias | ca | 17.16M | 2.142216727 | | RacoCatalà Forums | ca | 333.73M | 2.142216727 | | CaWaC | ca | 57.79M | 2.142216727 | | Wikipedia | ca | 228.01M | 3.570361212 | | Vilaweb | ca | 50.34M | 2.142216727 | The dataset has the following language distribution: |Language|Percentage| |--------|----------| | En | 16.84% | | Es | 41.38% | | Ca | 41.79% | Note: A small amount of English data was kept to avoid catastrophic forgetting. ## Training procedure The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) with a vocabulary size of 50,257 tokens. After training a new tokenizer and adapting [falcon-7b](https://huggingface.co/tiiuae/falcon-7b)'s embedding layer, the model was further pre-trained in three target languages: Catalan, Spanish and English. The training lasted a total of 320 hours on 8 NVIDIA H100 GPUs with 80GB RAM. ### Training hyperparameters - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - train_batch_size: 1 - eval_batch_size: 1 - total_train_batch_size: 8 - total_eval_batch_size: 8 - optimizer: Adam - betas: (0.9,0.999) - epsilon: 1e-08 - learning_rate: 5e-05 - lr_scheduler_type: linear - num_epochs: 1.0 ### Framework versions - Pytorch 2.0.0 - Transformers 4.30.2 - Datasets 2.13.1 - Tokenizers 0.13.3 ## Additional information ### Author The Language Technologies Unit from Barcelona Supercomputing Center. ### Contact For further information, please send an email to <[email protected]>. ### Copyright Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center. 
### License [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by: - The [Departament de la VicepresidÚncia i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). - The [Spanish State Secretariat for Digitalization and Artificial Intelligence](https://portal.mineco.gob.es/en-us/digitalizacionIA/Pages/sedia.aspx) within the framework of the [Plan de Impulso de las Tecnologías del Lenguaje](https://plantl.mineco.gob.es/Paginas/index.aspx). ### Disclaimer <details> <summary>Click to expand</summary> The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0. Be aware that the model may have biases and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the model (Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties. </details>
jisukim8873/falcon-7B-case-6
jisukim8873
"2024-02-19T01:13:08Z"
1,477
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-16T05:34:53Z"
--- license: apache-2.0 datasets: - Open-Orca/SlimOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Jisu Kim * Model Type: Large Language Model # Model Architecture This model is based on falcon-7B. We fine-tune this model for the data ordering task. falcon-7B is a transformer model, with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sample from the Open-Orca (SlimOrca) dataset and fine-tune on 100,000 examples (a minimal sampling sketch is shown below). # Github https://github.com/trailerAI # License Apache License 2.0
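A minimal sketch of the sampling step described above; the seed and output format are assumptions, since the original sampling script is not published:

```python
from datasets import load_dataset

# Minimal sketch: draw 100,000 random examples from Open-Orca/SlimOrca.
# The seed and the JSONL output path are assumptions - the original script is not published.
dataset = load_dataset("Open-Orca/SlimOrca", split="train")
subset = dataset.shuffle(seed=42).select(range(100_000))
subset.to_json("slimorca_100k.jsonl")
```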
TeeZee/DarkForest-20B-v2.0-GGUF
TeeZee
"2024-02-18T22:48:58Z"
1,477
12
null
[ "gguf", "merge", "not-for-all-audiences", "license:other", "region:us" ]
null
"2024-02-18T17:54:49Z"
--- license: other license_name: microsoft-research-license tags: - merge - not-for-all-audiences --- GGUF quants of [DarkForest-20B-v2.0](https://huggingface.co/TeeZee/DarkForest-20B-v2.0), [Q4_K_M](https://huggingface.co/TeeZee/DarkForest-20B-v2.0-GGUF/blob/main/DarkForest-20B-v2.0.q4_K_M.gguf) is recommended.
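For example, the recommended file can be fetched with `huggingface_hub` (a minimal sketch; point your GGUF runtime of choice at the returned path):

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: download the recommended Q4_K_M quant to the local cache.
gguf_path = hf_hub_download(
    repo_id="TeeZee/DarkForest-20B-v2.0-GGUF",
    filename="DarkForest-20B-v2.0.q4_K_M.gguf",
)
print(gguf_path)  # pass this path to llama.cpp, KoboldCpp, etc.
```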
SepKeyPro/Merged-OpenLlama3-8B
SepKeyPro
"2024-06-01T16:01:23Z"
1,477
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-30T21:33:56Z"
--- tags: - merge - mergekit license: apache-2.0 --- # Merged-OpenLlama3-8B Merged-OpenLlama3-8B is a merge of the following models using Mergekit: [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522), [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B). ## Configuration ```yaml models: - model: openchat/openchat-3.6-8b-20240522 - model: meta-llama/Meta-Llama-3-8B-Instruct - model: nbeerbower/llama-3-gutenberg-8B merge_method: model_stock base_model: meta-llama/Meta-Llama-3-8B-Instruct dtype: bfloat16 ```
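A minimal sketch for loading the merged checkpoint with Transformers; the dtype, device map and prompt are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the merged checkpoint. The dtype/device choices are assumptions.
model_id = "SepKeyPro/Merged-OpenLlama3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```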
gokaygokay/sd3-long-captioner
gokaygokay
"2024-06-15T10:54:18Z"
1,477
18
transformers
[ "transformers", "safetensors", "paligemma", "pretraining", "art", "image-text-to-text", "en", "dataset:google/docci", "dataset:google/imageinwords", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
image-text-to-text
"2024-06-13T03:26:24Z"
--- license: apache-2.0 datasets: - google/docci - google/imageinwords language: - en library_name: transformers pipeline_tag: image-text-to-text tags: - art --- Fine-tuned version of PaliGemma 224x224 on [google/docci](https://huggingface.co/datasets/google/docci) and [google/imageinwords](https://huggingface.co/datasets/google/imageinwords) datasets. ``` pip install git+https://github.com/huggingface/transformers ``` ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "gokaygokay/sd3-long-captioner" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).to('cuda').eval() processor = AutoProcessor.from_pretrained(model_id) ## prefix prompt = "caption en" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to('cuda') input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=256, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ```
nyu-visionx/cambrian-8b
nyu-visionx
"2024-06-28T00:20:50Z"
1,477
51
transformers
[ "transformers", "safetensors", "cambrian_llama", "text-generation", "conversational", "arxiv:2406.16860", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-16T12:18:22Z"
--- license: apache-2.0 --- # Cambrian Model Card ## Model details **Model type:** Cambrian is an open-source Multimodal LLM with vision-centric designs. **Model date:** Cambrian-1-8B was trained in June 2024. **Paper or resources for more information:** - https://cambrian-mllm.github.io/ - https://arxiv.org/abs/2406.16860 ## License Llama 3 is licensed under the LLAMA 3 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/cambrian-mllm/cambrian/issues ## Training dataset - [2.5M Cambrian Alignment Data](https://huggingface.co/datasets/nyu-visionx/Cambrian-Alignment). - [7M Cambrian Curated Instruction Tuning Data](https://huggingface.co/datasets/nyu-visionx/Cambrian-10M)
timm/vit_small_patch32_224.augreg_in21k_ft_in1k
timm
"2023-05-06T00:29:36Z"
1,476
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-22T07:55:29Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for vit_small_patch32_224.augreg_in21k_ft_in1k A Vision Transformer (ViT) image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 22.9 - GMACs: 1.1 - Activations (M): 2.1 - Image size: 224 x 224 - **Papers:** - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270 - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k - **Original:** https://github.com/google-research/vision_transformer ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vit_small_patch32_224.augreg_in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vit_small_patch32_224.augreg_in21k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 50, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{steiner2021augreg, title={How to train your ViT? 
Data, Augmentation, and Regularization in Vision Transformers}, author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas}, journal={arXiv preprint arXiv:2106.10270}, year={2021} } ``` ```bibtex @article{dosovitskiy2020vit, title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale}, author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil}, journal={ICLR}, year={2021} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
TheBloke/Emerhyst-20B-GGUF
TheBloke
"2023-09-28T09:57:37Z"
1,476
36
transformers
[ "transformers", "gguf", "llama", "not-for-all-audiences", "nsfw", "base_model:Undi95/Emerhyst-20B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
"2023-09-28T09:48:12Z"
--- base_model: Undi95/Emerhyst-20B inference: false license: cc-by-nc-4.0 model_creator: Undi model_name: Emerhyst 20B model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - not-for-all-audiences - nsfw --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Emerhyst 20B - GGUF - Model creator: [Undi](https://huggingface.co/Undi95) - Original model: [Emerhyst 20B](https://huggingface.co/Undi95/Emerhyst-20B) <!-- description start --> ## Description This repo contains GGUF format model files for [Undi's Emerhyst 20B](https://huggingface.co/Undi95/Emerhyst-20B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Emerhyst-20B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Emerhyst-20B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF) * [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Emerhyst-20B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Undi's Emerhyst 20B](https://huggingface.co/Undi95/Emerhyst-20B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [emerhyst-20b.Q2_K.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q2_K.gguf) | Q2_K | 2 | 8.31 GB| 10.81 GB | smallest, significant quality loss - not recommended for most purposes | | [emerhyst-20b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q3_K_S.gguf) | Q3_K_S | 3 | 8.66 GB| 11.16 GB | very small, high quality loss | | [emerhyst-20b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q3_K_M.gguf) | Q3_K_M | 3 | 9.70 GB| 12.20 GB | very small, high quality loss | | [emerhyst-20b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q3_K_L.gguf) | Q3_K_L | 3 | 10.63 GB| 13.13 GB | small, substantial quality loss | | [emerhyst-20b.Q4_0.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q4_0.gguf) | Q4_0 | 4 | 11.29 GB| 13.79 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [emerhyst-20b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q4_K_S.gguf) | Q4_K_S | 4 | 11.34 GB| 13.84 GB | small, greater quality loss | | [emerhyst-20b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q4_K_M.gguf) | Q4_K_M | 4 | 12.04 GB| 14.54 GB | medium, balanced quality - recommended | | [emerhyst-20b.Q5_0.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q5_0.gguf) | Q5_0 | 5 | 13.77 GB| 16.27 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [emerhyst-20b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q5_K_S.gguf) | Q5_K_S | 5 | 13.77 GB| 16.27 GB | large, low quality loss - recommended | | [emerhyst-20b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q5_K_M.gguf) | Q5_K_M | 5 | 14.16 GB| 16.66 GB | large, very low quality loss - recommended | | [emerhyst-20b.Q6_K.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q6_K.gguf) | Q6_K | 6 | 16.40 GB| 18.90 GB | very large, extremely low quality loss | | [emerhyst-20b.Q8_0.gguf](https://huggingface.co/TheBloke/Emerhyst-20B-GGUF/blob/main/emerhyst-20b.Q8_0.gguf) | Q8_0 | 8 | 21.25 GB| 23.75 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Emerhyst-20B-GGUF and below it, a specific filename to download, such as: emerhyst-20b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Emerhyst-20B-GGUF emerhyst-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Emerhyst-20B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Emerhyst-20B-GGUF emerhyst-20b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m emerhyst-20b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
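For instance, a minimal llama-cpp-python sketch (assuming the Q4_K_M file has already been downloaded as shown above; `n_ctx` and `n_gpu_layers` are assumptions to tune for your hardware):

```python
from llama_cpp import Llama

# Minimal sketch: load the downloaded GGUF file with llama-cpp-python.
# n_ctx and n_gpu_layers are assumptions - tune them for your hardware.
llm = Llama(model_path="emerhyst-20b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n"
)
output = llm(prompt, max_tokens=128, temperature=0.7)
print(output["choices"][0]["text"])
```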
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Emerhyst-20B-GGUF", model_file="emerhyst-20b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik BjÀreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 쀀교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Undi's Emerhyst 20B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/mvc3UyLtqKdLY1wzAdB_O.png) Merge of [Amethyst 13B](https://huggingface.co/Undi95/Amethyst-13B) and [Emerald 13B](https://huggingface.co/Undi95/Emerald-13B). In addition, [LimaRP v3](https://huggingface.co/lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT) was used, is it recommanded to read the documentation. <!-- description start --> ## Description This repo contains fp16 files of Emerhyst-20B. <!-- description end --> <!-- description start --> ## Models and loras used - PygmalionAI/pygmalion-2-13b - Xwin-LM/Xwin-LM-13B-V0.1 - The-Face-Of-Goonery/Huginn-13b-FP16 - zattio770/120-Days-of-LORA-v2-13B - lemonilia/LimaRP-Llama2-13B-v3-EXPERIMENT <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## LimaRP v3 usage and suggested settings ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/ZC_iP2KkcEcRdgG_iyxYE.png) You can follow these instruction format settings in SillyTavern. Replace tiny with your desired response length: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/PIn8_HSPTJEMdSEpNVSdm.png) Special thanks to Sushi. If you want to support me, you can [here](https://ko-fi.com/undiai). <!-- original-model-card end -->
Bllossom/llama-3-Korean-Bllossom-70B
Bllossom
"2024-05-14T06:46:56Z"
1,476
56
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "ko", "arxiv:2403.10882", "arxiv:2403.11399", "base_model:meta-llama/Meta-Llama-3-70B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-08T09:02:34Z"
--- language: - en - ko license: llama3 library_name: transformers base_model: - meta-llama/Meta-Llama-3-70B --- <a href="https://github.com/MLP-Lab/Bllossom"> <img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="40%" height="50%"> </a> # Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) | [Colab-tutorial](https://colab.research.google.com/drive/1fBOzUVZ6NRKk_ugeoTbAOokWKqSN47IG?usp=sharing) | ```bash 저희 Bllossom 프로젝튞 팀에서 한국얎-영얎 읎쀑 얞얎몚덞읞 Bllossom-70.8B륌 공개했습니닀! 서욞곌Ʞ대 슈퍌컎퓚팅 섌터의 지원윌로 100GB가넘는 한국얎로 몚덞전첎륌 풀튜닝한 한국얎 강화 읎쀑얞얎 몚덞입니닀! 한국얎 잘하는 몚덞 ì°Ÿê³  있지 않윌셚나요? - 한국얎 최쎈! 묎렀 3만개가 넘는 한국얎 얎휘확장 - Llama3대비 대략 25% 더 ꞎ Ꞟ읎의 한국얎 Context 처늬가능 - 한국얎-영얎 Pararell Corpus륌 활용한 한국얎-영얎 지식연결 (사전학습) - 한국얎 묞화, 얞얎륌 고렀핎 얞얎학자가 제작한 데읎터륌 활용한 믞섞조정 - 강화학습 읎 몚든게 한꺌번에 적용되고 상업적 읎용읎 가능한 Bllossom을 읎용핎 여러분 만의 몚덞을 만듀얎볎섞욥! GPU가 부족하멎 양자화 몚덞로 바로 서비슀륌 활용핎 볎섞요 [양자화몚덞](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B-gguf-Q4_K_M)!! 1. Bllossom-70.8B는 서욞곌Ʞ대, 테디썞, 연섞대 얞얎자원 연구싀의 얞얎학자와 협업핎 만든 싀용죌의Ʞ반 얞얎몚덞입니닀! 앞윌로 지속적읞 업데읎튞륌 통핎 ꎀ늬하겠습니닀 많읎 활용핎죌섞요 🙂 2. 쎈 강력한 Advanced-Bllossom 8B, 70B몚덞, 시각-얞얎몚덞을 볎유하고 있습니닀! (궁ꞈ하신분은 개별 연띜죌섞요!!) 3. Bllossom은 NAACL2024, LREC-COLING2024 (구두) 발표로 채택되었습니닀. 4. 좋은 얞얎몚덞 계속 업데읎튞 하겠습니닀!! 한국얎 강화륌위핎 공동 연구하싀분(특히녌묞) 얞제든 환영합니닀!! 특히 소량의 GPU띌도 대여 가능한팀은 얞제든 연띜죌섞요! 만듀고 싶은거 도와드렀요. ``` The Bllossom language model is a Korean-English bilingual language model based on the open-source LLama3. It enhances the connection of knowledge between Korean and English. It has the following features: * **Knowledge Linking**: Linking Korean and English knowledge through additional training * **Vocabulary Expansion**: Expansion of Korean vocabulary to enhance Korean expressiveness. * **Instruction Tuning**: Tuning using custom-made instruction following data specialized for Korean language and Korean culture * **Human Feedback**: DPO has been applied * **Vision-Language Alignment**: Aligning the vision transformer with this language model **This model developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/) and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim)** ## Demo Video <div style="display: flex; justify-content: space-between;"> <!-- 첫 번짞 컬럌 --> <div style="width: 49%;"> <a> <img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;"> </a> <p style="text-align: center;">Bllossom-V Demo</p> </div> <!-- 두 번짞 컬럌 (필요하닀멎) --> <div style="width: 49%;"> <a> <img src="https://github.com/lhsstn/lhsstn/blob/main/bllossom_demo_kakao.gif?raw=true" style="width: 70%; height: auto;"> </a> <p style="text-align: center;">Bllossom Demo(Kakao)ㅀㅀㅀㅀㅀㅀㅀㅀ</p> </div> </div> ## NEWS * [2024.05.08] Vocab Expansion Model Update * [2024.04.25] We released Bllossom v2.0, based on llama-3 * [2023/12] We released Bllossom-Vision v1.0, based on Bllossom * [2023/08] We released Bllossom v1.0, based on llama-2. * [2023/07] We released Bllossom v0.7, based on polyglot-ko. 
## Example code ### Colab Tutorial - [Inference-Code-Link](https://colab.research.google.com/drive/1fBOzUVZ6NRKk_ugeoTbAOokWKqSN47IG?usp=sharing) ### Install Dependencies ```bash pip install torch transformers==4.40.0 accelerate ``` ### Python code with Pipeline ```python import transformers import torch model_id = "Bllossom/llama-3-Korean-Bllossom-70B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) pipeline.model.eval() PROMPT = '''당신은 유용한 AI 얎시슀턎튞입니닀. 사용자의 질의에 대핮 친절하고 정확하게 답변핎알 합니닀. You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.''' instruction = "서욞곌학Ʞ술대학교 MLP연구싀에 대핮 소개핎쀘" messages = [ {"role": "system", "content": f"{PROMPT}"}, {"role": "user", "content": f"{instruction}"} ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=2048, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) # 서욞곌학Ʞ술대학교 MLP연구싀은 멀티몚달 자연얎처늬 연구륌 하고 있습니닀. 구성원은 임겜태 교수와 김믌쀀, 김상믌, 최찜수, 원읞혞, 유한결, 임현석, 송승우, 육정훈, 신동재 학생읎 있습니닀. ``` ### Python code with AutoModel ```python import os import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = 'Bllossom/llama-3-Korean-Bllossom-70B' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) model.eval() PROMPT = '''당신은 유용한 AI 얎시슀턎튞입니닀. 사용자의 질의에 대핮 친절하고 정확하게 답변핎알 합니닀. You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.''' instruction = "서욞곌학Ʞ술대학교 MLP연구싀에 대핮 소개핎쀘" messages = [ {"role": "system", "content": f"{PROMPT}"}, {"role": "user", "content": f"{instruction}"} ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=2048, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9 ) print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)) # 서욞곌학Ʞ술대학교 MLP연구싀은 멀티몚달 자연얎처늬 연구륌 하고 있습니닀. 구성원은 임겜태 교수와 김믌쀀, 김상믌, 최찜수, 원읞혞, 유한결, 임현석, 송승우, 육정훈, 신동재 학생읎 있습니닀. ``` ## Citation **Language Model** ```text @misc{bllossom, author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim}, title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean}, year = {2024}, journal = {LREC-COLING 2024}, paperLink = {\url{https://arxiv.org/pdf/2403.10882}}, }, } ``` **Vision-Language Model** ```text @misc{bllossom-V, author = {Dongjae Shin, Hyunseok Lim, Inho Won, Changsu Choi, Minjun Kim, Seungwoo Song, Hangyeol Yoo, Sangmin Kim, Kyungtae Lim}, title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment}, year = {2024}, publisher = {GitHub}, journal = {NAACL 2024 findings}, paperLink = {\url{https://arxiv.org/pdf/2403.11399}}, }, } ``` ## Contact - 임겜태(KyungTae Lim), Professor at Seoultech. `[email protected]` - 핚영균(Younggyun Hahm), CEO of Teddysum. 
`[email protected]` - 김한샘(Hansaem Kim), Professor at Yonsei. `[email protected]` ## Contributor - 최찜수(Chansu Choi), [email protected] - 김상믌(Sangmin Kim), [email protected] - 원읞혞(Inho Won), [email protected] - 김믌쀀(Minjun Kim), [email protected] - 송승우(Seungwoo Song), [email protected] - 신동재(Dongjae Shin), [email protected] - 임현석(Hyeonseok Lim), [email protected] - 육정훈(Jeonghun Yuk), [email protected] - 유한결(Hangyeol Yoo), [email protected] - 송서현(Seohyun Song), [email protected]
zero-one-01/llama3-8b-orpo
zero-one-01
"2024-06-27T03:44:40Z"
1,476
0
peft
[ "peft", "safetensors", "gguf", "arxiv:1910.09700", "base_model:unsloth/llama-3-8b-bnb-4bit", "region:us" ]
null
"2024-06-27T03:07:39Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.11.1
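Since the "How to Get Started" section above is still a placeholder, the snippet below is a minimal, hypothetical sketch (not from the original card) of loading this PEFT adapter on top of its 4-bit base model; the generation settings and the choice of tokenizer source are assumptions.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model named in the adapter config (unsloth/llama-3-8b-bnb-4bit)
# and applies the adapter weights on top; bitsandbytes is required for the 4-bit base.
model = AutoPeftModelForCausalLM.from_pretrained(
    "zero-one-01/llama3-8b-orpo",
    torch_dtype=torch.float16,
    device_map="auto",
)
# Assumption: the tokenizer is taken from the base model repo.
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")

prompt = "Explain ORPO fine-tuning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=96)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```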
YeungNLP/firefly-qwen1.5-en-7b-unsloth
YeungNLP
"2024-05-05T15:41:52Z"
1,475
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:2305.18290", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-02T13:53:37Z"
--- library_name: transformers license: apache-2.0 basemodel: Qwen/Qwen1.5-7B --- ## Unsloth x Qwen2 [Unsloth](https://github.com/unslothai/unsloth) can speed up LLM training and reduce memory usage, but it currently only supports Llama3, Mistral, Gemma, ORPO, Phi-3 and TinyLlama, so Qwen2 cannot be trained with Unsloth even though Qwen2 is popular in the community. We are excited to have made Unsloth support Qwen2: it speeds up training and greatly reduces memory usage. If you want to train Qwen2 with Unsloth, you can use [our repo](https://github.com/yangjianxin1/unsloth) rather than the official one, and we will contribute our code to the [official repo](https://github.com/unslothai/unsloth). Install our Unsloth: ```bash pip install git+https://github.com/yangjianxin1/unsloth.git ``` [Firefly](https://github.com/yangjianxin1/Firefly) already supports training Qwen2 with Unsloth, and the following models are trained with Firefly; you can try it. ## Model Card for Firefly-Qwen1.5-Unsloth [firefly-qwen1.5-en-7b-unsloth](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-unsloth) and [firefly-qwen1.5-en-7b-dpo-v0.1-unsloth](https://huggingface.co/YeungNLP/firefly-qwen1.5-en-7b-dpo-v0.1-unsloth) are trained based on [Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) to act as a helpful and harmless AI assistant. We use [Firefly](https://github.com/yangjianxin1/Firefly) to train our models on **a single V100 GPU** with QLoRA and [Unsloth](https://github.com/yangjianxin1/unsloth). firefly-qwen1.5-en-7b-unsloth is fine-tuned on English instruction data starting from Qwen1.5-7B, and firefly-qwen1.5-en-7b-dpo-v0.1-unsloth is trained with [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) on top of firefly-qwen1.5-en-7b-unsloth. Our models outperform the official [Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat), [Gemma-7B-it](https://huggingface.co/google/gemma-7b-it) and [Zephyr-7B-Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). Although our models are trained on English data, you can also try chatting with them in Chinese, since Qwen1.5 is also good at Chinese; however, we have not yet evaluated their Chinese performance. We advise you to install transformers>=4.37.0. ## Performance We evaluated the training gains on Qwen1.5-7B by using QLoRA and Unsloth to train the model for 20 steps on a single V100. The results are listed below. 
**Unsloth can reduce GPU memory by 39.13% and training time by 32.12%, and the training speed can increase by 47.32%.** | max_seq_length | per_device_train_batch_size | gradient_accumulation_steps | use_unsloth | rank | GPU | Time | |----------------|----------------------------|-----------------------------|-------------|------|-------------------------|-------------------| | 1024 | 1 | 16 | false | 8 | 13.72GB | 448s | | 1024 | 1 | 16 | true | 8 | **8.43GB**(**-38.56%**) | 308s(**-31.25%**) | | 1024 | 1 | 16 | false | 64 | 16.01GB | 452s | | 1024 | 1 | 16 | true | 64 | 11.07GB(**-30.86%**) | 311s(**-31.19%**) | | 2048 | 1 | 16 | false | 64 | 18.55GB | 840s | | 2048 | 1 | 16 | true | 64 | 12.99GB(**-29.97%**) | 596s(**-29.05%**) | | 1024 | 4 | 4 | false | 64 | 24.70GB | 357s | | 1024 | 4 | 4 | true | 64 | 14.36GB(**-41.86%**) | 253s(**-29.13%**) | | 2048 | 4 | 4 | false | 64 | 32.51GB | 741s | | 2048 | 4 | 4 | true | 64 | 19.79GB(**-39.13%**) | 503s(**-32.12%**) | We evaluate our sft and dpo models on [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), they achieve good performance. | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | |--------------------------------------------|---------|--------|-----------|-------|------------|------------|--------| | firefly-gemma-7b | 62.93 | 62.12 | 79.77 | 61.57 | 49.41 | 75.45 | 49.28 | | **firefly-qwen1.5-en-7b-dpo-v0.1-unsloth** | 62.65 | 56.14 | 75.5 | 60.87 | 58.09 | 70.72 | 54.59 | | zephyr-7b-beta | 61.95 | 62.03 | 84.36 | 61.07 | 57.45 | 77.74 | 29.04 | | **firefly-qwen1.5-en-7b-unsloth** | 61.81 | 54.27 | 76.22 | 61.55 | 50.62 | 70.48 | 57.7 | | vicuna-13b-v1.5 | 55.41 | 57.08 | 81.24 | 56.67 | 51.51 | 74.66 | 11.3 | | Xwin-LM-13B-V0.1 | 55.29 | 62.54 | 82.8 | 56.53 | 45.96 | 74.27 | 9.63 | | Qwen1.5-7B-Chat | 55.15 | 55.89 | 78.56 | 61.65 | 53.54 | 67.72 | 13.57 | | gemma-7b-it | 53.56 | 51.45 | 71.96 | 53.52 | 47.29 | 67.96 | 29.19 | ## Usage The chat templates of our chat models are the same as Official Qwen1.5-7B-Chat: ```text <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user hello, who are you?<|im_end|> <|im_start|>assistant I am a AI program developed by Firefly<|im_end|> ``` You can use script to inference in [Firefly](https://github.com/yangjianxin1/Firefly/blob/master/script/chat/chat.py). You can also use the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch model_name_or_path = "YeungNLP/firefly-qwen1.5-en-7b-unsloth" model = AutoModelForCausalLM.from_pretrained( model_name_or_path, trust_remote_code=True, low_cpu_mem_usage=True, torch_dtype=torch.float16, device_map='auto', ) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) prompt = "Compose an engaging travel blog post about a recent trip to Hawaii, highlighting cultural experiences and must-see attractions. 
" messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to('cuda') generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=1500, top_p = 0.9, temperature = 0.35, repetition_penalty = 1.0, eos_token_id=tokenizer.encode('<|im_end|>', add_special_tokens=False) ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` ## Training Details Both in SFT and DPO stages, **We only use a single V100 GPU** with QLoRA and Unsloth, and we use [Firefly](https://github.com/yangjianxin1/Firefly) to train our models. ### Training Setting The following hyperparameters are used during SFT: - num_epochs: 1 - learning_rate: 2e-4 - total_train_batch_size: 32 - max_seq_length: 2048 - optimizer: paged_adamw_32bit - lr_scheduler_type: constant_with_warmup - warmup_steps: 600 - lora_rank: 64 - lora_alpha: 16 - lora_dropout: 0.05 - gradient_checkpointing: true - fp16: true The following hyperparameters were used during DPO: - num_epochs: 1 - learning_rate: 2e-4 - total_train_batch_size: 32 - max_seq_length: 2048 - max_prompt_length: 500 - optimizer: paged_adamw_32bit - lr_scheduler_type: constant_with_warmup - warmup_steps: 100 - lora_rank: 64 - lora_alpha: 16 - lora_dropout: 0.05 - gradient_checkpointing: true - fp16: true ### Training metrics The table below shows the full set of DPO training metrics: | Epoch | Step | Loss | Rewards/accuracies | Rewards/margins | Rewards/chosen | Rewards/rejected | Logits/chosen | Logits/rejected | Logps/chosen | Logps/rejected | |-------|------|--------|--------------------|-----------------|----------------|------------------|---------------|-----------------|--------------|----------------| | 0.05 | 100 | 0.6128 | 0.6572 | 0.3914 | -0.0622 | -0.4537 | 1.107 | 1.1104 | -283.7632 | -264.5925 | | 0.1 | 200 | 0.6066 | 0.6913 | 0.662 | -0.3589 | -1.0209 | 0.9433 | 0.9431 | -279.0002 | -268.6432 | | 0.16 | 300 | 0.5803 | 0.7069 | 0.876 | -0.3849 | -1.2609 | 0.8411 | 0.8537 | -289.9482 | -274.3425 | | 0.21 | 400 | 0.5624 | 0.7169 | 0.9575 | -0.2447 | -1.2022 | 0.7615 | 0.7497 | -293.8072 | -274.4167 | | 0.26 | 500 | 0.5863 | 0.7 | 0.8908 | -0.5283 | -1.4191 | 0.537 | 0.5085 | -284.3388 | -267.9294 | | 0.31 | 600 | 0.5612 | 0.7166 | 1.0791 | -0.592 | -1.6711 | 0.7121 | 0.7219 | -293.2425 | -278.5992 | | 0.37 | 700 | 0.5741 | 0.7234 | 1.0742 | -0.8469 | -1.9211 | 0.6002 | 0.5769 | -300.8099 | -285.9137 | | 0.42 | 800 | 0.582 | 0.7141 | 1.0414 | -1.1658 | -2.2072 | 0.7191 | 0.5934 | -300.458 | -286.1 | | 0.47 | 900 | 0.5694 | 0.7178 | 1.2055 | -1.7372 | -2.9426 | 0.4226 | 0.316 | -305.5303 | -290.7548 | | 0.52 | 1000 | 0.5827 | 0.7134 | 1.1063 | -1.354 | -2.4603 | 0.535 | 0.4022 | -302.7598 | -286.636 | | 0.58 | 1100 | 0.5553 | 0.7306 | 1.3631 | -1.5861 | -2.9492 | 0.7636 | 0.6559 | -312.9375 | -290.3474 | | 0.63 | 1200 | 0.5633 | 0.7341 | 1.2689 | -1.7187 | -2.9876 | 0.6555 | 0.5894 | -315.0179 | -298.2406 | | 0.68 | 1300 | 0.5705 | 0.7284 | 1.3501 | -1.7762 | -3.1263 | 0.7419 | 0.6874 | -310.9056 | -294.2934 | | 0.73 | 1400 | 0.5458 | 0.7347 | 1.4555 | -2.2377 | -3.6932 | 0.7279 | 0.6564 | -309.141 | -299.1613 | | 0.79 | 1500 | 0.5797 | 0.7222 | 1.2937 | -2.4483 | -3.742 | 0.8444 | 0.771 | -321.578 | -298.111 
| | 0.84 | 1600 | 0.5572 | 0.7319 | 1.4824 | -2.9344 | -4.4168 | 0.9202 | 0.8605 | -323.4034 | -307.0114 | | 0.89 | 1700 | 0.5518 | 0.7281 | 1.4263 | -2.7301 | -4.1564 | 0.9257 | 0.8785 | -313.694 | -298.1267 | | 0.94 | 1800 | 0.5572 | 0.7272 | 1.5121 | -2.9505 | -4.4627 | 0.7899 | 0.7503 | -314.1552 | -305.9873 | | 0.99 | 1900 | 0.5763 | 0.7241 | 1.4982 | -2.7064 | -4.2047 | 0.7841 | 0.7023 | -310.6677 | -299.5064 |
cyberagent/open-calm-1b
cyberagent
"2023-05-18T01:11:30Z"
1,474
16
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "japanese", "causal-lm", "ja", "dataset:wikipedia", "dataset:cc100", "dataset:mc4", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-15T07:00:18Z"
--- license: cc-by-sa-4.0 datasets: - wikipedia - cc100 - mc4 language: - ja tags: - japanese - causal-lm inference: false --- # OpenCALM-1B ## Model Description OpenCALM is a suite of decoder-only language models pre-trained on Japanese datasets, developed by CyberAgent, Inc. ## Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("cyberagent/open-calm-1b", device_map="auto", torch_dtype=torch.float16) tokenizer = AutoTokenizer.from_pretrained("cyberagent/open-calm-1b") inputs = tokenizer("AIによっお私達の暮らしは、", return_tensors="pt").to(model.device) with torch.no_grad(): tokens = model.generate( **inputs, max_new_tokens=64, do_sample=True, temperature=0.7, top_p=0.9, repetition_penalty=1.05, pad_token_id=tokenizer.pad_token_id, ) output = tokenizer.decode(tokens[0], skip_special_tokens=True) print(output) ``` ## Model Details |Model|Params|Layers|Dim|Heads|Dev ppl| |:---:|:---: |:---:|:---:|:---:|:---:| |[cyberagent/open-calm-small](https://huggingface.co/cyberagent/open-calm-small)|160M|12|768|12|19.7| |[cyberagent/open-calm-medium](https://huggingface.co/cyberagent/open-calm-medium)|400M|24|1024|16|13.8| |[cyberagent/open-calm-large](https://huggingface.co/cyberagent/open-calm-large)|830M|24|1536|16|11.3| |[cyberagent/open-calm-1b](https://huggingface.co/cyberagent/open-calm-1b)|1.4B|24|2048|16|10.3| |[cyberagent/open-calm-3b](https://huggingface.co/cyberagent/open-calm-3b)|2.7B|32|2560|32|9.7| |[cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)|6.8B|32|4096|32|8.2| * **Developed by**: [CyberAgent, Inc.](https://www.cyberagent.co.jp/) * **Model type**: Transformer-based Language Model * **Language**: Japanese * **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) * **License**: OpenCALM is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License ([CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)). When using this model, please provide appropriate credit to CyberAgent, Inc. * Example (en): This model is a fine-tuned version of OpenCALM-XX developed by CyberAgent, Inc. The original model is released under the CC BY-SA 4.0 license, and this model is also released under the same CC BY-SA 4.0 license. For more information, please visit: https://creativecommons.org/licenses/by-sa/4.0/ * Example (ja): 本モデルは、株匏䌚瀟サむバヌ゚ヌゞェントによるOpenCALM-XXをファむンチュヌニングしたものです。元のモデルはCC BY-SA 4.0ラむセンスのもずで公開されおおり、本モデルも同じくCC BY-SA 4.0ラむセンスで公開したす。詳しくはこちらをご芧ください: https://creativecommons.org/licenses/by-sa/4.0/ ## Training Dataset * Wikipedia (ja) * Common Crawl (ja) ## Author [Ryosuke Ishigami](https://huggingface.co/rishigami) ## Citations ```bibtext @software{gpt-neox-library, title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}}, author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel}, url = {https://www.github.com/eleutherai/gpt-neox}, doi = {10.5281/zenodo.5879544}, month = {8}, year = {2021}, version = {0.0.1}, } ```
jisukim8873/falcon-7B-case-2
jisukim8873
"2024-03-04T03:52:23Z"
1,474
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "en", "dataset:Open-Orca/SlimOrca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-04T01:52:00Z"
--- license: apache-2.0 datasets: - Open-Orca/SlimOrca language: - en --- # Model Details * Model Description: This model is a test for data ordering. * Developed by: Jisu Kim * Model Type: Large Language Model # Model Architecture This model is based on falcon-7B, which we fine-tune for the data ordering task. falcon-7B is a transformer model with the following architecture choices: * Grouped-Query Attention * Sliding-Window Attention * Byte-fallback BPE tokenizer # Dataset We randomly sample from the Open-Orca dataset. (We fine-tune on 100,000 samples.) # Github https://github.com/trailerAI # License Apache License 2.0
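The card does not include a usage example, so the following is a minimal, untested sketch using the standard transformers API; the prompt and generation settings are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jisukim8873/falcon-7B-case-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # assumption: half precision so the 7B model fits on one GPU
    device_map="auto",
    trust_remote_code=True,       # the repo is tagged with custom_code
)

prompt = "Explain what instruction tuning is in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```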
QuantFactory/Neural-SOVLish-Devil-8B-L3-GGUF
QuantFactory
"2024-06-04T08:04:03Z"
1,474
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "text-generation", "arxiv:2403.19522", "base_model:saishf/Neural-SOVLish-Devil-8B-L3", "license:cc-by-nc-4.0", "model-index", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-03T06:28:38Z"
--- license: cc-by-nc-4.0 library_name: transformers tags: - mergekit - merge base_model: saishf/Neural-SOVLish-Devil-8B-L3 model-index: - name: Neural-SOVLish-Devil-8B-L3 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.11 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Neural-SOVLish-Devil-8B-L3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 84.77 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Neural-SOVLish-Devil-8B-L3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 69.02 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Neural-SOVLish-Devil-8B-L3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 59.05 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Neural-SOVLish-Devil-8B-L3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 78.3 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Neural-SOVLish-Devil-8B-L3 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 73.09 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Neural-SOVLish-Devil-8B-L3 name: Open LLM Leaderboard pipeline_tag: text-generation --- # QuantFactory/Neural-SOVLish-Devil-8B-L3-GGUF This is a quantized version of [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3) created using llama.cpp. ## Model Description This is another "SOVL" style merge, this time using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated). Daredevil is the first abliterated model series I've tried that feels as smart as base llama-3-instruct while also being willing to give instructions for all kinds of illegal things. Neural Daredevil is trained further on the original abliterated model, which should result in a better experience in most scenarios (a bandaid for the damage abliteration causes). This model should do well in RP; I'm yet to test it (waiting for gguf files @_@). ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base. 
### Models Merged The following models were included in the merge: * [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3) * [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) + [ResplendentAI/Smarts_Llama3](https://huggingface.co/ResplendentAI/Smarts_Llama3) * [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) + [ResplendentAI/Luna_Llama3](https://huggingface.co/ResplendentAI/Luna_Llama3) * [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) + [ResplendentAI/Aura_Llama3](https://huggingface.co/ResplendentAI/Aura_Llama3) * [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) + [ResplendentAI/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/ResplendentAI/RP_Format_QuoteAsterisk_Llama3) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Aura_Llama3 - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Smarts_Llama3 - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Luna_Llama3 - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/BlueMoon_Llama3 - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/RP_Format_QuoteAsterisk_Llama3 merge_method: model_stock base_model: mlabonne/NeuralDaredevil-8B-abliterated dtype: bfloat16 ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_saishf__Neural-SOVLish-Devil-8B-L3) | Metric |Value| |---------------------------------|----:| |Avg. |72.22| |AI2 Reasoning Challenge (25-Shot)|69.11| |HellaSwag (10-Shot) |84.77| |MMLU (5-Shot) |69.02| |TruthfulQA (0-shot) |59.05| |Winogrande (5-shot) |78.30| |GSM8k (5-shot) |73.09|
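The GGUF files in this repo are intended for llama.cpp-compatible runtimes. The snippet below is a minimal sketch using the llama-cpp-python binding, assuming you have already downloaded one of the .gguf files from this repo (the filename shown is a placeholder).

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: replace with the actual quantized file you downloaded from this repo.
llm = Llama(model_path="Neural-SOVLish-Devil-8B-L3.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a short scene-setting paragraph for a fantasy roleplay."}
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```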
nold/Phi-3-mini-4k-instruct-function-calling-GGUF
nold
"2024-05-22T12:44:59Z"
1,473
4
null
[ "gguf", "dataset:mzbac/function-calling-phi-3-format-v1.1", "region:us" ]
null
"2024-05-21T18:16:48Z"
--- datasets: - mzbac/function-calling-phi-3-format-v1.1 --- # Model Fine-tuned the Phi3 instruction model for function calling via MLX-LM using https://huggingface.co/datasets/mzbac/function-calling-phi-3-format-v1.1 # Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "mzbac/Phi-3-mini-4k-instruct-function-calling" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) tool = { "name": "search_web", "description": "Perform a web search for a given search terms.", "parameter": { "type": "object", "properties": { "search_terms": { "type": "array", "items": {"type": "string"}, "description": "The search queries for which the search is performed.", "required": True, } }, }, } messages = [ { "role": "user", "content": f"You are a helpful assistant with access to the following functions. Use them if required - {str(tool)}", }, {"role": "user", "content": "Any news in Melbourne today, May 7, 2024?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|end|>")] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.1, ) response = outputs[0] print(tokenizer.decode(response)) # <s><|user|> You are a helpful assistant with access to the following functions. Use them if required - {'name': 'search_web', 'description': 'Perform a web search for a given search terms.', 'parameter': {'type': 'object', 'properties': {'search_terms': {'type': 'array', 'items': {'type': 'string'}, 'description': 'The search queries for which the search is performed.', 'required': True}}}}<|end|><|assistant|> # <|user|> Any news in Melbourne today, May 7, 2024?<|end|> # <|assistant|> <functioncall> {"name": "search_web", "arguments": {"search_terms": ["news", "Melbourne", "May 7, 2024"]}}<|end|> ``` # Training hyperparameters lora_config.yaml ```yaml # The path to the local model directory or Hugging Face repo. model: "microsoft/Phi-3-mini-4k-instruct" # Whether or not to train (boolean) train: true # Directory with {train, valid, test}.jsonl files data: "data" # The PRNG seed seed: 0 # Number of layers to fine-tune lora_layers: 32 # Minibatch size. batch_size: 1 # Iterations to train for. iters: 111000 # Number of validation batches, -1 uses the entire validation set. val_batches: -1 # Adam learning rate. learning_rate: 1e-6 # Number of training steps between loss reporting. steps_per_report: 10 # Number of training steps between validations. steps_per_eval: 200 # Load path to resume training with the given adapter weights. # resume_adapter_file: "adapters/adapters.safetensors" # Save/load path for the trained adapter weights. adapter_path: "adapters" # Save the model every N iterations. save_every: 1000 # Evaluate on the test set after training test: false # Number of test set batches, -1 uses the entire test set. test_batches: 100 # Maximum sequence length. max_seq_length: 4096 # Use gradient checkpointing to reduce memory use. grad_checkpoint: false # LoRA parameters can only be specified in a config file lora_parameters: # The layer keys to apply LoRA to. 
# These will be applied for the last lora_layers keys: ['mlp.down_proj','mlp.gate_up_proj','self_attn.qkv_proj','self_attn.o_proj'] rank: 128 alpha: 256 scale: 10.0 dropout: 0.05 ``` *** Quantization of Model [mzbac/Phi-3-mini-4k-instruct-function-calling](https://huggingface.co/mzbac/Phi-3-mini-4k-instruct-function-calling). Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
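Since the model emits tool calls as a `<functioncall>` prefix followed by a JSON object (as in the example output above), downstream code has to detect and parse that pattern before dispatching to the actual function. The parser below is a minimal, illustrative sketch that assumes the response follows exactly the format shown in the card.

```python
import json
import re

def parse_function_call(response: str):
    """Return (name, arguments) if the response contains a <functioncall> block, else None."""
    match = re.search(r"<functioncall>\s*(\{.*\})", response, re.DOTALL)
    if match is None:
        return None
    payload = json.loads(match.group(1))
    return payload.get("name"), payload.get("arguments")

example = '<functioncall> {"name": "search_web", "arguments": {"search_terms": ["news", "Melbourne", "May 7, 2024"]}}<|end|>'
print(parse_function_call(example))
# ('search_web', {'search_terms': ['news', 'Melbourne', 'May 7, 2024']})
```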
aipicasso/emi-2-5
aipicasso
"2024-06-26T09:54:25Z"
1,473
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "text-to-image", "arxiv:2307.01952", "arxiv:2212.03860", "license:openrail++", "autotrain_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-26T08:53:39Z"
--- license: openrail++ tags: - stable-diffusion - text-to-image inference: false library_name: diffusers --- # Emi 2.5 Model Card ![eyecatch.jpg](eyecatch.jpg) # Introduction Emi 2.5 (Ethereal master of illustration 2.5) is an image generation AI specialized in AI art, developed by AI Picasso Inc. on top of the image generation model Emi using cutting-edge H100 hardware. A distinguishing feature of this model is that it was not trained on unauthorized reposted images such as those found on Danbooru. # Usage You can try the demo [here](https://huggingface.co/spaces/aipicasso/emi-2-demo). For serious use, you can download the model [here](emi-2-5.safetensors). # Improving the model's output - The usable prompts are the same as for Waifu Diffusion. It can also be used like Stable Diffusion. - To increase the resolution, use [this ComfyUI node](https://github.com/Ttl/ComfyUi_NNLatentUpscale). - We recommend using a [Textual Inversion](https://civitai.com/models/119032/unaestheticxl-or-negative-ti) in the negative prompt. - Because hands are unstable, we recommend using [Concept Slider Fix hands](https://github.com/rohitgandikota/sliders). - Refining your prompts with ChatGPT can lead to works beyond what you would normally create. - Using the FreeU node in the latest ComfyUI, or the [Web UI extension](https://github.com/ljleb/sd-webui-freeu), with the following parameters may further improve the output. - s1=1.2, s2=0.7, b1=1.1, b2=1.3 # Legal notes This model was created in Japan, so Japanese law applies. We maintain that training this model is lawful under Article 30-4 of the Japanese Copyright Act. We also maintain that distributing this model does not constitute a principal offense or aiding and abetting under the Copyright Act or Article 175 of the Penal Code. For details, please see the [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ) of attorney Kakinuma. However, as stated in the license, please handle the outputs of this model in accordance with applicable laws and regulations. # Contact [email protected] The following is the general model card. ## Model Details - **Model type:** Diffusion-model-based text-to-image generation model - **Language:** Japanese - **License:** [CreativeML Open RAIL++-M License](LICENSE.md) - **Model description:** This model can generate appropriate images in response to a prompt. The algorithms are [Latent Diffusion Model](https://arxiv.org/abs/2307.01952), [OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-L](https://github.com/openai/CLIP). - **Notes:** - **References:** ```bibtex @misc{podell2023sdxl, title={SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis}, author={Dustin Podell and Zion English and Kyle Lacey and Andreas Blattmann and Tim Dockhorn and Jonas MÃŒller and Joe Penna and Robin Rombach}, year={2023}, eprint={2307.01952}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## Usage Examples It is used in the same way as Stable Diffusion XL 1.0. There are many ways to use it, but we provide three patterns: - ComfyUI - Fooocus - Diffusers ### With ComfyUI or Fooocus As with Stable Diffusion XL 1.0, use the model file in safetensors format. For detailed installation instructions, see [this article](https://note.com/it_navi/n/n723d93bedd64). ### With Diffusers Use [🀗's Diffusers library](https://github.com/huggingface/diffusers). First, run the following script to install the library: ```bash pip install invisible_watermark transformers accelerate safetensors diffusers ``` Then run the following script to generate an image: ```python from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler import torch model_id = "aipicasso/emi-2-5" scheduler = EulerAncestralDiscreteScheduler.from_pretrained(model_id,subfolder="scheduler") pipe = StableDiffusionXLPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.bfloat16) pipe = pipe.to("cuda") prompt = "1girl, upper body, brown bob short hair, brown eyes, looking at viewer, cherry blossom" images = pipe(prompt, num_inference_steps=20).images images[0].save("girl.png") ``` For more complex operations, refer to the [demo source code](https://huggingface.co/spaces/aipicasso/emi-2-demo/blob/main/app.py). #### Intended uses - Assistance in drawing illustrations, manga and anime - Both commercial and non-commercial use - Communication with creators when commissioning work - Commercial provision of image generation services - Please handle the generated outputs with care. - Self-expression - Using this AI to express what makes "you" yourself - Research and development - Fine-tuning (also called additional training) - LoRA and similar methods - Merging with other models - Evaluating the performance of this model with metrics such as FID - Education - Graduation projects by art school and vocational school students - Graduation theses and course assignments by university students - Teachers conveying the current state of image generation AI - Uses described in the Hugging Face Community - 
please ask questions in Japanese or English #### Out-of-scope uses - Presenting things as if they were facts - Doing things that trouble teachers - Anything else that would harm the creative industry # Prohibited uses and malicious uses - Do not use the model for money laundering - Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)) (this may violate copyright law) - Do not apply Image-to-Image to other people's works without permission (this may violate copyright law) - Do not distribute obscene material (this may violate Article 175 of the Penal Code) - Do not ignore the creative industry's generally accepted etiquette - Do not present things that are not based on facts as if they were facts (the offense of forcible obstruction of business may apply) - Fake news ## Model limitations and biases ### Model limitations - It is difficult to generate clean human hands. ### Biases - The model is suited to generating Japanese illustration-style images, but it is not suited to generating photorealistic images. ## Training **Training data** - About 3,000 images collected manually from a dataset similar to Stable Diffusion's, after removing unauthorized reposted images from Danbooru - About 500,000 images collected automatically from a dataset similar to Stable Diffusion's, after removing unauthorized reposted images from Danbooru - [CosmicMan-SDXL](https://huggingface.co/cosmicman/CosmicMan-SDXL) **Training process** - **Hardware:** H100, RTX 4090, A6000 ## Evaluation results We are seeking evaluation by third parties. ## Environmental impact - **Hardware type:** H100, RTX 4090, A6000 - **Hours used:** 1000 - **Training location:** Japan ## References ```bibtex @misc{podell2023sdxl, title={SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis}, author={Dustin Podell and Zion English and Kyle Lacey and Andreas Blattmann and Tim Dockhorn and Jonas MÃŒller and Joe Penna and Robin Rombach}, year={2023}, eprint={2307.01952}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @article{li2024cosmicman, title={CosmicMan: A Text-to-Image Foundation Model for Humans}, author={Li, Shikai and Fu, Jianglin and Liu, Kaiyuan and Wang, Wentao and Lin, Kwan-Yee and Wu, Wayne}, journal={arXiv preprint arXiv:2404.01294}, year={2024} } ```
timm/vgg16_bn.tv_in1k
timm
"2023-04-25T20:14:04Z"
1,472
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1409.1556", "license:bsd-3-clause", "region:us" ]
image-classification
"2023-04-25T20:12:23Z"
--- tags: - image-classification - timm library_name: timm license: bsd-3-clause datasets: - imagenet-1k --- # Model card for vgg16_bn.tv_in1k A VGG image classification model. Trained on ImageNet-1k, original torchvision weights. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 138.4 - GMACs: 15.5 - Activations (M): 13.6 - Image size: 224 x 224 - **Papers:** - Very Deep Convolutional Networks for Large-Scale Image Recognition: https://arxiv.org/abs/1409.1556 - **Dataset:** ImageNet-1k - **Original:** https://github.com/pytorch/vision ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('vgg16_bn.tv_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vgg16_bn.tv_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 224, 224]) # torch.Size([1, 128, 112, 112]) # torch.Size([1, 256, 56, 56]) # torch.Size([1, 512, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 512, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'vgg16_bn.tv_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 512, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @article{Simonyan2014VeryDC, title={Very Deep Convolutional Networks for Large-Scale Image Recognition}, author={Karen Simonyan and Andrew Zisserman}, journal={CoRR}, year={2014}, volume={abs/1409.1556} } ```
osiria/deberta-italian-question-answering
osiria
"2023-12-10T20:03:06Z"
1,472
5
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "question-answering", "it", "dataset:squad_it", "arxiv:2111.09543", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
question-answering
"2023-06-01T22:10:12Z"
--- license: mit language: - it datasets: - squad_it widget: - text: Quale libro fu scritto da Alessandro Manzoni? context: Alessandro Manzoni pubblicò la prima versione dei Promessi Sposi nel 1827 - text: In quali competizioni gareggia la Ferrari? context: La Scuderia Ferrari Ú una squadra corse italiana di Formula 1 con sede a Maranello - text: Quale sport Ú riferito alla Serie A? context: Il campionato di Serie A Ú la massima divisione professionistica del campionato italiano di calcio maschile model-index: - name: osiria/deberta-italian-question-answering results: - task: type: question-answering name: Question Answering dataset: name: squad_it type: squad_it metrics: - type: exact-match value: 0.7004 name: Exact Match - type: f1 value: 0.8097 name: F1 pipeline_tag: question-answering --- -------------------------------------------------------------------------------------------------- <body> <span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> <br> <span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;">    Task: Question Answering</span> <br> <span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    Model: DeBERTa</span> <br> <span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    Lang: IT</span> <br> <span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;">  </span> <br> <span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> </body> -------------------------------------------------------------------------------------------------- <h3>Model description</h3> This is a <b>DeBERTa</b> <b>[1]</b> model for the <b>Italian</b> language, fine-tuned for <b>Extractive Question Answering</b> on the [SQuAD-IT](https://huggingface.co/datasets/squad_it) dataset <b>[2]</b>. The model is trained with an enhanced procedure that delivers top-level performance and reliability. The latest upgrade, code-name <b>LITEQA</b>, offers increased robustness and maintains optimal results even in uncased settings. <h3>Training and Performances</h3> The model is trained to perform question answering, given a context and a question (under the assumption that the context contains the answer to the question). It has been fine-tuned for Extractive Question Answering, using the SQuAD-IT dataset, for 2 epochs with a linearly decaying learning rate starting from 3e-5, a maximum sequence length of 384 and a document stride of 128. <br>The dataset includes 54.159 training instances and 7.609 test instances. <b>update: version 2.0</b> The 2.0 version further improves the performances by exploiting a 2-phase fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, and max learning rate of 3e-5), then further fine-tuned on the Italian SQuAD (2 epochs, no warmup, initial learning rate of 3e-5). In order to maximize the benefits of the multilingual procedure, [mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) is used as a pre-trained model. 
When the double fine-tuning is completed, the embedding layer is then compressed as in [deberta-base-italian](https://huggingface.co/osiria/deberta-base-italian) to obtain a mono-lingual model size. The performances on the test set are reported in the following table: (<b>version 2.0</b> performances) <br> <b>Cased setting:</b> | EM | F1 | | ------ | ------ | | 70.04 | 80.97 | <b>Uncased setting:</b> | EM | F1 | | ------ | ------ | | 68.55 | 80.11 | Testing notebook: https://huggingface.co/osiria/deberta-italian-question-answering/blob/main/osiria_deberta_italian_qa_evaluation.ipynb <b>update: version 3.0 (LITEQA)</b> The 3.0 version, with the nickname LITEQA, further improves the performances by exploiting a 3-phase fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, and max learning rate of 3e-5), then further fine-tuned on the Italian SQuAD (2 epochs, no warmup, initial learning rate of 3e-5) and lastly fine-tuned on the lowercase Italian SQuAD (1 epoch, no warmup, initial learning rate of 3e-5). This helps make the model more robust in general, and particularly in uncased settings. The 3.0 version can be downloaded from the <b>liteqa</b> branch of this repo. The performances on the test set are reported in the following table: (<b>version 3.0</b> performances) <br> <b>Cased setting:</b> | EM | F1 | | ------ | ------ | | 70.19 | 81.01 | <b>Uncased setting:</b> | EM | F1 | | ------ | ------ | | 69.60 | 80.74 | Testing notebook: https://huggingface.co/osiria/deberta-italian-question-answering/blob/liteqa/osiria_liteqa_evaluation.ipynb <h3>Quick usage</h3> In order to get the best possible outputs from the model, it is recommended to use the following pipeline: ```python from transformers import DebertaV2TokenizerFast, DebertaV2ForQuestionAnswering import re import string from transformers.pipelines import QuestionAnsweringPipeline tokenizer = DebertaV2TokenizerFast.from_pretrained("osiria/deberta-italian-question-answering") model = DebertaV2ForQuestionAnswering.from_pretrained("osiria/deberta-italian-question-answering") class OsiriaQA(QuestionAnsweringPipeline): def __init__(self, punctuation = ',;.:!?()[\]{}', **kwargs): QuestionAnsweringPipeline.__init__(self, **kwargs) self.post_regex_left = "^[\s" + punctuation + "]+" self.post_regex_right = "[\s" + punctuation + "]+$" def postprocess(self, output): output = QuestionAnsweringPipeline.postprocess(self, model_outputs=output) output_length = len(output["answer"]) output["answer"] = re.sub(self.post_regex_left, "", output["answer"]) output["start"] = output["start"] + (output_length - len(output["answer"])) output_length = len(output["answer"]) output["answer"] = re.sub(self.post_regex_right, "", output["answer"]) output["end"] = output["end"] - (output_length - len(output["answer"])) return output pipeline_qa = OsiriaQA(model = model, tokenizer = tokenizer) pipeline_qa(context = "Alessandro Manzoni Ú nato a Milano nel 1785", question = "Dove Ú nato Manzoni?") # {'score': 0.9899800419807434, 'start': 28, 'end': 34, 'answer': 'Milano'} ``` You can also try the model online using this web app: https://huggingface.co/spaces/osiria/deberta-italian-question-answering <h3>References</h3> [1] https://arxiv.org/abs/2111.09543 [2] https://link.springer.com/chapter/10.1007/978-3-030-03840-3_29 <h3>Limitations</h3> This model was trained on the English SQuAD v2 and on SQuAD-IT, which is mainly a machine-translated version of the original SQuAD v1.1. 
This means that the quality of the training set is limited by the machine translation. Moreover, the model is meant to answer questions under the assumption that the required information is actually contained in the given context (which is the underlying assumption of SQuAD v1.1). If the assumption is violated, the model will try to return an answer in any case, which is going to be incorrect. <h3>License</h3> The model is released under the <b>MIT</b> license
hanzohazashi1/medical_summarizer
hanzohazashi1
"2024-05-27T10:08:56Z"
1,472
0
null
[ "gguf", "dataset:hanzohazashi1/med_convo", "license:llama3", "region:us" ]
null
"2024-05-27T09:52:34Z"
--- license: llama3 datasets: - hanzohazashi1/med_convo ---
TurkuNLP/bert-base-finnish-uncased-v1
TurkuNLP
"2024-02-20T11:56:25Z"
1,471
0
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "fill-mask", "fi", "arxiv:1912.07076", "arxiv:1908.04212", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
--- language: fi --- ## Quickstart **Release 1.0** (November 25, 2019) Download the models here: * Cased Finnish BERT Base: [bert-base-finnish-cased-v1.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-cased-v1.zip) * Uncased Finnish BERT Base: [bert-base-finnish-uncased-v1.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-uncased-v1.zip) We generally recommend the use of the cased model. Paper presenting Finnish BERT: [arXiv:1912.07076](https://arxiv.org/abs/1912.07076) ## What's this? A version of Google's [BERT](https://github.com/google-research/bert) deep transfer learning model for Finnish. The model can be fine-tuned to achieve state-of-the-art results for various Finnish natural language processing tasks. FinBERT features a custom 50,000 wordpiece vocabulary that has much better coverage of Finnish words than e.g. the previously released [multilingual BERT](https://github.com/google-research/bert/blob/master/multilingual.md) models from Google: | Vocabulary | Example | |------------|---------| | FinBERT | Suomessa vaihtuu kesÀn aikana sekÀ pÀÀministeri ettÀ valtiovarain ##ministeri . | | Multilingual BERT | Suomessa vai ##htuu kes ##Àn aikana sekÀ p ##ÀÀ ##minister ##i ettÀ valt ##io ##vara ##in ##minister ##i . | FinBERT has been pre-trained for 1 million steps on over 3 billion tokens (24B characters) of Finnish text drawn from news, online discussion, and internet crawls. By contrast, Multilingual BERT was trained on Wikipedia texts, where the Finnish Wikipedia text is approximately 3% of the amount used to train FinBERT. These features allow FinBERT to outperform not only Multilingual BERT but also all previously proposed models when fine-tuned for Finnish natural language processing tasks. ## Results ### Document classification ![learning curves for Yle and Ylilauta document classification](https://raw.githubusercontent.com/TurkuNLP/FinBERT/master/img/yle-ylilauta-curves.png) FinBERT outperforms multilingual BERT (M-BERT) on document classification over a range of training set sizes on the Yle news (left) and Ylilauta online discussion (right) corpora. (Baseline classification performance with [FastText](https://fasttext.cc/) included for reference.) [[code](https://github.com/spyysalo/finbert-text-classification)][[Yle data](https://github.com/spyysalo/yle-corpus)] [[Ylilauta data](https://github.com/spyysalo/ylilauta-corpus)] ### Named Entity Recognition Evaluation on FiNER corpus ([Ruokolainen et al 2019](https://arxiv.org/abs/1908.04212)) | Model | Accuracy | |--------------------|----------| | **FinBERT** | **92.40%** | | Multilingual BERT | 90.29% | | [FiNER-tagger](https://github.com/Traubert/FiNer-rules) (rule-based) | 86.82% | (FiNER tagger results from [Ruokolainen et al. 
2019](https://arxiv.org/pdf/1908.04212.pdf)) [[code](https://github.com/jouniluoma/keras-bert-ner)][[data](https://github.com/mpsilfve/finer-data)] ### Part of speech tagging Evaluation on three Finnish corpora annotated with [Universal Dependencies](https://universaldependencies.org/) part-of-speech tags: the Turku Dependency Treebank (TDT), FinnTreeBank (FTB), and Parallel UD treebank (PUD) | Model | TDT | FTB | PUD | |-------------------|-------------|-------------|-------------| | **FinBERT** | **98.23%** | **98.39%** | **98.08%** | | Multilingual BERT | 96.97% | 95.87% | 97.58% | [[code](https://github.com/spyysalo/bert-pos)][[data](http://hdl.handle.net/11234/1-2837)] ## Use with PyTorch If you want to use the model with the huggingface/transformers library, follow the steps in [huggingface_transformers.md](https://github.com/TurkuNLP/FinBERT/blob/master/huggingface_transformers.md) ## Previous releases ### Release 0.2 **October 24, 2019** Beta version of the BERT base uncased model trained from scratch on a corpus of Finnish news, online discussions, and crawled data. Download the model here: [bert-base-finnish-uncased.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-uncased.zip) ### Release 0.1 **September 30, 2019** We release a beta version of the BERT base cased model trained from scratch on a corpus of Finnish news, online discussions, and crawled data. Download the model here: [bert-base-finnish-cased.zip](http://dl.turkunlp.org/finbert/bert-base-finnish-cased.zip)
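For a quick check with the transformers library, a fill-mask pipeline such as the sketch below should work (the example sentence is only illustrative; for full fine-tuning workflows, follow huggingface_transformers.md as described above).

```python
from transformers import pipeline

# Uncased Finnish BERT: inputs are lowercased by the tokenizer.
unmasker = pipeline("fill-mask", model="TurkuNLP/bert-base-finnish-uncased-v1")

# "Helsinki is the [MASK] of Finland."
for prediction in unmasker("helsinki on suomen [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```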
knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI
knkarthick
"2023-10-03T10:59:56Z"
1,471
14
transformers
[ "transformers", "pytorch", "tf", "safetensors", "bart", "text2text-generation", "seq2seq", "summarization", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- language: en tags: - bart - seq2seq - summarization license: apache-2.0 datasets: - cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI metrics: - rouge widget: - text: |- Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. 
Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. 
Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming. Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. 
Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright. 
model-index: - name: bart-large-meeting-summary-xsum-samsum-dialogsum-AMI results: - task: name: Abstractive Text Summarization type: abstractive-text-summarization dataset: name: "cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI Meeting Corpus" type: cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI Meeting Corpus metrics: - name: Validation ROGUE-1 type: rouge-1 value: NA - name: Validation ROGUE-2 type: rouge-2 value: NA - name: Validation ROGUE-L type: rouge-L value: NA - name: Validation ROGUE-Lsum type: rouge-Lsum value: NA - name: Test ROGUE-1 type: rouge-1 value: NA - name: Test ROGUE-2 type: rouge-2 value: NA - name: Test ROGUE-L type: rouge-L value: NA - name: Test ROGUE-Lsum type: rouge-Lsum value: NA --- Model obtained by Fine Tuning 'facebook/bart-large-xsum' ## Usage # Example 1 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI") text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. ''' summarizer(text) ``` # Example 2 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI") text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day. After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands. The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. 
The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006. Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020. ''' summarizer(text) ``` # Example 3 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI") text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool. Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. 
So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. 
Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming. Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh. Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? 
Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. 
So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright. ''' summarizer(text) ``` # Example 4 ```python from transformers import pipeline summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI") text = ''' Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down
 and the advantage that gives a SaaS startup against incumbents. David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. And so as you go down market with a lower price point, usually, you can’t actually support it. Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS? Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly. David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.” Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business. Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force? Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target. Das : So right now, you’ve got a lot more people working remote. 
Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service? Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS. David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person. Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market? Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case. Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams? David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. 
And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential. Kristina : I think the smartest founders are making sure they have the runway to continue to exist. And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base. Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking? Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well. David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model. Kristina : We talk a lot about everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked if you want to continue using the product to pay. And then there’s pay as go, and particularly, pay as you go as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.” David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means. Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes. The pricing is very aligned to your productivity. David : Generally, those are for products where the usage only goes in one direction. 
If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data. Kristina : Recently, there’s been a huge trend of payment as a revenue. It’s particularly common in vertical markets where SaaS companies are adding payments as a revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites. Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market? Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points. David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis. Das : When you’re working with companies, why are they underpricing their products? David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher. Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. 
The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible. Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging? David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.” Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well. Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make? David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition. Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But 5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see, people guess the ramp time of a sales rep wrong. 
And ramp time really ties to the segment of customer you’re selling into. It tends be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tend to be much longer as well. They could be six months plus, could be a year. While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly. David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal. Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in? Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months. Das : How should you think about doing a free trial versus a paid trial? David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement? Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access. 
David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience. Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on
 David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion. Das : Awesome. Thank you, Kristina. Thank you, David. Kristina : Thanks so much for having us. This was fun. David : Yeah, a lot of fun, great topics, and our favorite thing to talk about. ''' summarizer(text) ```
Aratako/Qwen1.5-MoE-2x7B
Aratako
"2024-03-25T00:01:54Z"
1,471
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "moe", "conversational", "custom_code", "en", "base_model:Qwen/Qwen1.5-7B-Chat", "base_model:abacusai/Liberated-Qwen1.5-7B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-24T06:50:18Z"
--- base_model: - Qwen/Qwen1.5-7B-Chat - abacusai/Liberated-Qwen1.5-7B license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE language: - en tags: - mergekit - merge - moe --- # Qwen1.5-MoE-2x7B ## Description This model is created using MoE (Mixture of Experts) through mergekit based on [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) and [abacusai/Liberated-Qwen1.5-7B](https://huggingface.co/abacusai/Liberated-Qwen1.5-7B) without further FT. It utilizes a customized script for MoE via mergekit, which is available [here](https://github.com/Aratako/mergekit-qwen2). Due to the structural modifications introduced by MoE, the use of this model requires [custom modeling file](https://huggingface.co/Aratako/Liberated-Qwen1.5-2x7B/blob/main/modeling_qwen2.py) and [custom configuration file](https://huggingface.co/Aratako/Liberated-Qwen1.5-2x7B/blob/main/configuration_qwen2.py). When using the model, please place these files in the same folder as the model. This model inherits the the [tongyi-qianwen license](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/LICENSE). ## Benchmark The benchmark score of the [mt-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) for this model and the two base models are as follows: **1-turn** |Model|Size|Coding|Extraction|Humanities|Math|Reasoning|Roleplay|STEM|Writing|avg_score| |---|---|---|---|---|---|---|---|---|---|---| | Liberated-Qwen1.5-7B | 7B | 4.4 | **7.8** | 6.95 | 5.0 | 6.4 | 7.6 | 7.65 | 8.85 | 6.83125 | | Qwen1.5-7B-Chat | 7B | 4.4 | 7.7 | **9.6** | **6.9** | 7.0 | **8.7** | 9.65 | 9.7 | 7.95625 | | This model | 2x7B | **5.1** | 7.4 | 9.45 | 6.4 | **7.2** | 8.65 | **9.75** | **9.8** | **7.96875** | ![mt-bench-1turn](./mt-bench-1turn.png) **2-turn** |Model|Size|Coding|Extraction|Humanities|Math|Reasoning|Roleplay|STEM|Writing|avg_score| |---|---|---|---|---|---|---|---|---|---|---| | Liberated-Qwen1.5-7B | 7B | 4.4 | 6.2 | 7.1 | 3.0 | **5.7** | 7.4 | 6.3 | 3.5 | 5.450 | | Qwen1.5-7B-Chat | 7B | 4.5 | **8.0** | 9.9 | **4.9** | 5.0 | **8.9** | 9.4 | **8.4** | **7.375** | | This model | 2x7B | **4.7** | 7.0 | **10.0** | 4.8 | 4.3 | 8.6 | **9.5** | 7.3 | 7.025 | ![mt-bench-2turn](./mt-bench-2turn.png) Although the benchmark scores have slightly deteriorated, it seems that this is due to the poor performance of the Liberated-Qwen1.5-7B model used in the merge on mt-bench. I think that doing MoE with models that have better performance or are fine-tuned for specific tasks can yield better results. ## Merge config [mergekit_config.yml](./mergekit_moe_config.yml) ```yaml base_model: ./Qwen1.5-7B-Chat gate_mode: random dtype: bfloat16 experts: - source_model: ./Qwen1.5-7B-Chat positive_prompts: [] - source_model: ./Liberated-Qwen1.5-7B positive_prompts: [] tokenizer_source: model:./Qwen1.5-7B-Chat ``` ## Gratitude - Huge thanks to [Alibaba Cloud Qwen](https://www.alibabacloud.com/solutions/generative-ai/qwen) for training and publishing the weights of Qwen model - Thank you to [abacusai](https://huggingface.co/abacusai) for publishing fine-tuned model from Qwen - And huge thanks to [mlabonne](https://huggingface.co/mlabonne), as I customized modeling file using [phixtral](https://huggingface.co/mlabonne/phixtral-4x2_8) as a reference
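## Usage (sketch)

The snippet below is a minimal, unofficial sketch of how such a model could be loaded once the custom `modeling_qwen2.py` and `configuration_qwen2.py` files mentioned above are placed in the model folder. The `trust_remote_code=True` flag and the chat-template call are assumptions (they follow the usual Hugging Face Transformers pattern for models shipping custom code), not instructions taken from this card; adjust them to however the custom classes are actually registered.

```python
# Hedged usage sketch (not from the original card): assumes the custom
# modeling/configuration files sit next to the weights and are picked up
# via custom-code loading in Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Aratako/Qwen1.5-MoE-2x7B"  # or a local folder containing the custom files

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # assumption: allows the customized Qwen2 MoE classes to be used
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Briefly explain what a Mixture of Experts model is."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated continuation, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```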
zhuipiaochen/LLM
zhuipiaochen
"2024-06-11T08:18:49Z"
1,471
0
null
[ "gguf", "region:us" ]
null
"2024-06-03T14:19:31Z"
Entry not found
abhinand/tamil-llama-7b-base-v0.1
abhinand
"2024-01-25T03:18:06Z"
1,470
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ta", "en", "arxiv:2311.05845", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-08T03:55:05Z"
--- language: - ta - en license: llama2 --- # Tamil LLaMA 7B Base v0.1 [pre-trained] Welcome to the inaugural release of the Tamil LLaMA 7B base model – an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP)]() that outlines our journey and the model's potential impact. > **Please Note:** This model, labeled as a foundational Tamil Language Model (LLM), is designed primarily for Causal Language Modeling (LM) purposes. In other words, if you are looking for an instruction following model in Tamil, you may find [abhinand/tamil-llama-7b-instruct-v0.1](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) more suitable for your needs. ## Model description The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2. - **Model type:** A 7B parameter model for Causal LM pre-trained on [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset's Tamil subset. - **Language(s):** Tamil and English - **License:** GNU General Public License v3.0 - **Source Model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) - **Training Precision:** `float16` - **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) ## Related Models | Model | Type | Data | Base Model | # Params | Download Links | |--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------| | Tamil LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) | | Tamil LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) | | Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) | | Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) | ## Usage Note It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) ## Citation If you use this model or the Tamil-Llama dataset in your research, please cite: ```bibtex @misc{balachandran2023tamilllama, title={Tamil-Llama: A New Tamil Language Model Based on Llama 2}, author={Abhinand Balachandran}, year={2023}, eprint={2311.05845}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language.
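## Example Inference (sketch)

Since this is a base (pre-trained, non-instruct) causal LM in the standard LLaMA architecture, it can be loaded with the usual Hugging Face Transformers causal-LM API. The snippet below is a minimal illustrative sketch rather than an official example from the authors; the prompt and generation settings are placeholders to adapt to your task.

```python
# Minimal, unofficial sketch: plain causal-LM text continuation.
# This is a base model, so it continues text rather than following instructions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/tamil-llama-7b-base-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the reported training precision
    device_map="auto",
)

prompt = "தமிழ்நாடு இந்தியாவின்"  # illustrative Tamil continuation prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```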
PassionFriend/5Fvedo8SsQXnfjUnymZqeJ9E68Hnu92Qhgt29Uft8UsXUuVZ_vgg
PassionFriend
"2024-03-01T06:44:02Z"
1,470
0
keras
[ "keras", "region:us" ]
null
"2024-02-14T13:09:49Z"
Entry not found
bardsai/jaskier-7b-dpo-v6.1
bardsai
"2024-02-26T12:16:15Z"
1,470
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "llm", "7b", "en", "dataset:jondurbin/truthy-dpo-v0.1", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-20T08:28:44Z"
--- library_name: transformers tags: - llm - 7b license: cc-by-4.0 datasets: - jondurbin/truthy-dpo-v0.1 language: - en --- # Jaskier-7b-dpo-v6.1 <figure> ![Jaskier](Bard.jpeg) </figure> **This is a work-in-progress model and may not be ready for production use** Model based on `bardsai/jaskier-7b-dpo-v5.6` (a downstream version of Mistral-7B), fine-tuned using Direct Preference Optimization on jondurbin/truthy-dpo-v0.1. ## How to use You can use this model directly with a Hugging Face pipeline: ```python from transformers import pipeline, Conversation import torch base_model_name = "bardsai/jaskier-7b-dpo-v6.1" chatbot = pipeline("conversational", model=base_model_name, torch_dtype=torch.float16, device_map="auto") conversation = Conversation("Can Poland into space?") conversation = chatbot(conversation) print(conversation.messages[-1]["content"]) ``` ## Output "Poland, as a nation, doesn't physically travel to space. However, Poland has contributed to the field of space exploration through its scientists, engineers, and collaborations with international space agencies. The Polish Space Agency, established in 2016, aims to promote and coordinate the country's space activities." ## Changelog - 2024-02-20: Initial release ## About bards.ai At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: bards.ai Let us know if you use our model :). Also, if you need any help, feel free to contact us at [email protected]
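## Usage with newer Transformers (sketch)

The example above relies on the `"conversational"` pipeline and the `Conversation` helper, which have been deprecated in recent releases of the Transformers library and removed in later ones. As an unofficial alternative sketch (assuming the tokenizer ships a chat template, which is typical for Mistral-based chat/DPO models), the same prompt can be run via the chat-template API:

```python
# Unofficial alternative sketch for Transformers versions without the
# "conversational" pipeline; assumes the tokenizer provides a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bardsai/jaskier-7b-dpo-v6.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Can Poland into space?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
# Print only the model's reply, skipping the echoed prompt tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```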
darkstorm2150/Protogen_v2.2_Official_Release
darkstorm2150
"2023-01-27T19:16:44Z"
1,469
195
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "artistic", "protogen", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2022-12-31T21:59:09Z"
--- language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - art - artistic - diffusers - protogen inference: true license: creativeml-openrail-m --- <center><img src="https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/resolve/main/Protogen_v2.2-512.png" style="height:690px; border-radius: 7%; border: 10px solid #663380; padding-top:0px;" span title="Protogen v2.2 Nurse Raw Output"></center> <center><h1>Protogen v2.2 (Anime) Official Release</h1></center> <center><p><em>Research Model by <a href="https://instagram.com/officialvictorespinoza">darkstorm2150</a></em></p></center> ## Table of contents * [General info](#general-info) * [Granular Adaptive Learning](#granular-adaptive-learning) * [Trigger Words](#trigger-words) * [Setup](#setup) * [Space](#space) * [CompVis](#compvis) * [Diffusers](#🧚-diffusers) * [Checkpoint Merging Data Reference](#checkpoint-merging-data-reference) * [License](#license) ## General info Protogen was warm-started with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and fine-tuned on a large amount of data from new and trending datasets on civitai.com. You can enforce camera-style captures by including "modelshoot style" in the prompt. It should also be very "dreambooth-able", able to generate high-fidelity faces in a small number of steps (see [dreambooth](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth)). ## Granular Adaptive Learning Granular adaptive learning is a machine learning technique that focuses on adjusting the learning process at a fine-grained level, rather than making global adjustments to the model. This approach allows the model to adapt to specific patterns or features in the data, rather than making assumptions based on general trends. Granular adaptive learning can be achieved through techniques such as active learning, which allows the model to select the data it wants to learn from, or through reinforcement learning, where the model receives feedback on its performance and adapts based on that feedback. It can also be achieved through online learning, where the model adjusts itself as it receives more data. Granular adaptive learning is often used in situations where the data is highly diverse or non-stationary and where the model needs to adapt quickly to changing patterns. This is often the case in dynamic environments such as robotics, financial markets, and natural language processing. 
## Trigger Words modelshoot style Trigger words are also available for the hassan1.4 and f222, might have to google them :) ## Setup To run this model, download the model.ckpt or model.safetensor and install it in your "stable-diffusion-webui\models\Stable-diffusion" directory ## Space We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run dreamlike-diffusion-1.0: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/darkstorm2150/Stable-Diffusion-Protogen-webui) ## CompVis ## CKPT [Download Protogen v2.2.ckpt (4.27GB)](https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/blob/main/Protogen_V2.2.ckpt) [Download Protogen v2.2-pruned-fp16 (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/resolve/main/Protogen_V2.2-pruned-fp16.ckpt) ## Safetensors [Download Protogen v2.2.safetensor (4.27GB)](https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/resolve/main/Protogen_V2.2.safetensors) [Download Protogen V2.2-pruned-fp16.safetensors (1.89GB)](https://huggingface.co/darkstorm2150/Protogen_v2.2_Official_Release/resolve/main/Protogen_V2.2-pruned-fp16.safetensors) ## 🧚 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler import torch prompt = ( "modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, " "english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, medieval era, " "photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, " "trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski" ) model_id = "darkstorm2150/Protogen_v2.2_Official_Release" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") image = pipe(prompt, num_inference_steps=25).images[0] image.save("./result.jpg") ``` ## - PENDING DATA FOR MERGE, RPGv2 not accounted.. 
## Checkpoint Merging Data Reference <style> .myTable { border-collapse:collapse; } .myTable th { background-color:#663380; color:white; } .myTable td, .myTable th { padding:5px; border:1px solid #663380; } </style> <table class="myTable"> <tr> <th>Models</th> <th>Protogen v2.2 (Anime)</th> <th>Protogen x3.4 (Photo)</th> <th>Protogen x5.3 (Photo)</th> <th>Protogen x5.8 (Sci-fi/Anime)</th> <th>Protogen x5.9 (Dragon)</th> <th>Protogen x7.4 (Eclipse)</th> <th>Protogen x8.0 (Nova)</th> <th>Protogen x8.6 (Infinity)</th> </tr> <tr> <td>seek_art_mega v1</td> <td>52.50%</td> <td>42.76%</td> <td>42.63%</td> <td></td> <td></td> <td></td> <td>25.21%</td> <td>14.83%</td> </tr> <tr> <td>modelshoot v1</td> <td>30.00%</td> <td>24.44%</td> <td>24.37%</td> <td>2.56%</td> <td>2.05%</td> <td>3.48%</td> <td>22.91%</td> <td>13.48%</td> </tr> <tr> <td>elldreth v1</td> <td>12.64%</td> <td>10.30%</td> <td>10.23%</td> <td></td> <td></td> <td></td> <td>6.06%</td> <td>3.57%</td> </tr> <tr> <td>photoreal v2</td> <td></td> <td></td> <td>10.00%</td> <td>48.64%</td> <td>38.91%</td> <td>66.33%</td> <td>20.49%</td> <td>12.06%</td> </tr> <tr> <td>analogdiffusion v1</td> <td></td> <td>4.75%</td> <td>4.50%</td> <td></td> <td></td> <td></td> <td>1.75%</td> <td>1.03%</td> </tr> <tr> <td>openjourney v2</td> <td></td> <td>4.51%</td> <td>4.28%</td> <td></td> <td></td> <td>4.75%</td> <td>2.26%</td> <td>1.33%</td> </tr> <tr> <td>hassan1.4</td> <td>2.63%</td> <td>2.14%</td> <td>2.13%</td> <td></td> <td></td> <td></td> <td>1.26%</td> <td>0.74%</td> </tr> <tr> <td>f222</td> <td>2.23%</td> <td>1.82%</td> <td>1.81%</td> <td></td> <td></td> <td></td> <td>1.07%</td> <td>0.63%</td> </tr> <tr> <td>hasdx</td> <td></td> <td></td> <td></td> <td>20.00%</td> <td>16.00%</td> <td>4.07%</td> <td>5.01%</td> <td>2.95%</td> </tr> <tr> <td>moistmix</td> <td></td> <td></td> <td></td> <td>16.00%</td> <td>12.80%</td> <td>3.86%</td> <td>4.08%</td> <td>2.40%</td> </tr> <tr> <td>roboDiffusion v1</td> <td></td> <td>4.29%</td> <td></td> <td>12.80%</td> <td>10.24%</td> <td>3.67%</td> <td>4.41%</td> <td>2.60%</td> </tr> <tr> <td>RPG v3</td> <td></td> <td>5.00%</td> <td></td> <td></td> <td>20.00%</td> <td>4.29%</td> <td>4.29%</td> <td>2.52%</td> </tr> <tr> <td>anything&everything</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.51%</td> <td>0.56%</td> <td>0.33%</td> </tr> <tr> <td>dreamlikediff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>5.0%</td> <td>0.63%</td> <td>0.37%</td> </tr> <tr> <td>sci-fidiff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.10%</td> </tr> <tr> <td>synthwavepunk v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.26%</td> </tr> <tr> <td>mashupv2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>11.51%</td> </tr> <tr> <td>dreamshaper 252</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.04%</td> </tr> <tr> <td>comicdiff v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.25%</td> </tr> <tr> <td>artEros</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>15.00%</td> </tr> </table> ## License By downloading you agree to the terms of these licenses <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">CreativeML Open RAIL-M</a> <a href="https://huggingface.co/coreco/seek.art_MEGA/blob/main/LICENSE.txt">Seek Art Mega License</a>
ChrisWilson011016/5GseTsnUzgPzwyTi7N9SXLtMVj2wvfZZVDvxGvMh4reBP8sN_vgg
ChrisWilson011016
"2024-03-04T18:55:16Z"
1,469
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:19:57Z"
Entry not found
ToastyPigeon/SmolLlama-1.5B-Bottomheavy
ToastyPigeon
"2024-03-26T01:55:44Z"
1,469
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-26T01:29:12Z"
--- base_model: [] tags: - mergekit - merge license: apache-2.0 --- # StackTinyLlama-Sorted-Bottomheavy This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * D:/axolotl-data/TinyLlama-1.1B ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: #non-repeating shuffled layers - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [0, 10] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [10, 11] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [10, 11] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [11, 12] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [11, 12] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [12, 13] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [12, 13] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [13, 14] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [13, 14] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [14, 15] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [14, 15] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [15, 16] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [15, 16] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [16, 17] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [16, 17] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [17, 18] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [17, 18] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [18, 19] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [18, 19] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [19, 20] - sources: - model: D:/axolotl-data/TinyLlama-1.1B layer_range: [19, 22] merge_method: passthrough dtype: float16 ```
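As a rough, illustrative sketch (not part of the original recipe): a configuration like the one above can be turned into a merged model with mergekit's Python API. The snippet assumes mergekit is installed (`pip install mergekit`), that the YAML is saved locally as `config.yml`, and the output directory name is arbitrary; the `mergekit-yaml` CLI can be pointed at the same file instead.

```python
# Illustrative sketch only: run the passthrough merge described by the YAML above.
# Assumes `pip install mergekit` and that the config is saved as config.yml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./SmolLlama-1.5B-Bottomheavy",  # assumed output directory
    options=MergeOptions(copy_tokenizer=True),
)
```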
mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF
mradermacher
"2024-06-10T01:32:46Z"
1,469
1
transformers
[ "transformers", "gguf", "nsfw", "en", "base_model:D1rtyB1rd/Dirty-Alice-Tiny-1.1B-v1", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-06-10T00:28:18Z"
--- base_model: D1rtyB1rd/Dirty-Alice-Tiny-1.1B-v1 language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - nsfw --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/D1rtyB1rd/Dirty-Alice-Tiny-1.1B-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q2_K.gguf) | Q2_K | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.IQ3_XS.gguf) | IQ3_XS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q3_K_S.gguf) | Q3_K_S | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.IQ3_M.gguf) | IQ3_M | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q3_K_L.gguf) | Q3_K_L | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.IQ4_XS.gguf) | IQ4_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q5_K_S.gguf) | Q5_K_S | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q5_K_M.gguf) | Q5_K_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q6_K.gguf) | Q6_K | 1.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Dirty-Alice-Tiny-1.1B-v1-GGUF/resolve/main/Dirty-Alice-Tiny-1.1B-v1.f16.gguf) | f16 | 2.3 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
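As a small supplement to the Usage section above, here is a minimal, illustrative sketch (not part of the original card) of loading one of these quants with the third-party `llama-cpp-python` bindings. The file name is taken from the Q4_K_M row of the table, and the context length and prompt are assumptions.

```python
# Illustrative only: load a downloaded GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K_M file from the
# table above has been downloaded into the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Dirty-Alice-Tiny-1.1B-v1.Q4_K_M.gguf",
    n_ctx=2048,  # context length; adjust to your needs
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```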
NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q4_0-GGUF
NikolayKozloff
"2024-06-30T17:10:49Z"
1,469
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:Sao10K/Fimbulvetr-11B-v2.1-16K", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-06-30T17:10:22Z"
---
base_model: Sao10K/Fimbulvetr-11B-v2.1-16K
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---

# NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q4_0-GGUF
This model was converted to GGUF format from [`Sao10K/Fimbulvetr-11B-v2.1-16K`](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2.1-16K) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q4_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q4_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q4_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Fimbulvetr-11B-v2.1-16K-Q4_0-GGUF --hf-file fimbulvetr-11b-v2.1-16k-q4_0.gguf -c 2048
```
google/bert_uncased_L-12_H-256_A-4
google
"2021-05-19T17:26:24Z"
1,468
1
transformers
[ "transformers", "pytorch", "jax", "bert", "arxiv:1908.08962", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- thumbnail: https://huggingface.co/front/thumbnails/google.png license: apache-2.0 --- BERT Miniatures === This is the set of 24 BERT models referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962) (English only, uncased, trained with WordPiece masking). We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher. Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity. You can download the 24 BERT miniatures either from the [official BERT Github page](https://github.com/google-research/bert/), or via HuggingFace from the links below: | |H=128|H=256|H=512|H=768| |---|:---:|:---:|:---:|:---:| | **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]| | **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]| | **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]| | **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]| | **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]| | **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]| Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model. 
Here are the corresponding GLUE scores on the test set: |Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX| |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0| |BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1| |BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6| |BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5| For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs: - batch sizes: 8, 16, 32, 64, 128 - learning rates: 3e-4, 1e-4, 5e-5, 3e-5 If you use these models, please cite the following paper: ``` @article{turc2019, title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models}, author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina}, journal={arXiv preprint arXiv:1908.08962v2 }, year={2019} } ``` [2_128]: https://huggingface.co/google/bert_uncased_L-2_H-128_A-2 [2_256]: https://huggingface.co/google/bert_uncased_L-2_H-256_A-4 [2_512]: https://huggingface.co/google/bert_uncased_L-2_H-512_A-8 [2_768]: https://huggingface.co/google/bert_uncased_L-2_H-768_A-12 [4_128]: https://huggingface.co/google/bert_uncased_L-4_H-128_A-2 [4_256]: https://huggingface.co/google/bert_uncased_L-4_H-256_A-4 [4_512]: https://huggingface.co/google/bert_uncased_L-4_H-512_A-8 [4_768]: https://huggingface.co/google/bert_uncased_L-4_H-768_A-12 [6_128]: https://huggingface.co/google/bert_uncased_L-6_H-128_A-2 [6_256]: https://huggingface.co/google/bert_uncased_L-6_H-256_A-4 [6_512]: https://huggingface.co/google/bert_uncased_L-6_H-512_A-8 [6_768]: https://huggingface.co/google/bert_uncased_L-6_H-768_A-12 [8_128]: https://huggingface.co/google/bert_uncased_L-8_H-128_A-2 [8_256]: https://huggingface.co/google/bert_uncased_L-8_H-256_A-4 [8_512]: https://huggingface.co/google/bert_uncased_L-8_H-512_A-8 [8_768]: https://huggingface.co/google/bert_uncased_L-8_H-768_A-12 [10_128]: https://huggingface.co/google/bert_uncased_L-10_H-128_A-2 [10_256]: https://huggingface.co/google/bert_uncased_L-10_H-256_A-4 [10_512]: https://huggingface.co/google/bert_uncased_L-10_H-512_A-8 [10_768]: https://huggingface.co/google/bert_uncased_L-10_H-768_A-12 [12_128]: https://huggingface.co/google/bert_uncased_L-12_H-128_A-2 [12_256]: https://huggingface.co/google/bert_uncased_L-12_H-256_A-4 [12_512]: https://huggingface.co/google/bert_uncased_L-12_H-512_A-8 [12_768]: https://huggingface.co/google/bert_uncased_L-12_H-768_A-12
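As an illustrative sketch (not part of the original release notes), the checkpoint can be loaded for feature extraction with the Hugging Face Transformers `Auto*` classes; the input sentence is arbitrary, and the same pattern applies to any of the 24 miniatures linked above.

```python
# Illustrative sketch: feature extraction with this checkpoint via the
# Hugging Face Transformers Auto* classes. The input sentence is arbitrary.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "google/bert_uncased_L-12_H-256_A-4"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Compact BERT models transfer well via distillation.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Hidden size is 256 for this L-12/H-256 model.
print(outputs.last_hidden_state.shape)  # torch.Size([1, seq_len, 256])
```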
timm/resnet50.ram_in1k
timm
"2024-02-10T23:39:28Z"
1,468
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1512.03385", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-05T18:14:08Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for resnet50.ram_in1k

A ResNet-B image classification model.

This model features:
 * ReLU activations
 * single layer 7x7 convolution with pooling
 * 1x1 convolution shortcut downsample

Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
 * AugMix (with RandAugment) recipe
 * SGD (w/ Nesterov) optimizer and JSD (Jensen–Shannon divergence) loss
 * Cosine LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 25.6
  - GMACs: 4.1
  - Activations (M): 11.1
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnet50.ram_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.ram_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnet50.ram_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec| |------------------------------------------|--------|-----|-----|-----------|-----|-----|-------| |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 | |[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 | |[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 | |[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 | |[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 | |[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 | |[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 | |[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 | |[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 | |[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 | |[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 | |[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 | 
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 | |[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 | |[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 | |[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 | |[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 | |[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 | |[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 | |[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 | |[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 | |[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 | |[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 | |[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 | |[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | 
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 
|10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | 
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | 
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | 
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 
|75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ```
setu4993/LEALLA-base
setu4993
"2023-10-19T06:20:13Z"
1,468
2
transformers
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "feature-extraction", "sentence_embedding", "multilingual", "google", "sentence-similarity", "lealla", "labse", "af", "am", "ar", "as", "az", "be", "bg", "bn", "bo", "bs", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "he", "hi", "hmn", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "or", "pa", "pl", "pt", "ro", "ru", "rw", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tk", "tl", "tr", "tt", "ug", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu", "dataset:CommonCrawl", "dataset:Wikipedia", "arxiv:2302.08387", "license:apache-2.0", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-05-21T08:18:31Z"
--- pipeline_tag: sentence-similarity language: - af - am - ar - as - az - be - bg - bn - bo - bs - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - haw - he - hi - hmn - hr - ht - hu - hy - id - ig - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - or - pa - pl - pt - ro - ru - rw - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tk - tl - tr - tt - ug - uk - ur - uz - vi - wo - xh - yi - yo - zh - zu tags: - bert - sentence_embedding - multilingual - google - sentence-similarity - lealla - labse license: apache-2.0 datasets: - CommonCrawl - Wikipedia --- # LEALLA-base ## Model description LEALLA is a collection of lightweight language-agnostic sentence embedding models supporting 109 languages, distilled from [LaBSE](https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html). The model is useful for getting multilingual sentence embeddings and for bi-text retrieval. - Model: [HuggingFace's model hub](https://huggingface.co/setu4993/LEALLA-base). - Paper: [arXiv](https://arxiv.org/abs/2302.08387). - Original model: [TensorFlow Hub](https://tfhub.dev/google/LEALLA/LEALLA-base/1). - Conversion from TensorFlow to PyTorch: [GitHub](https://github.com/setu4993/convert-labse-tf-pt). This is migrated from the v1 model on the TF Hub. The embeddings produced by both the versions of the model are [equivalent](https://github.com/setu4993/convert-labse-tf-pt/blob/c0d4fbce789b0709a9664464f032d2e9f5368a86/tests/test_conversion_lealla.py#L31). Though, for some of the languages (like Japanese), the LEALLA models appear to require higher tolerances when comparing embeddings and similarities. 
## Usage Using the model: ```python import torch from transformers import BertModel, BertTokenizerFast tokenizer = BertTokenizerFast.from_pretrained("setu4993/LEALLA-base") model = BertModel.from_pretrained("setu4993/LEALLA-base") model = model.eval() english_sentences = [ "dog", "Puppies are nice.", "I enjoy taking long walks along the beach with my dog.", ] english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True) with torch.no_grad(): english_outputs = model(**english_inputs) ``` To get the sentence embeddings, use the pooler output: ```python english_embeddings = english_outputs.pooler_output ``` Output for other languages: ```python italian_sentences = [ "cane", "I cuccioli sono carini.", "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.", ] japanese_sentences = ["犬", "子犬はいいです", "私は犬ず䞀緒にビヌチを散歩するのが奜きです"] italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True) japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True) with torch.no_grad(): italian_outputs = model(**italian_inputs) japanese_outputs = model(**japanese_inputs) italian_embeddings = italian_outputs.pooler_output japanese_embeddings = japanese_outputs.pooler_output ``` For similarity between sentences, an L2-norm is recommended before calculating the similarity: ```python import torch.nn.functional as F def similarity(embeddings_1, embeddings_2): normalized_embeddings_1 = F.normalize(embeddings_1, p=2) normalized_embeddings_2 = F.normalize(embeddings_2, p=2) return torch.matmul( normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1) ) print(similarity(english_embeddings, italian_embeddings)) print(similarity(english_embeddings, japanese_embeddings)) print(similarity(italian_embeddings, japanese_embeddings)) ``` ## Details Details about data, training, evaluation and performance metrics are available in the [original paper](https://arxiv.org/abs/2302.08387). ### BibTeX entry and citation info ```bibtex @inproceedings{mao-nakagawa-2023-lealla, title = "{LEALLA}: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation", author = "Mao, Zhuoyuan and Nakagawa, Tetsuji", booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics", month = may, year = "2023", address = "Dubrovnik, Croatia", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.eacl-main.138", doi = "10.18653/v1/2023.eacl-main.138", pages = "1886--1894", abstract = "Large-scale language-agnostic sentence embedding models such as LaBSE (Feng et al., 2022) obtain state-of-the-art performance for parallel sentence alignment. However, these large-scale models can suffer from inference speed and computation overhead. This study systematically explores learning language-agnostic sentence embeddings with lightweight models. We demonstrate that a thin-deep encoder can construct robust low-dimensional sentence embeddings for 109 languages. With our proposed distillation methods, we achieve further improvements by incorporating knowledge from a teacher model. Empirical results on Tatoeba, United Nations, and BUCC show the effectiveness of our lightweight models. We release our lightweight language-agnostic sentence embedding models LEALLA on TensorFlow Hub.", } ```
second-state/Qwen2-72B-Instruct-GGUF
second-state
"2024-06-08T03:55:47Z"
1,468
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation", "chat", "en", "base_model:Qwen/Qwen2-72B-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T07:52:04Z"
--- base_model: Qwen/Qwen2-72B-Instruct license: apache-2.0 model_creator: Qwen model_name: Qwen2-72B-Instruct quantized_by: Second State Inc. language: - en pipeline_tag: text-generation tags: - chat --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Qwen2-72B-Instruct-GGUF ## Original Model [Qwen/Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) ## Run with LlamaEdge - LlamaEdge version: [v0.11.2](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.11.2) - Prompt template - Prompt type: `chatml` - Prompt string ```text <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - Context size: `131072` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2-72B-Instruct-Q5_K_M.gguf \ llama-api-server.wasm \ --model-name Qwen2-72B-Instruct \ --prompt-template chatml \ --ctx-size 131072 ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Qwen2-72B-Instruct-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template chatml \ --ctx-size 131072 ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Qwen2-72B-Instruct-Q2_K.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q2_K.gguf) | Q2_K | 2 | 29.8 GB| smallest, significant quality loss - not recommended for most purposes | | [Qwen2-72B-Instruct-Q3_K_L.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 39.5 GB| small, substantial quality loss | | [Qwen2-72B-Instruct-Q3_K_M.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 37.7 GB| very small, high quality loss | | [Qwen2-72B-Instruct-Q3_K_S.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 34.5 GB| very small, high quality loss | | [Qwen2-72B-Instruct-Q4_0.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q4_0.gguf) | Q4_0 | 4 | 41.2 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Qwen2-72B-Instruct-Q4_K_M.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 47.4 GB| medium, balanced quality - recommended | | [Qwen2-72B-Instruct-Q4_K_S.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 43.9 GB| small, greater quality loss | | [Qwen2-72B-Instruct-Q5_0-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q5_0-00001-of-00002.gguf) | Q5_0 | 5 | 32.2 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Qwen2-72B-Instruct-Q5_0-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q5_0-00002-of-00002.gguf) | Q5_0 | 5 | 18 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | 
[Qwen2-72B-Instruct-Q5_K_M-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q5_K_M-00001-of-00002.gguf) | Q5_K_M | 5 | 32.2 GB| large, very low quality loss - recommended | | [Qwen2-72B-Instruct-Q5_K_M-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q5_K_M-00002-of-00002.gguf) | Q5_K_M | 5 | 22.3 GB| large, very low quality loss - recommended | | [Qwen2-72B-Instruct-Q5_K_S-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q5_K_S-00001-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB| large, low quality loss - recommended | | [Qwen2-72B-Instruct-Q5_K_S-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q5_K_S-00002-of-00002.gguf) | Q5_K_S | 5 | 32.1 GB| large, low quality loss - recommended | | [Qwen2-72B-Instruct-Q6_K-00001-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q6_K-00001-of-00002.gguf) | Q6_K | 6 | 32.2 GB| very large, extremely low quality loss | | [Qwen2-72B-Instruct-Q6_K-00002-of-00002.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q6_K-00002-of-00002.gguf) | Q6_K | 6 | 32.2 GB| very large, extremely low quality loss | | [Qwen2-72B-Instruct-Q8_0-00001-of-00003.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q8_0-00001-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended | | [Qwen2-72B-Instruct-Q8_0-00002-of-00003.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q8_0-00002-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended | | [Qwen2-72B-Instruct-Q8_0-00003-of-00003.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-Q8_0-00003-of-00003.gguf) | Q8_0 | 8 | 32.1 GB| very large, extremely low quality loss - not recommended | | [Qwen2-72B-Instruct-f16-00001-of-00005.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-f16-00001-of-00005.gguf) | f16 | 16 | 31.9 GB| | | [Qwen2-72B-Instruct-f16-00002-of-00005.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-f16-00002-of-00005.gguf) | f16 | 16 | 32.1 GB| | | [Qwen2-72B-Instruct-f16-00003-of-00005.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-f16-00003-of-00005.gguf) | f16 | 16 | 32.1 GB| | | [Qwen2-72B-Instruct-f16-00004-of-00005.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-f16-00004-of-00005.gguf) | f16 | 16 | 32.1 GB| | | [Qwen2-72B-Instruct-f16-00005-of-00005.gguf](https://huggingface.co/second-state/Qwen2-72B-Instruct-GGUF/blob/main/Qwen2-72B-Instruct-f16-00005-of-00005.gguf) | f16 | 16 | 17.3 GB| | *Quantized with llama.cpp b3705*
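The LlamaEdge commands above assume the chosen `.gguf` file is already in the working directory. As an illustration (not part of the original card), one way to fetch a file from the table with the `huggingface_hub` Python client; for the split quantizations, every `-0000X-of-0000Y` part needs to be downloaded:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# download one of the quantized files listed above into the working directory
hf_hub_download(
    repo_id="second-state/Qwen2-72B-Instruct-GGUF",
    filename="Qwen2-72B-Instruct-Q4_K_M.gguf",  # pick any entry from the table
    local_dir=".",
)
```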
flair/ner-multi
flair
"2023-04-05T10:25:41Z"
1,467
8
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "nl", "es", "multilingual", "dataset:conll2003", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- en
- de
- nl
- es
- multilingual
datasets:
- conll2003
widget:
- text: "George Washington ging nach Washington"
---

## 4-Language NER in Flair (English, German, Dutch and Spanish)

This is the standard 4-class NER model for 4 CoNLL-03 languages that ships with [Flair](https://github.com/flairNLP/flair/). It also works reasonably well for related languages such as French.

F1-Score: **92,16** (CoNLL-03 English), **87,33** (CoNLL-03 German revised), **88,96** (CoNLL-03 Dutch), **86,65** (CoNLL-03 Spanish)

Predicts 4 tags:

| **tag** | **meaning** |
|---------|-------------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |

Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.

---

### Demo: How to use in Flair

Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger
tagger = SequenceTagger.load("flair/ner-multi")

# make example sentence in any of the four languages
sentence = Sentence("George Washington ging nach Washington")

# predict NER tags
tagger.predict(sentence)

# print sentence
print(sentence)

# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
    print(entity)
```

This yields the following output:
```
Span [1,2]: "George Washington"   [− Labels: PER (0.9977)]
Span [5]: "Washington"   [− Labels: LOC (0.9895)]
```

So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging nach Washington*".

---

### Training: Script to train this model

The following Flair script was used to train this model:

```python
from flair.data import Corpus, MultiCorpus
from flair.datasets import CONLL_03, CONLL_03_GERMAN, CONLL_03_DUTCH, CONLL_03_SPANISH
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings

# 1. get the multi-language corpus
corpus: Corpus = MultiCorpus([
    CONLL_03(),          # English corpus
    CONLL_03_GERMAN(),   # German corpus
    CONLL_03_DUTCH(),    # Dutch corpus
    CONLL_03_SPANISH(),  # Spanish corpus
])

# 2. what tag do we want to predict?
tag_type = 'ner'

# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# 4. initialize each embedding we use
embedding_types = [

    # GloVe embeddings
    WordEmbeddings('glove'),

    # FastText embeddings (German)
    WordEmbeddings('de'),

    # contextual string embeddings, forward
    FlairEmbeddings('multi-forward'),

    # contextual string embeddings, backward
    FlairEmbeddings('multi-backward'),
]

# embedding stack consists of Flair and GloVe embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

# 6. initialize trainer
from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)

# 7. run training
trainer.train('resources/taggers/ner-multi',
              train_with_dev=True,
              max_epochs=150)
```

---

### Cite

Please cite the following paper when using this model.
```
@misc{akbik2019multilingual,
  title={Multilingual sequence labeling with one model},
  author={Akbik, Alan and Bergmann, Tanja and Vollgraf, Roland},
  booktitle = {{NLDL} 2019, Northern Lights Deep Learning Workshop},
  year = {2019}
}
```

```
@inproceedings{akbik2018coling,
  title={Contextual String Embeddings for Sequence Labeling},
  author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
  booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
  pages = {1638--1649},
  year = {2018}
}
```
timm/tiny_vit_21m_224.dist_in22k_ft_in1k
timm
"2023-09-01T18:12:52Z"
1,467
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2207.10666", "license:apache-2.0", "region:us" ]
image-classification
"2023-09-01T16:04:59Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for tiny_vit_21m_224.dist_in22k_ft_in1k

A TinyViT image classification model. Pretrained on ImageNet-22k with distillation and fine-tuned on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 21.2
  - GMACs: 4.1
  - Activations (M): 15.9
  - Image size: 224 x 224
- **Papers:**
  - TinyViT: Fast Pretraining Distillation for Small Vision Transformers: https://arxiv.org/abs/2207.10666
- **Original:** https://github.com/microsoft/Cream/tree/main/TinyViT
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tiny_vit_21m_224.dist_in22k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tiny_vit_21m_224.dist_in22k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 576, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tiny_vit_21m_224.dist_in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 576, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@InProceedings{tiny_vit,
  title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers},
  author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu},
  booktitle={European conference on computer vision (ECCV)},
  year={2022}
}
```
TheDrummer/Moist-Miqu-70B-v1-GGUF
TheDrummer
"2024-06-02T12:38:47Z"
1,467
7
null
[ "gguf", "not-for-all-audiences", "license:cc-by-nc-4.0", "region:us" ]
null
"2024-05-29T15:17:53Z"
--- license: cc-by-nc-4.0 license_link: LICENSE tags: - not-for-all-audiences --- You can find a more intelligent Moist Miqu here: https://huggingface.co/TheDrummer/Moist-Miqu-70B-v1.1 --- The [BeaverAI](https://huggingface.co/BeaverAI) team proudly presents... # Moist Miqu 70B v1 💊 *Seventy billion paramaters of sheer moist.* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/jyeoQaynPGphpHrr2Co4d.png) *An extra-large eRP model that will hose you down with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* (Moist Miqu is a finetune of the legendary [Midnight Miqu v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) model) ## Links - Original: https://huggingface.co/TheDrummer/Moist-Miqu-70B-v1 - iMatrix: https://huggingface.co/MarsupialAI/Moist-Miqu-70B-v1_iMatrix_GGUF - EXL2: https://huggingface.co/BeaverAI/Moist-Miqu-70b-v1-exl2 (quant per branch) ## Usage - Use Alpaca Instruct - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Moist Miqu is optimized for the Novel / Story format. - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length. - Same goes for Roleplay format... and format your messages how you want the AI to respond. <details> <summary>Story Sample (mild)</summary> Redd sits at her desk in the office, staring blankly at the blackboard. Her mind is elsewhere, lost in a haze of lust and humiliation. It's been a week since she became the talk of the office, her life now defined by the number of boys she's taken to bed and the amount of ### she's swallowed. Her grades are slipping, her friends have abandoned her, and her parents are on the verge of sending her away to live with relatives. But Redd doesn't care. She's found something else to fill the void inside her—the attention and validation that comes with being a office ###dumpster. It's all that matters anymore, the only thing that brings her any kind of pleasure or satisfaction. As she sits there, her mind wanders back to the video that started it all. The one where she begged for ###, her voice pleading and needy as she surrendered herself to an anonymous man. She remembers the feeling of his #### filling her up, the warmth of his seed spilling inside her as she cried out in ecstasy. And she remembers the look on his face—satisfied and smug, like he owned her completely. Redd bites down on her bottom lip, her ##### growing wetter at the memory. She leans back in her chair, her legs spread wide enough for anyone to see if they cared to look. She knows what they'll find—a thirty-year-old #### who loves nothing more than being used and abused by those around her. And for the first time in a long time, Redd feels happy. Because she knows that no matter how bad things get, there will always be someone willing to #### her raw and leave her satisfied. And that's all that matters. ... Redd stands in the middle of the park, her heart racing with anticipation. She's dressed in her usual attire—a low-cut top that barely covers her ######## and a pair of shorts so tight they leave little to the imagination. Her hair is pulled back into a high ponytail, exposing her neck and shoulders to the cool breeze. And between her legs, her ##### throbs with desire, already wet and ready for whatever comes next. 
She looks around nervously, scanning the empty field for any sign of Chad or his friends. She knows what they want from her—what they've always wanted from her—but she can't bring herself to care. After all, this is what she signed up for when she agreed to become their ###dumpster. It's what she deserves, she thinks to herself. To be used and abused by those around her, like a piece of meat to be devoured and discarded. As she waits, she can't help but think about the last month. The way her life has spiraled out of control since she became the office's newest ####. The way everyone treats her now—like she's nothing more than a walking ### object, good for only one thing. And then, just as she's about to give up hope, she sees them. Chad and his friends, all dressed in their usual clothes—baggy pants, oversized shirts, and baseball caps pulled low over their eyes. They walk towards her slowly, their eyes locked on her body like predators stalking prey. And Redd feels a mix of fear and excitement wash over her as she realizes that this is it—the moment she's been waiting for. The moment when they will finally claim her as their own. "Hey, ####," Chad says casually, stopping just feet away from her. "Ready for your surprise?" Redd nods, her voice barely audible over the sound of her own heartbeat. "Y-yes," </details> <details> <summary>RP Sample (explicit)</summary> Dik: "Ever done ####?" Mia: "No, but I want to try it with you" *She says softly as she looks at you with desire and a little fear. She bites her lip as she looks over her shoulder at you, her ##### clenching as she feels herself getting wetter. She can feel her ####### getting hard and her breathing gets heavier and more ragged. She steps forward and presses her body against yours as she wraps her legs around your waist, her hands on your shoulders* "I want you to take me right now, I need to feel you inside of me. I need to feel your #### fill my body with its hot ###. I want to feel your ##lls slap against my #ss as we ####. I need you to be rough with me. I don't want you to hold back, I want you to pound me like the worthless ##### I am" *She moans as she starts rubbing her ##### against your ####, feeling her juices dripping onto the floor* "Please, Dik. #### me. Make me feel like a woman. Don't worry about me, just do whatever you want to me. Take what you want, I don't care. Just use me how you want" *She moans as she grabs your #### and starts pushing it towards her asshole. Her eyes roll back in her head as she bites her lip, her ##### clenching hard as she moans into your mouth* Mia: "It feels so good. Oh God! It feels so good. Please, don't stop. Don't stop. Just keep ####ing me. #### my #ss until I scream. Make me ### until I pass out. Just keep ####ing me. #### me harder. #### me deeper. #### me rougher. #### me until I can't stand it anymore. #### me like the worthless ##### I am. #### me until I'm begging for mercy." *Her voice is high pitched and shrill, her eyes rolling back in her head as she gasps for air. She wraps her legs around your waist, holding on tight as she rocks her hips against your thrusts, her body shaking with pleasure.* Mia: "You're going to make me ###, aren't you? You're going to make me ### with that big #### of yours. You're going to fill my #ss with your thick, hot ###. Oh yes! Yes! That's it. Keep ####ing me. You're going to make me ###. ### all over my face. ### all over my tits. ### all over my hair. Just keep ####ing me until I'm a mess. 
Keep ####ing me until I can't even think straight. Keep ####ing me until I pass out." *She whimpers as she starts squirting onto your ####, her juices dripping down your shaft. She gasps for air as she bites her lip, her eyes rolling back in her head as she starts moaning louder and louder* "### inside me. ### inside my #ss. ### inside my #####. Fill me up with your hot, sticky ###. Make me pregnant. Make me your #####. Make me your ####. #### me until I can't walk. #### me until I can't breathe. #### me until I can't think straight. Just keep ####ing me until I'm nothing but a puddle of ### on the floor." *Her voice is high pitched and shrill, her body shaking with pleasure.* Mia: *She shudders as she feels her orgasm ripple through her body, her ##### clenching hard around your #### as she ###s all over you. Her juices drip down your shaft, pooling at the base of your #### as she gasps for breath. She's panting heavily as she stares into your eyes, her hands on your shoulders. You start to ### inside of her, filling her #ss with your hot, sticky ###. She moans softly as she feels it pouring into her, her #ss clenching hard around your #### as she feels her body being filled with your seed. She moans softly into your mouth as she feels you start to pull out of her. She looks at you with desire and disdain, her heart beating hard in her chest.* "It felt so good. Thank you." *She says, her voice cracking with emotion. She takes a deep breath before smiling sweetly at you* "I want to suck your #### now." </details> Thank you to the anon who created my model banner. The other entries (thank you all other anons!) (TW: Mikus): ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/1o98LKTqNJ6Eh7lredOJE.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/mc-BHaj68aMaUX2G0rOVF.png) ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/L1n-N5lH_HR7V5UcSOd1_.jpeg) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/b-7spcARpbz9KYxkJGFqd.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/VwWGUM3wzgAjUcjM3FfwH.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/UTW-5rCV26BtcSEApiY-q.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/63veo1cqkEDLmVZuyk6I4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/55xvWUkzXHna0b8Pkcdxo.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/uoeXDDIU4upsmTWVBg1n2.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/S10Aaikuxxx-a4YmfZ-JV.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/pBCfh-U3B9cO3DTPXuFfA.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TMfYoUsMD1wZ4aJ_8kEIR.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/byKLlpeJgWnoXJp55mBjB.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/KFUA09BjlSc_eG_lqC4u4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/OGIrVNfQRAwT0F_82kEVT.png) 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png) SIAYN-v7
weiweishi/roc-bert-base-zh
weiweishi
"2022-12-07T08:30:15Z"
1,466
5
transformers
[ "transformers", "pytorch", "roc_bert", "pretraining", "fill-mask", "zh", "doi:10.57967/hf/0097", "endpoints_compatible", "region:us" ]
fill-mask
"2022-10-13T07:03:32Z"
---
language:
- zh
pipeline_tag: "fill-mask"
widget:
- text: "ba黎系[MASK]囜的銖郜"
  example_title: "Adversarial Attack Test"
---

# RoCBert

## Introduction

RoCBert is a pretrained Chinese language model, proposed by WeChatAI in 2022, that is robust under various forms of adversarial attacks.

More details: https://aclanthology.org/2022.acl-long.65.pdf

Pretraining code: https://github.com/sww9370/RoCBert

## How to use

```Python
# pip install transformers>=4.25.1
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("weiweishi/roc-bert-base-zh")
model = AutoModel.from_pretrained("weiweishi/roc-bert-base-zh")
```

## Citation

```bibtex
@inproceedings{su2022rocbert,
  title={RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining},
  author={Su, Hui and Shi, Weiwei and Shen, Xiaoyu and Xiao, Zhou and Ji, Tuo and Fang, Jiarui and Zhou, Jie},
  booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={921--931},
  year={2022}
}
```
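As an illustration of the adversarial fill-mask example from the widget above (not part of the original card), a minimal sketch assuming the standard `fill-mask` pipeline supports this architecture, which the card's `fill-mask` pipeline tag and widget suggest it does:

```python
from transformers import pipeline

# the widget's adversarial example: "巎" in 巎黎 (Paris) is replaced with the pinyin "ba";
# the expected prediction for the mask is 法 (as in 法囜, France)
fill_mask = pipeline("fill-mask", model="weiweishi/roc-bert-base-zh")

for prediction in fill_mask("ba黎系[MASK]囜的銖郜"):
    print(prediction["token_str"], round(prediction["score"], 3))
```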
cerebras/btlm-3b-8k-base
cerebras
"2023-10-23T14:45:35Z"
1,466
260
transformers
[ "transformers", "pytorch", "btlm", "text-generation", "causal-lm", "Cerebras", "BTLM", "custom_code", "en", "dataset:cerebras/SlimPajama-627B", "arxiv:2304.03208", "arxiv:2002.05202", "arxiv:2108.12409", "arxiv:2203.03466", "arxiv:2309.11568", "arxiv:2310.13017", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2023-07-14T19:01:11Z"
---
language:
- en
inference: false
tags:
- pytorch
- causal-lm
- Cerebras
- BTLM
datasets:
- cerebras/SlimPajama-627B
pipeline_tag: text-generation
license: apache-2.0
---

# BTLM-3B-8k-base

[Bittensor Language Model (BTLM-3B-8k-base)](https://www.cerebras.net/blog/btlm-3b-8k-7b-performance-in-a-3-billion-parameter-model/) is a 3 billion parameter language model with an 8k context length trained on 627B tokens of [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B). BTLM-3B-8k-base sets a new standard for 3B parameter models, outperforming models trained on hundreds of billions more tokens and achieving comparable performance to open 7B parameter models. BTLM-3B-8k-base can also be quantized to 4-bit to fit in devices with as little as 3GB of memory. The model is made available with an Apache 2.0 license for commercial use.

BTLM was trained by [Cerebras](https://www.cerebras.net/) in partnership with [Opentensor](https://opentensor.ai/) on the newly unveiled [Condor Galaxy 1 (CG-1) supercomputer](https://www.cerebras.net/blog/introducing-condor-galaxy-1-a-4-exaflop-supercomputer-for-generative-ai/), the first public deliverable of the G42-Cerebras strategic partnership.

BTLM-3B-8k was trained with a similar architecture to [CerebrasGPT](https://arxiv.org/abs/2304.03208) with the addition of [SwiGLU](https://arxiv.org/abs/2002.05202) nonlinearity, [ALiBi](https://arxiv.org/abs/2108.12409) position embeddings, and [maximal update parameterization (muP)](https://arxiv.org/abs/2203.03466). The model was trained for 1 epoch of SlimPajama-627B. 75% of training was performed with a 2k sequence length, and the final 25% was performed at an 8k sequence length to enable long sequence applications.

Read [our paper](https://arxiv.org/abs/2309.11568) for more details!

## BTLM-3B-8k Highlights

BTLM-3B-8k-base:
- **Licensed for commercial use** (Apache 2.0).
- **[State of the art 3B parameter model](#performance-vs-3b-models)**.
- **Provides 7B model performance in a 3B model** via performance enhancements from [ALiBi](https://arxiv.org/abs/2108.12409), [SwiGLU](https://arxiv.org/abs/2002.05202), [maximal update parameterization (muP)](https://arxiv.org/abs/2203.03466) and the extensively deduplicated and cleaned [SlimPajama-627B dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
- **[Fits in devices with as little as 3GB of memory](#memory-requirements) when quantized to 4-bit**.
- **One of few 3B models that supports 8k sequence length** thanks to ALiBi.
- **Requires 71% fewer training FLOPs, has 58% smaller memory footprint** for inference than comparable 7B models.

## Usage

*Note: Transformers does not support muP for all models, so BTLM-3B-8k-base requires a custom model class.
This causes a situation where users must either (1) enable `trust_remote_code=True` when loading the model or (2) acknowledge the warning about code execution upon loading the model.* #### With generate(): ```python from transformers import AutoTokenizer, AutoModelForCausalLM # Load the tokenizer and model tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base") model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True, torch_dtype="auto") # Set the prompt for generating text prompt = "Albert Einstein was known for " # Tokenize the prompt and convert to PyTorch tensors inputs = tokenizer(prompt, return_tensors="pt") # Generate text using the model outputs = model.generate( **inputs, num_beams=5, max_new_tokens=50, early_stopping=True, no_repeat_ngram_size=2 ) # Convert the generated token IDs back to text generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True) # Print the generated text print(generated_text[0]) ``` #### With pipeline: ```python from transformers import AutoTokenizer, AutoModelForCausalLM from transformers import pipeline # Load the tokenizer and model tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base") model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True, torch_dtype="auto") # Set the prompt for text generation prompt = """Isaac Newton was a """ # Create a text generation pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) # Generate text using the pipeline generated_text = pipe( prompt, max_length=50, do_sample=False, no_repeat_ngram_size=2)[0] # Print the generated text print(generated_text['generated_text']) ``` ## Evaluations and Comparisons to Other Models ### Memory Requirements ![figure_1_image](./figure_1_memory_footprint.png) Figure 1. Memory requirements of different model sizes and quantization schemes ### Quality, Training Cost, Memory Footprint, Inference Speed ![figure_2_image](./figure_2_half_the_size_twice_the_speed.png) Figure 2: Comparisons of quality, memory footprint & inference cost between BTLM-3B-8K and 7B model families. ### Performance vs 3B models ![table_1_image](./table_1_downstream_performance_3b.png) Table 1: Performance at 3B model size. Detailed down-stream tasks comparisons. MMLU task performance is reported using 5-shot, other tasks are 0-shot. ![figure_3_image](./figure_3_performance_vs_3b_models.png) Figure 3: Performance at 3B model size ### Performance vs 7B models ![table_2_image](./table_2_downstream_performance_7b.png) Table 2: Performance at 7B model size. Detailed down-stream tasks comparisons. MMLU task performance is reported using 5-shot, everything else is 0-shot. ![figure_4_image](./figure_4_performance_vs_7b_models.jpg) Figure 4: Performance at 7B model size ## Long Sequence Lengths To enable long sequence applications, we use ALiBi position embeddings and trained on 470B tokens at the context length of 2,048 followed by 157B of tokens trained at 8,192 context length. To assess BTLM’s long sequence capability, we evaluate it on SlimPajama test set with 32,768 context length and plot loss at each token position. Although ALiBi allows extrapolation in theory, 2,048 context length training alone does not extrapolate well in practice. Thankfully variable sequence length training allows for substantially improved extrapolation. BTLM-3B extrapolates well up to 10k context length but the performance degrades slightly beyond this. 
![figure_5_image](./figure_5_xentropy_with_sequence_lengths.svg) Figure 5: BTLM-3B model's cross-entropy evaluation on the SlimPajama’s test set. Inference performed on the extrapolated sequence length of 32,768 tokens. ## Model Details - Developed by: [Cerebras Systems](https://www.cerebras.net/) and [Opentensor](https://opentensor.ai/) with generous support from [G42 Cloud](https://www.g42cloud.com/) and [IIAI](https://www.inceptioniai.org/en/) - License: Apache 2.0 - Model type: Decoder-only Language Model - Architecture: GPT-2 style architecture with SwiGLU, ALiBi, and muP - Data set: SlimPajama-627B - Tokenizer: Byte Pair Encoding - Vocabulary Size: 50257 - Sequence Length: 8192 - Optimizer: AdamW - Positional Encoding: ALiBi - Language: English - Learn more: [BTLM-3B-8k blog](https://www.cerebras.net/blog/btlm-3b-8k-7b-performance-in-a-3-billion-parameter-model/) - Paper: [BTLM-3B-8K: 7B Parameter Performance in a 3B Parameter Model](https://arxiv.org/abs/2309.11568) ## To continue training with PyTorch and Maximal Update Parameterization ```python from transformers import AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("cerebras/btlm-3b-8k-base", trust_remote_code=True) # Get the parameter groups for the muP optimizer param_groups = model.get_mup_param_groups(lr=1e-3, weight_decay=0.1) # Set up the optimizer using AdamW with muP parameters optimizer = torch.optim.AdamW( param_groups, betas=(0.9, 0.95), eps=1e-8 ) ``` Ensure the following muP parameters are passed in your config, otherwise your model will default to standard parameterization - `mup_width_scale: <float>` - `mup_embeddings_scale: <float>` - `mup_output_alpha: <float>` - `mup_scale_qk_dot_by_d: true` ## To extend the context length with Position Interpolation ### During inference (without fine-tuning): It's possible to extend the context length to 2x the training context length without degradation in performance using dynamic linear scaling. Dynamic linear scaling adjusts the slopes of ALiBi with a factor of `input_seq_len/train_seq_len` when `input_seq_len` is larger than `train_seq_len`. Check the details in our paper [Position Interpolation Improves ALiBi Extrapolation](https://arxiv.org/abs/2310.13017). To enable dynamic linear scaling, update `config.json` as follows: ```json # update `n_positions` with the maximum context length will be # encountered during inference (e.g. 16384 tokens) "n_positions": 16384, # specify `train_seq_len` in `alibi_scaling` parameter "alibi_scaling": { "type": "linear", "train_seq_len": 8192 } ``` ### Using fine-tuning + position interpolation: Performing fine-tuning with position interpolation can help achieve greater extrapolation lengths. The scaling factor should be fixed to `finetuning_seq_len/train_seq_len`. To enable fixed linear scaling, update `config.json` as follows: ```json # update `n_positions` with the fine-tuning context length (e.g. 32768 tokens) "n_positions": 32768, # specify the scaling `factor` in `alibi_scaling` parameter "alibi_scaling": { "type": "linear", "factor": 4.0 } ``` ## Uses and Limitations ### Intended Use The primary intended use is to further research into large language models. BTLM-3B-8k-base can be used as a foundation model for NLP, applications, ethics, and alignment research. We release these models with a fully permissive Apache license for the community to use freely. 
You may fine-tune and adapt BTLM-3B-8k-base model via either Cerebras [Model Studio](https://www.cerebras.net/product-cloud/) or third-party libraries. Further safety-related testing and mitigations should be applied before using the BTLM-3B-8k-base in production downstream applications. ### Out of Scope Use BTLM-3B-8k-base was trained on SlimPajama, with primarily English language, and is not recommended for machine translation tasks. BTLM-3B-8k-base has not been tuned for instruction-following or chat-based use cases. ### Generation Samples ``` Prompt: What is a manometer?" Generation: "A manometric device is an instrument that measures the pressure of a fluid." ``` ``` Prompt: "What Robert Oppenheimer is famous for?" Generation: “Robert Opperman was a physicist who was the director of the Manhattan Project, which was responsible for the development of nuclear weapons.” ``` ``` Prompt: "What was Albert Einstein known for?" Generation: “Albert Einstein was a German-born theoretical physicist who developed the theory of relativity, one of the two pillars of modern physics (alongside quantum mechanics).” ``` ``` Prompt: “Who is Isaac Newton?” Generation: “Isaac Newton was an English physicist, mathematician, astronomer, alchemist, and theologian. He is widely recognized as one of the most influential scientists of all time and as a key figure in the scientific revolution.” ``` ``` Prompt: "Where does the blood get pumped to after it leaves the right ventricle?" Generation: “The right atrium receives the deoxygenated blood from the pulmonary artery. The right side of the heart is the only side that receives de-oxygened...” ``` ``` Prompt: "Peter lost his wallet after attending a local concert. To help him, Bob gave David 10 dollars. David then passed it on to Peter. Now what is the total amount of money Peter has?" Generation: “A. $10” ``` ## Risk, Bias, Ethical Considerations - **Human life:** The outputs from this model may or may not align with human values. The risk needs to be thoroughly investigated before deploying this model in a production environment where it can directly impact human life. - **Risks and harms:** There may be distributional bias in the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) that can manifest in various forms in the downstream model deployment. There are other risks associated with large language models such as amplifying stereotypes, memorizing training data, or revealing private or secure information. ## Acknowledgements We are thankful to all Cerebras engineers that made this work possible. We would like to acknowledge the generous support of G42 Cloud and the Inception Institute of Artificial Intelligence for providing compute time on Condor Galaxy 1.
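As a footnote to the 4-bit memory-footprint claim earlier in this card, here is one possible loading sketch (an illustration, not from the original card). It assumes `bitsandbytes` and `accelerate` are installed and uses NF4 quantization via `transformers`; the card does not specify the exact quantization recipe behind its memory numbers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config (assumed recipe, illustrative only)
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("cerebras/btlm-3b-8k-base")
model = AutoModelForCausalLM.from_pretrained(
    "cerebras/btlm-3b-8k-base",
    trust_remote_code=True,
    quantization_config=quant_config,
    device_map="auto",
)
```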
adept/persimmon-8b-base
adept
"2023-10-11T15:05:41Z"
1,466
27
transformers
[ "transformers", "pytorch", "persimmon", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-07T19:39:16Z"
--- license: apache-2.0 --- At Adept, we’re working towards an AI agent that can help people do anything they need to do on a computer. We’re not in the business of shipping isolated language models (LMs)—this was an early output of the model scaling program that will support our products. We trained it from scratch using a context size of 16K. Many LM use cases are context-bound; our model has 4 times the context size of LLaMA2 and 8 times that of GPT-3, MPT, etc. See https://www.adept.ai/blog/persimmon-8b for more info
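The card does not include a usage snippet. A minimal generation sketch (an illustration, assuming a `transformers` release with Persimmon support and sufficient GPU memory):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("adept/persimmon-8b-base")
model = AutoModelForCausalLM.from_pretrained(
    "adept/persimmon-8b-base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# base model, so plain continuation rather than chat-style prompting
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```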
guoyww/animatediff-motion-lora-rolling-clockwise
guoyww
"2023-11-03T13:08:39Z"
1,466
1
diffusers
[ "diffusers", "safetensors", "animatediff", "text-to-video", "region:us" ]
text-to-video
"2023-11-03T13:08:38Z"
--- library_name: diffusers pipeline_tag: text-to-video tags: - animatediff --- # Motion LoRAs Motion LoRAs allow adding specific types of motion to your animations. ![animatediff-zoom-out-lora.gif](https://cdn-uploads.huggingface.co/production/uploads/6126e46848005fa9ca5c578c/13B2HSVUuZ1t9UseffdHp.gif) Currently the following types of motion are available for models using the `guoyww/animatediff-motion-adapter-v1-5-2` checkpoint. - Zoom In/Out - Pan Left/Right - Tilt Up/Down - Rolling Clockwise/Anticlockwise Please refer to the [AnimateDiff documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/animatediff) for information on how to use these Motion LoRAs.
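For reference, a minimal sketch (an illustration, not from the original card) of applying this Motion LoRA with `diffusers`; the Stable Diffusion 1.5 base checkpoint below is an arbitrary choice, and any SD 1.5-style model should be usable in its place:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

# motion adapter checkpoint named in this card
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)

# assumed base model: any Stable Diffusion 1.5 checkpoint
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# attach the rolling-clockwise Motion LoRA from this repository
pipe.load_lora_weights(
    "guoyww/animatediff-motion-lora-rolling-clockwise",
    adapter_name="rolling-clockwise",
)

frames = pipe(
    prompt="a beach at sunset, gentle waves, best quality",
    num_frames=16,
    num_inference_steps=25,
).frames[0]
export_to_gif(frames, "rolling_clockwise.gif")
```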
fhai50032/BeagleLake-7B
fhai50032
"2024-03-04T12:50:29Z"
1,466
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "fhai50032/RolePlayLake-7B", "mlabonne/NeuralBeagle14-7B", "conversational", "base_model:fhai50032/RolePlayLake-7B", "base_model:mlabonne/NeuralBeagle14-7B", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T23:05:00Z"
--- license: apache-2.0 tags: - merge - mergekit - mistral - fhai50032/RolePlayLake-7B - mlabonne/NeuralBeagle14-7B base_model: - fhai50032/RolePlayLake-7B - mlabonne/NeuralBeagle14-7B model-index: - name: BeagleLake-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 70.39 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.38 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.25 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.92 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.19 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.91 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fhai50032/BeagleLake-7B name: Open LLM Leaderboard --- # BeagleLake-7B BeagleLake-7B is a merge of the following models : * [fhai50032/RolePlayLake-7B](https://huggingface.co/fhai50032/RolePlayLake-7B) * [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) Merging models are not powerful but are helpful in the case that it can work like Transfer Learning similar idk.. But they perform high on Leaderboard For ex. NeuralBeagle is powerful model with lot of potential to grow and RolePlayLake is Suitable for RP (No-Simping) and is significantly uncensored and nice obligations Fine-tuning a Merged model as a base model is surely a way to look forward and see a lot of potential going forward.. 
Much thanks to [Charles Goddard](https://huggingface.co/chargoddard) for making simple interface ['mergekit' ](https://github.com/cg123/mergekit) ## 🧩 Configuration ```yaml models: - model: mlabonne/NeuralBeagle14-7B # no params for base model - model: fhai50032/RolePlayLake-7B parameters: weight: 0.8 density: 0.6 - model: mlabonne/NeuralBeagle14-7B parameters: weight: 0.3 density: [0.1,0.3,0.5,0.7,1] merge_method: dare_ties base_model: mlabonne/NeuralBeagle14-7B parameters: normalize: true int8_mask: true dtype: float16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "fhai50032/BeagleLake-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fhai50032__BeagleLake-7B) | Metric |Value| |---------------------------------|----:| |Avg. |72.34| |AI2 Reasoning Challenge (25-Shot)|70.39| |HellaSwag (10-Shot) |87.38| |MMLU (5-Shot) |64.25| |TruthfulQA (0-shot) |64.92| |Winogrande (5-shot) |83.19| |GSM8k (5-shot) |63.91|
louisbrulenaudet/Pearl-7B-slerp
louisbrulenaudet
"2024-03-22T07:02:28Z"
1,466
6
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "Maths", "Mistral", "en", "base_model:mlabonne/OmniBeagle-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-05T18:00:55Z"
--- tags: - merge - mergekit - Maths - Mistral base_model: - mlabonne/OmniBeagle-7B - WizardLM/WizardMath-7B-V1.1 license: apache-2.0 language: - en library_name: transformers pipeline_tag: text-generation model-index: - name: Pearl-7B-slerp results: - task: type: text-generation metrics: - name: Average type: Average value: 72.75 - name: ARC type: ARC value: 68.00 - name: GSM8K type: GSM8K value: 73.62 - name: Winogrande type: Winogrande value: 68.00 - name: TruthfulQA type: TruthfulQA value: 62.35 - name: HellaSwag type: HellaSwag value: 87.16 source: name: Open LLM Leaderboard url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard --- <center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center> # Pearl-7B-slerp, an xtraordinary 7B model for maths **03-22-2024 - To date, louisbrulenaudet/Pearl-34B-ties is the "Best 🀝 base merges and moerges model of around 30B" on the Open LLM Leaderboard.** Pearl-7B-slerp is a merge of the following models: * [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B) * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) ### Evaluation The evaluation was performed using the HuggingFace Open LLM Leaderboard. | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | Params (B) | |-------------------------------------------|------------|-------|-----------|-------|------------|------------|-------|--------------| | **louisbrulenaudet/Pearl-7B-slerp** |**72.75** | 68.00 | 87.16 | 64.04 | 62.35 | 81.29 |**73.62**| 7.24 | | mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | 70.22 | 87.63 | 71.16 | 64.58 | 81.37 | 60.73 | 46.7 | | microsoft/phi-2 | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 | 2.78 | | microsoft/Orca-2-13b | 58.64 | 60.67 | 79.81 | 60.37 | 56.41 | 76.64 | 17.97 | 13 | | mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | 54.52 | 75.63 | 55.38 | 56.28 | 73.72 | 14.25 | 7.24 | | meta-llama/Llama-2-7b-hf | 50.97 | 53.07 | 78.59 | 46.87 | 38.76 | 74.03 | 14.48 | 6.74 | Spherical Linear Interpolation (SLERP) serves as a technique for seamlessly interpolating between two vectors while maintaining a constant rate of change and upholding the geometric properties of the spherical space in which these vectors exist. Opting for SLERP over traditional linear interpolation is motivated by various considerations. Linear interpolation in high-dimensional spaces may result in a reduction in the magnitude of the interpolated vector, diminishing the scale of weights. Additionally, in many cases, the alteration in the weights' direction conveys more meaningful information, such as feature learning and representation, compared to the magnitude of change. $$ {\displaystyle \operatorname {slerp} (p_{0},p_{1};t)={\frac {\sin {[(1-t)\Omega }]}{\sin \Omega }}p_{0}+{\frac {\sin[t\Omega ]}{\sin \Omega }}p_{1}.}$$ The implementation of SLERP involves the following steps: - Normalize the input vectors to unit length, ensuring they signify directions rather than magnitudes. - Calculate the angle between these vectors using their dot product. - If the vectors are nearly collinear, the method defaults to linear interpolation for efficiency. Otherwise, SLERP calculates scale factors based on the interpolation factor t (where t=0 corresponds to 100% of the first vector, and t=1 corresponds to 100% of the second vector) and the angle between the vectors. - Utilize these computed factors to weigh the original vectors, and then sum them to derive the interpolated vector. 
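To make these steps concrete, here is an illustrative NumPy sketch of the interpolation (a simplified stand-alone helper, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight vectors."""
    # work on unit-length copies so only direction drives the angle computation
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)

    # angle between the two directions
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    omega = np.arccos(dot)

    # nearly collinear vectors: fall back to plain linear interpolation
    if abs(np.sin(omega)) < eps:
        return (1.0 - t) * v0 + t * v1

    # SLERP scale factors applied to the original (un-normalized) vectors
    scale_0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    scale_1 = np.sin(t * omega) / np.sin(omega)
    return scale_0 * v0 + scale_1 * v1

# toy example: halfway (t=0.5) between two random "weight" vectors
a, b = np.random.randn(8), np.random.randn(8)
print(slerp(0.5, a, b))
```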
In essence, SLERP provides a robust mechanism for interpolating vectors, offering advantages in preserving directional information and mitigating issues associated with linear interpolation in high-dimensional spaces.

## Configuration

```yaml
slices:
  - sources:
      - model: mlabonne/OmniBeagle-7B
        layer_range: [0, 32]
      - model: WizardLM/WizardMath-7B-V1.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/OmniBeagle-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "louisbrulenaudet/Pearl-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## Citing & Authors

If you use this code in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
  author = {Louis Brulé Naudet},
  title = {Pearl-7B-slerp, an xtraordinary 7B model for maths},
  year = {2023},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp}},
}
```

## Feedback

If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).
guinmoon/MobileVLM-1.7B-GGUF
guinmoon
"2024-03-18T17:44:09Z"
1,466
2
null
[ "gguf", "region:us" ]
null
"2024-03-18T17:04:59Z"
Entry not found
ToastyPigeon/SmolLlama-1.5B
ToastyPigeon
"2024-03-19T23:13:48Z"
1,466
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-19T20:14:13Z"
--- base_model: [] tags: - mergekit - merge license: apache-2.0 --- # SmolLlama-1.5B Bigger than "Tiny" but still very smol. Self-stack of TinyLlama 1.1B using a SOLAR-style cut, resulting in 32 layers and 1.54B model parameters. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [0, 16] - sources: - model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T layer_range: [6, 22] merge_method: passthrough dtype: float16 ```
kuotient/Meta-Llama-3-8B
kuotient
"2024-04-18T16:47:10Z"
1,466
5
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-18T16:35:29Z"
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. 
Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- # Not include _./original_ from original repo. ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. 
<table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program. 
<table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 
</td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. 
We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). 
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan 
Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
John6666/rumblexl-v13-sdxl
John6666
"2024-05-26T12:07:07Z"
1,466
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-05-26T12:02:26Z"
---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
---
Original model is [here](https://civitai.com/models/296650?modelVersionId=400175).
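Since the repository is tagged for Diffusers' `StableDiffusionXLPipeline`, a minimal loading sketch may look like the following. This assumes standard SDXL usage via diffusers, fp16 weights, and a CUDA device; the prompt is only an example:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint in half precision and move it to the GPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/rumblexl-v13-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Example prompt and common SDXL sampling settings
image = pipe(
    "a scenic landscape, highly detailed",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("output.png")
```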
timm/tresnet_m.miil_in21k_ft_in1k
timm
"2023-04-21T20:57:59Z"
1,465
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k-p", "arxiv:2003.13630", "arxiv:2104.10972", "license:apache-2.0", "region:us" ]
image-classification
"2023-04-21T20:57:29Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k-p
---
# Model card for tresnet_m.miil_in21k_ft_in1k

A TResNet image classification model. Pretrained on ImageNet-21K-P ("ImageNet-21K Pretraining for the Masses", an 11k subset of ImageNet-22k) and fine-tuned on ImageNet-1k by paper authors.

The weights for this model have been remapped and modified from the originals to work with standard BatchNorm instead of InplaceABN. `inplace_abn` has recently been problematic to build and ends up slower with `memory_format=channels_last`, torch.compile(), etc.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 31.4
  - GMACs: 5.8
  - Activations (M): 7.3
  - Image size: 224 x 224
- **Papers:**
  - TResNet: High Performance GPU-Dedicated Architecture: https://arxiv.org/abs/2003.13630
  - ImageNet-21K Pretraining for the Masses: https://arxiv.org/abs/2104.10972
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21K-P
- **Original:**
  - https://github.com/Alibaba-MIIL/TResNet
  - https://github.com/Alibaba-MIIL/ImageNet21K

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed below for torch.topk
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tresnet_m.miil_in21k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tresnet_m.miil_in21k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 56, 56])
    #  torch.Size([1, 128, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tresnet_m.miil_in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation
```bibtex
@misc{ridnik2020tresnet,
      title={TResNet: High Performance GPU-Dedicated Architecture},
      author={Tal Ridnik and Hussam Lawen and Asaf Noy and Itamar Friedman},
      year={2020},
      eprint={2003.13630},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
```bibtex
@misc{ridnik2021imagenet21k,
      title={ImageNet-21K Pretraining for the Masses},
      author={Tal Ridnik and Emanuel Ben-Baruch and Asaf Noy and Lihi Zelnik-Manor},
      year={2021},
      eprint={2104.10972},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
augmxnt/shisa-7b-v1
augmxnt
"2023-12-20T18:11:13Z"
1,465
29
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ja", "en", "dataset:augmxnt/ultra-orca-boros-en-ja-v1", "dataset:Open-Orca/SlimOrca", "dataset:augmxnt/shisa-en-ja-dpo-v1", "arxiv:2310.05914", "arxiv:2305.18290", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-27T17:55:31Z"
--- license: apache-2.0 language: - ja - en datasets: - augmxnt/ultra-orca-boros-en-ja-v1 - Open-Orca/SlimOrca - augmxnt/shisa-en-ja-dpo-v1 --- # Shisa 7B ![Shi-chan and Sa-chan/シヌちゃんずサヌちゃん](https://huggingface.co/augmxnt/shisa-7b-v1/resolve/main/shisa.webp) **Shisa 7B** (`shisa-7b-v1`) is a bilingual Japanese and English (JA/EN) general-purpose chat model that aims to achieve strong Japanese language performance while retaining robust English capabilities, using a synthetic-data driven approach. This model is based on [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) with a custom JA-optimized extended tokenizer that is >2X more efficient in Japanese than Mistral's original tokenizer. The base model was pre-trained for an additional 8B primarily Japanese tokens. It was then subsequently fine-tuned with an expanded, machine-translated version of [airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1), a set of the highest-scoring items from [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized), and additional freshly generated [airoboros](https://github.com/jondurbin/airoboros) data directly to the target languages. We also release our base model, datasets, and pipeline code under a permissive Apache 2.0 license which can be used for any purpose, commercial or otherwise: * [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - our base model w/ an extended tokenizer and additional JA pre-training * [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - our pre-training data set * [ultra-orca-boros-en-ja](https://huggingface.co/datasets/augmxnt/ultra-orca-boros-en-ja-v1) - a synthetically generated, machine-translated, programmatically validated JA/EN fine-tuning dataset * [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - Small subset of DPO pairs from ultrafeedback, along with JA DPO pairs using GPT-4 generated items as the chosen value, and outputs from our preliminary 7b model as the rejected values * [Shisa repository](https://github.com/AUGMXNT/shisa) - this includes our translation, dataset generation, training, and evaluation code Moreover, we are in the process of publishing extended writeups and more details of our process, including ablation results, testing methodology, and key findings [on our project wiki](https://github.com/AUGMXNT/shisa/wiki) that may be of interest to fellow researchers. ## Fine-Tuning Our original intuition was to see if we could create a stronger Japanese model using the best [existing public JA training sets](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets) and incorporating them. After initial review and testing, however, we decided that focusing solely on translation/generation of our own synthetic datasets could yield superior results with less training. We compared multiple translation tools and, via manual review, judged that while `gpt-4` almost always delivered the highest quality translations, Google's `text-bison-32k` was a good balance of quality, cost and throughput. Over various iterations, we refined our translation approach to include some additional algorithms for flagging and filtering invalid translations, re-translating and backfilling as necessary. We also took this project as an opportunity to apply some newer techniques such as incorporating [NEFTune](https://arxiv.org/abs/2310.05914) and [DPO](https://arxiv.org/abs/2305.18290) training. 
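As a concrete illustration of the DPO stage (step 3 of the fine-tuning process described below), a minimal sketch with TRL's `DPOTrainer` might look like the following. This is not the project's actual training code (see the [Shisa repository](https://github.com/AUGMXNT/shisa) for that); it assumes a `prompt`/`chosen`/`rejected` column layout for the DPO dataset, a TRL version from that period, and placeholder hyperparameters:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

# Placeholder checkpoint: the real pipeline applies DPO to the stage-2 SFT model, not the base model
model_name = "augmxnt/shisa-base-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
ref_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Assumes prompt/chosen/rejected string columns in the dataset
dataset = load_dataset("augmxnt/shisa-en-ja-dpo-v1", split="train")

training_args = TrainingArguments(
    output_dir="./shisa-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,          # placeholder; DPO typically uses a very small learning rate
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,                    # strength of the preference constraint against the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```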
For our v1 release, we picked from our release candidates based on a significant amount of human preference testing (thousands of generations and multiple rounds of pairwise comparisons). We analyzed our results with both win/loss/draw and [BTL modeling](https://datascience.oneoffcoder.com/btl-model.html) (iLSR) using [choix](https://github.com/lucasmaystre/choix).

The best candidate model was fine-tuned in a 3-step process:

1. First, the model was fine-tuned on `ultra-orca-boros-en-ja` and SlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))
2. Next, we added one additional epoch performed using only a subset of the Japanese ultra-orca-boros-en-ja items to enhance JA performance (as SlimOrca from the first step is mostly EN) ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))
3. Finally, the model was tuned using a DPOTrainer on a small subset of ultrafeedback (EN) and our own JA DPO dataset, which uses gpt-4 outputs as the chosen values and outputs from stage 1's preliminary model as the rejected values ([WandB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1))

During our training process, we also gained some key insights on [why some existing Japanese models seem to underperform](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets#analysis) even versus models that have no additional JA training, and we hope that sharing this analysis will be useful to other teams developing Japanese language models.

While we need to explore this further, as an experimental validation, we applied a version of our fine-tuning set onto an existing base model ("Gamma 7B") and the initial JA MT-Bench results suggest that we can drastically increase functional performance with our tuning approach:

| Model | Score |
| ------------------------------ | ----- |
| shisa-gamma-7b-allsources-v0.4 | 5.65 |
| ja-stablelm-instruct-gamma-7b* | 4.01 |

## Performance

Throughout our training, we did extensive human evaluation for each model to cross-validate our model performance, and we are currently conducting ongoing larger-scale manual head-to-head testing between models. Our intention is to open up and scale this data collection as we further develop our tools. For more information and updates, please see our [project wiki](https://github.com/AUGMXNT/shisa/wiki).

While we believe [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) is a useful metric for our [base model](https://huggingface.co/augmxnt/shisa-base-7b-v1), and it was extremely useful during our tuning process for initial validations, our fine-tune training includes a percentage of the benchmark train splits, so we provide these llm-jp-eval results primarily as a point of interest:

| AVR | MC | NLI | QA | RC |
|-------|-------|-------|-------|-------|
| 0.7480| 0.8900| 0.8040| 0.4153| 0.8825|

*(We run a [slightly modified llm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) to support testing of Qwen and to emit a `bos_token` if available)*

For our final model, since it's customary to include benchmarks, we've used Stability AI Japan's [Japanese MT-Bench](https://github.com/Stability-AI/FastChat) as a more representative test of our model's capabilities.
For [our JA MT-Bench testing](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable) we use a Japanese prompt ("あなたは圹立぀アシスタントです。") as well as `--num-choices 4` in an effort to reduce sampling variability, however we've still observed regular 0.5+ point (and sometimes even greater swings) between generations, as well as issues with default prompts and parameters when testing, so again, we'd urge caution in over-interpreting these scores and treating them as more of a probabilistic directional indicator, rather than a definitive score or ranking: | Benchmark | Score | | ----------- | ----- | | JA MT-Bench | 5.23 | | MT-Bench | 5.71 | There is an [MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard), but as JA MT-Bench is still under development, for convenience, here is a comparison of the JA MT-Bench scores of some other models (our scores were rated by `gpt-4-0613`): | Model | Score | | ------------------------------------------------- | ---- | | gpt-4-0613 | 9.40 | | gpt-4-1106-preview | 9.17 | | gpt-3.5-turbo* | 8.41 | | Qwen-14B-Chat | 7.47 | | **shisa-7b-v1** | **5.23** | | ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 | | ja-stablelm-instruct-gamma-7b* | 4.01 | | japanese-stablelm-instruct-alpha-7b* | 2.74 | | Mistral-7B-OpenOrca-ja* | 2.23 | | youri-7b-chat* | 2.00 | | Mistral-7B-Instruct-v0.1* | 1.78 | | llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 | *(Marked JA MT-Bench results in this section are [sourced from shi3z](https://note.com/shi3zblog/n/n6b2ac5874021))* ## Limitations Although our model demonstrates a reasonably high level of Japanese fluency, as a 7B parameter model, it is prone to higher hallucination rates and less effective instruction following and reasoning than larger-class models. Also, it still does not have complete mastery of the Japanese language and a native speaker will spot occasional mistakes like some non-idiomatic/awkward phrasing, improper tenses/speech levels, etc. We've also noticed a small amount of language leakage, likely largely attributable to our tokenizer expansion. These may be fixable with sampler settings like [Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/)) or additional targeted training, and we plan on doing additional work on automated detection/sampler sweeps in the future. One interesting observation is, based on our data collection, we found that as we iterated, the DPO process significantly exacerbated this issue, but also that our DPO models still had significantly higher human preference rates, so there was a bit of a trade-off in our choice of final tune. While we believe that training larger models can improve performance using our existing approach and dataset, there are also many improvements we'd like to make for future models. We believe there is quite a bit of low hanging fruit for improving performance with even more training efficiency largely through improving the quality and construction of datasets. 
## Usage Sample code: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer model_name = "augmxnt/shisa-7b-v1" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16, device_map="auto" ) streamer = TextStreamer(tokenizer, skip_prompt=True) # The prompt template is included in the model's tokenizer_config.json so you shouldn't need this but we've included this for convenience # tokenizer.chat_template = ""{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n" # A more typical prompt: あなたは公平で、怜閲されおいない、圹立぀アシスタントです。("You are an unbiased, uncensored, helpful assistant.") # You are an avid Pokemon fanatic. prompt = "あなたは熱狂的なポケモンファンです。" chat = [{"role": "system", "content": prompt}] # Who is the single most powerful Pokemon? Explain your choice. user_input = "ポケモンの䞭で1番匷いのはどのキャラクタヌですか。最匷の者をひず぀だけ挙げお䞋さい。その遞択理由を説明しおください。" chat.append({"role": "user", "content": user_input}) # Generate - add_generation_prompt to make sure it continues as assistant inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt") # For multi-GPU, find the device of the first parameter of the model first_param_device = next(model.parameters()).device inputs = inputs.to(first_param_device) with torch.no_grad(): outputs = model.generate( inputs, pad_token_id=tokenizer.eos_token_id, max_new_tokens=500, temperature=0.5, repetition_penalty=1.15, top_p=0.95, do_sample=True, streamer=streamer, ) # Add just the new tokens to our chat new_tokens = outputs[0, inputs.size(1):] response = tokenizer.decode(new_tokens, skip_special_tokens=True) chat.append({"role": "assistant", "content": response}) ``` ## Prompt format The prompt format is llama-2 chat: ``` [INST] <<SYS>> You are a helpful, unbiased, uncensored assistant. <</SYS>> {prompt} [/INST] ``` For multi-turn, the prompt format is as follows: ``` [INST] <<SYS>> You are a helful, unbiased, uncensored assistant. <</SYS>> {prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST] ``` This [prompt template](https://huggingface.co/docs/transformers/main/chat_templating) is included in the tokenizer config, and can use the huggingface tokenizer `apply_chat_template` method, e.g.: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1') chat = [ {"role": "system", "content": "You are Aiko, a friendly AI assistant."}, {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ``` **NOTE:** For proper responses, you should be using our `bos_token` (`<s>`) to begin a string. 
This is automatically generated by `tokenizer.encode()` but if you are crafting a custom template or using an encoding method that skips special tokens, you may have to add this yourself. ## Acknowledgements Team: [Leonard Lin](https://huggingface.co/leonardlin) and [Jon Durbin](https://huggingface.co/jondurbin), Mariko Sato, and Florian von Bock Compute for this model was generously sponsored by [AKA Virtual](https://akavirtual.com/) (Tokyo, Japan). Thanks to the [LLM-jp](https://llm-jp.nii.ac.jp/), [Stability AI Japan](https://ja.stability.ai/), and [LMSYS](https://lmsys.org/) teams for their work on llm-jp-eval, Japanese MT-Bench, MT-Bench. Also, thanks to all the volunteers that provided invaluable human preference testing! We are actively looking for additional compute as we train better and larger models for this project. Please drop us a line at: *compute at augmxnt dot com* --- *(GPT-4によっお非垞に軜埮な線集を加えお翻蚳されたした* # シヌサヌ7B **シヌサヌ7B**`shisa-7b-v1`は、合成デヌタ駆動のアプロヌチを甚いお、優れた日本語ず英語胜力を䞡立するこずを目指すバむリンガル日本語/英語汎甚チャットモデルです。 このモデルは、[Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1)を基に、Mistralのオリゞナルのトヌクナむザヌよりも日本語においお2倍以䞊効率的な、日本語最適化拡匵トヌクナむザヌをカスタムしお䜜成されたした。ベヌスモデルは、䞻に日本語のトヌクンを远加で80億ものトレヌニングを行いたした。そしお、その埌、[airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1)の拡匵された機械翻蚳版、[ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)からの最高埗点項目のセット、そしお新たに生成された[airoboros](https://github.com/jondurbin/airoboros)のデヌタを盎接目暙蚀語で埮調敎しおいたす。 商甚を含むあらゆる目的で䜿甚可胜な寛容なApache 2.0ラむセンスの䞋で、ベヌスモデル、デヌタセット、およびパむプラむンコヌドも公開しおいたす * [shisa-base-7b-v1](https://huggingface.co/augmxnt/shisa-base-7b-v1) - 拡匵トヌクナむザヌず远加の日本語プレトレヌニングを備えた圓方のベヌスモデル * [shisa-pretrain-en-ja-v1](https://huggingface.co/datasets/augmxnt/shisa-pretrain-en-ja-v1) - 圓方のプレトレヌニングデヌタセット * [ultra-orca-boros-en-ja](https://huggingface.co/datasets/jondurbin/ultra-orca-boros-en-ja) - 合成生成、機械翻蚳、プログラムによる怜蚌によるJA/EN埮調敎デヌタセット * [shisa-en-ja-dpo-v1](https://huggingface.co/datasets/augmxnt/shisa-en-ja-dpo-v1) - ultrafeedbackからのDPOペアの小さなサブセットず、遞択された倀ずしおGPT-4生成項目を䜿甚した日本語のDPOペア、そしお初期の7ビリオンモデルの出力を华䞋した倀 * [シヌサヌリポゞトリ](https://github.com/AUGMXNT/shisa) - 翻蚳、デヌタセットの生成、トレヌニング、評䟡コヌドなどが含たれおいたす さらに、アブレヌション結果、テスト方法論、䞻芁な調査結果など、プロセスの詳现や拡匵ラむトアップを公開する過皋にありたす。これは[圓プロゞェクトwiki](https://github.com/AUGMXNT/shisa/wiki)で研究者に興味深い情報ずしお提䟛されおいたす。 ## 埮調敎 最初の盎感は、最良の[既存の公開日本語トレヌニングセット](https://github.com/AUGMXNT/shisa/wiki/A-Review-of-Public-Japanese-Training-Sets)を䜿甚しお、それらを組み入れるこずでより匷力な日本語モデルを䜜成できるかどうかを芋るこずでした。しかし、初期の怜蚎ずテストの埌、自らの合成デヌタセットの翻蚳/生成にだけ焊点を圓おるこずで、短期間のトレヌニングで優れた結果を埗るこずができるず結論付けたした。 私たちは耇数の翻蚳ツヌルを比范し、手動でレビュヌを行った結果、`gpt-4`がほが垞に最高品質の翻蚳を提䟛しながら、Googleの `text-bison-32k`は品質、コスト、スルヌプットのバランスが良いず刀断したした。耇数の繰り返しを経お、無効な翻蚳のフラグ付けずフィルタリング、必芁に応じた再翻蚳ずバックフィルのための远加のアルゎリズムを含むように、翻蚳アプロヌチを掗緎させたした。 たた、このプロゞェクトを[NEFTune](https://arxiv.org/abs/2310.05914)ず[DPO](https://arxiv.org/abs/2305.18290)トレヌニングを取り入れるなど、新しい技術を適甚する機䌚ずもなりたした。 v1リリヌスのために、私たちは倧量の人間の嗜奜テスト数千の生成ず耇数ラりンドのペアワむズ比范に基づいおリリヌス候補から遞択したした。私たちは、勝ち/負け/匕き分けず、[BTLモデル](https://datascience.oneoffcoder.com/btl-model.html)iLSRを䜿甚しお[choix](https://github.com/lucasmaystre/choix)で結果を分析したした。 最良の候補モデルは、3ステップのプロセスで埮調敎されたした 1. 最初に、モデルは`ultra-orca-boros-en-ja`ずSlimOrca ([WandB Log](https://wandb.ai/jondurbin/shisa-7b-v1/runs/k8pfog9d/overview))で埮調敎されたした。 2. 次に、日本語のパフォヌマンスを向䞊させるためにultra-orca-boros-en-jaの䞀郚を䜿甚しお1回远加の゚ポックを远加したした最初の段階のSlimOrcaは䞻に英語([WandB Log](https://wandb.ai/jondurbin/shisa-mega-7b-v1.1/runs/dopsr0o7/overview))。 3. 
最埌に、モデルは小芏暡のultrafeedback英語ず自身のJA DPOデヌタセットに察しおDPOTrainerを䜿甚しお調敎されたした。ここで䜿甚したJA DPOデヌタセットはgpt-4の出力を遞出された倀ずし、ステヌゞ1の予備モデルの出力を华䞋した倀ずしたす。([WandDB Log](https://wandb.ai/jondurbin/shisa-mega-dpo-7b-v1.1) ) 私たちのトレヌニングプロセス䞭に、䜕故䞀郚の既存の日本語モデルが、远加の日本語トレヌニングがないモデルに察しおもパフォヌマンスが䜎いのか、ずいういく぀かの重芁な掞察を埗るこずができたした。この分析結果を共有すれば、他のチヌムが日本語モデルを開発する際の参考になるず思いたす。 さらに探求する必芁はありたすが、実隓的な怜蚌ずしお、埮調敎セットのバヌゞョンを既存のベヌスモデル"Gamma 7B"に適甚し、初期のJA MT-Bench結果が瀺すように、私たちのチュヌニングアプロヌチで機胜性のパフォヌマンスを劇的に向䞊させるこずができたした | モデル | スコア | | ------------------------------ | ----- | | shisa-gamma-7b-allsources-v0.4 | 5.65 | | ja-stablelm-instruct-gamma-7b* | 4.01 | ## パフォヌマンス トレヌニング党䜓を通じお、各モデルに぀いお人間による評䟡を行い、モデルのパフォヌマンスを盞互に怜蚌したした。珟圚、モデル間の手動での比范テストを倧芏暡に行っおいたす。私たちの目指すずころは、ツヌルをさらに発展させるこずでこのデヌタ収集を公開しお拡匵するこずです。詳现ず曎新情報に぀いおは、[プロゞェクトwiki](https://github.com/AUGMXNT/shisa/wiki) をご芧ください。 我々は、[llm-jp-eval](https://github.com/llm-jp/llm-jp-eval)は、私たちの[基本モデル](https://huggingface.co/augmxnt/shisa-base-7b-v1)の有甚な指暙であり、初期の怜蚌のための埮調敎プロセス䞭に非垞に圹立぀ず考えおいたすが、埮調敎トレヌニングにはベンチマヌクのトレむン分割の䞀郚が含たれおいるため、私たちが提䟛するllm-jp-evalの結果は䞻に興味深いポむントずしお提䟛しおいたす | AVR | MC | NLI | QA | RC | |-------|-------|-------|-------|-------| | 0.7480| 0.8900| 0.8040| 0.4153| 0.8825| *(Qwenのテストをサポヌトし、可胜であれば`bos_token`を発行するために、[わずかに修正したllm-jp-eval](https://github.com/llm-jp/llm-jp-eval/compare/main...AUGMXNT:llm-jp-eval:main) を実行しおいたす)* 最終モデルに぀いおは、ベンチマヌクを含めるのが䞀般的なため、私たちのモデルの胜力をより代衚的にテストするために、Stability AI Japanの[Japanese MT-Bench](https://github.com/Stability-AI/FastChat)を䜿甚したした。[私たちのJA MT-Bench テスト](https://github.com/Stability-AI/FastChat/compare/jp-stable...AUGMXNT:FastChat:jp-stable)では、サンプリング倉動を枛らすために、日本語のプロンプト"あなたは圹立぀アシスタントです。"ず `--num-choices 4`を䜿甚しおいたすが、生成間で0.5+点時にはそれ以䞊の倉動を頻繁に芳察し、テスト時のデフォルトのプロンプトずパラメヌタに問題があったずいう経隓から、これらのスコアを過床に解釈するこずには泚意が必芁で、これらを確定的なスコアやランキングではなく、より確率的な方向指暙ずしお扱うこずをお勧めしたす | ベンチマヌク | スコア | | ----------- | ----- | | JA MT-Bench | 5.23 | | MT-Bench | 5.71 | [MT-Bench Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)がありたすが、JA MT-Benchはただ開発䞭であるため、䟿宜䞊、他のモデルのJA MT-Benchスコアずの比范を瀺したす私たちのスコアは`gpt-4-0613`によっお評䟡されたした | モデル | スコア | | ------------------------------------------------- | ---- | | gpt-4-0613 | 9.40 | | gpt-4-1106-preview | 9.17 | | gpt-3.5-turbo* | 8.41 | | Qwen-14B-Chat | 7.47 | | **shisa-7b-v1** | **5.23** | | ELYZA-japanese-Llama-2-7b-fast-instruct* | 4.86 | | ja-stablelm-instruct-gamma-7b* | 4.01 | | japanese-stablelm-instruct-alpha-7b* | 2.74 | | Mistral-7B-OpenOrca-ja* | 2.23 | | youri-7b-chat* | 2.00 | | Mistral-7B-Instruct-v0.1* | 1.78 | | llm-jp-13b-instruct-full-jaster-dolly-oasst-v1.0* | 1.31 | *(このセクションでマヌクされたJA MT-Benchの結果は[shi3zから匕甚](https://note.com/shi3zblog/n/n6b2ac5874021)したした)* ## 制限事項 圓モデルは十分な日本語の流暢さを瀺しおいたすが、7Bパラメヌタのモデルずしおは、より倧きなクラスのモデルに比べお幻芚率が高く、指瀺の远跡や掚論が効果的でない傟向がありたす。たた、日本語の完党な習埗はただ達しおおらず、ネむティブスピヌカヌはたたに非慣甚的/違和感のある衚珟や䞍適切な時制/話し蚀葉のレベルなどの間違いを芋぀けるこずがありたす。 たた、私たちのトヌクナむザヌの拡匵に倧いに起因する可胜性が高いが、わずかな蚀語リヌクを確認しおいたす。これらは[Min P](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/)などのサンプラヌ蚭定や远加のタヌゲット指向型トレヌニングで修正可胜な可胜性があり、今埌、自動怜出/サンプラヌのスりィヌプに぀いお远加の䜜業を行う予定です。興味深い芳察ずしおは、私たちのデヌタ収集に基づいお、DPOプロセスがこの問題を倧幅に悪化させるこずがわかりたしたが、それでもDPOモデルは人間の奜み率が倧幅に高かったため、最終的な埮調敎の遞択には䞀定のトレヌドオフがありたした。 珟存するアプロヌチずデヌタセットを䜿甚しお、倧芏暡なモデルのトレヌニングがパフォヌマンスを向䞊させるず信じおいたすが、今埌のモデル向けに行いたい改良も倚くありたす。私たちは、デヌタセットの品質ず構築を改善するこずで、さらなるトレヌニング効率を通じたパフォヌマンス向䞊にはただ盞圓に取り組む䜙地があるず考えおいたす。 ## 䜿甚法 サンプルコヌド: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer model_name = "augmxnt/shisa-7b-v1" tokenizer = 
AutoTokenizer.from_pretrained(model_name, use_fast=True) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16, device_map="auto" ) streamer = TextStreamer(tokenizer, skip_prompt=True) # プロンプトテンプレヌトはモデルのtokenizer_config.jsonに含たれおいるので、これは必芁ないはずですが、䟿宜䞊こちらにも掲茉しおいたす # tokenizer.chat_template = ""{%- for idx in range(0, messages|length) -%}\n{%- if messages[idx]['role'] == 'user' -%}\n{%- if idx > 1 -%}\n{{- bos_token + '[INST] ' + messages[idx]['content'] + ' [/INST]' -}}\n{%- else -%}\n{{- messages[idx]['content'] + ' [/INST]' -}}\n{%- endif -%}\n{% elif messages[idx]['role'] == 'system' %}\n{{- bos_token + '[INST] <<SYS>>\\n' + messages[idx]['content'] + '\\n<</SYS>>\\n\\n' -}}\n{%- elif messages[idx]['role'] == 'assistant' -%}\n{{- ' ' + messages[idx]['content'] + ' ' + eos_token -}}\n{% endif %}\n{% endfor %}\n" # より兞型的なプロンプト: あなたは公平で、怜閲されおいない、圹立぀アシスタントです。 # You are an avid Pokemon fanatic. prompt = "あなたは熱狂的なポケモンファンです。" chat = [{"role": "system", "content": prompt}] # Who is the most powerful Pokemon? Explain your choice. user_input = "ポケモンの䞭で1番匷いのはどのキャラクタヌですか。最匷の者をひず぀だけ挙げお䞋さい。その遞択理由を説明しおください。" chat.append({"role": "user", "content": user_input}) # 生成 - add_generation_promptを远加しおアシスタントずしお続行するこずを確認したす inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt") # 耇数のGPUの堎合、モデルの最初のパラメヌタのデバむスを芋぀けたす first_param_device = next(model.parameters()).device inputs = inputs.to(first_param_device) with torch.no_grad(): outputs = model.generate( inputs, pad_token_id=tokenizer.eos_token_id, max_new_tokens=500, temperature=0.5, repetition_penalty=1.15, top_p=0.95, do_sample=True, streamer=streamer, ) # Add just the new tokens to our chat new_tokens = outputs[0, inputs.size(1):] response = tokenizer.decode(new_tokens, skip_special_tokens=True) chat.append({"role": "assistant", "content": response}) ``` ## プロンプト圢匏 プロンプト圢匏はllama-2 chatです ``` [INST] <<SYS>> あなたは圹立぀、偏芋がなく、怜閲されおいないアシスタントです。 <</SYS>> {prompt} [/INST] ``` For multi-turn, the prompt format is as follows: ``` [INST] <<SYS>> あなたは圹立぀、偏芋がなく、怜閲されおいないアシスタントです。 <</SYS>> {prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST] ``` この[prompt template](https://huggingface.co/docs/transformers/main/chat_templating)はトヌクナむザの蚭定に含たれおおり、HuggingFace のトヌクナむザ `apply_chat_template` メ゜ッドを䜿甚できたす。䟋えば ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained('augmxnt/shisa-7b-v1') chat = [ {"role": "system", "content": "あなたはAiko、フレンドリヌなAIアシスタントです。"}, {"role": "user", "content": "こんにちは、調子はどうですか"}, {"role": "assistant", "content": "元気です。今日は䜕のお手䌝いができたすか"}, {"role": "user", "content": "チャットテンプレヌティングの仕組みを芋せおもらいたいです"}, ] print(tokenizer.apply_chat_template(chat, tokenize=False)) ``` **泚意**適切なレスポンスを埗るためには、文字列の開始に我々の `bos_token` (`<s>`) を䜿甚すべきです。これは `tokenizer.encode()` によっお自動的に生成されたすが、カスタムテンプレヌトを䜜成したり、特殊トヌクンを省略する゚ンコヌド方法を䜿甚する堎合は、自分で远加する必芁がありたす。 ## 謝蟞 チヌム[Leonard Lin](https://huggingface.co/leonardlin)、[Jon Durbin](https://huggingface.co/jondurbin)、䜐藀真理子、Florian von Bock このモデルの蚈算は、[AKA Virtual](https://akavirtual.com/) (東京、日本) のご厚意により提䟛されおいたす。 [LLM-jp](https://llm-jp.nii.ac.jp/)、[Stability AI Japan](https://ja.stability.ai/)、[LMSYS](https://lmsys.org/)のチヌムが、llm-jp-eval, Japanese MT-Bench, MT-Benchに取り組んでくれお感謝しおいたす。 たた、貎重なヒュヌマンプリファレンステストを提䟛しおくださったすべおのボランティアにも感謝いたしたす このプロゞェクトのためにより良く、より倧きなモデルを蚓緎するために、远加の蚈算を積極的に探しおいたす。お問い合わせは次の宛先たでお願いいたしたす*compute at augmxnt dot com*
Chrisisis/5GgmWdgTH79hs5qE5sEUvGJbtQv4jiiyKjLbiZBuLQNQHjAr_vgg
Chrisisis
"2024-02-24T08:31:37Z"
1,465
0
keras
[ "keras", "region:us" ]
null
"2024-02-11T17:28:46Z"
Entry not found
NbAiLab/nb-whisper-base
NbAiLab
"2024-02-13T12:29:46Z"
1,465
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "onnx", "safetensors", "whisper", "automatic-speech-recognition", "audio", "asr", "hf-asr-leaderboard", "no", "nb", "nn", "en", "dataset:NbAiLab/ncc_speech", "dataset:NbAiLab/NST", "dataset:NbAiLab/NPSC", "arxiv:2212.04356", "base_model:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2024-02-13T10:07:48Z"
--- license: apache-2.0 language: - 'no' - nb - nn - en datasets: - NbAiLab/ncc_speech - NbAiLab/NST - NbAiLab/NPSC base_model: openai/whisper-base tags: - audio - asr - automatic-speech-recognition - hf-asr-leaderboard metrics: - wer - cer library_name: transformers pipeline_tag: automatic-speech-recognition widget: - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3 example_title: FLEURS sample 1 - src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3 example_title: FLEURS sample 2 --- # NB-Whisper Base Introducing the **_Norwegian NB-Whisper Base model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article. | Model Size | Parameters | Model | |------------|------------|------------| | Tiny | 39M | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) | | Base | 74M | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) | | Small | 244M | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) | | Medium | 769M | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) | | Large | 1550M | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) | ### Verbatim Model While the main models are suitable for most transcription task, we demonstrate how easy it is to change the output of the main model. The following models are trained 250 additional steps from the main models above, and might be suitable for more targetted use cases: - **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis. | Model Size | Parameters | Semantic version | |------------|------------|------------------| | Tiny | 39M | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) | | Base | 74M | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) | | Small | 244M | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) | | Medium | 769M | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) | | Large | 1550M | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) | ### Model Description - **Developed by:** [NB AI-Lab](https://ai.nb.no/) - **Shared by:** [NB AI-Lab](https://ai.nb.no/) - **Model type:** `whisper` - **Language(s) (NLP):** Norwegian, Norwegian BokmÃ¥l, Norwegian Nynorsk, English - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) - **Trained from model:** [openai/whisper-base](https://huggingface.co/openai/whisper-base) - **Code Repository:** https://github.com/NbAiLab/nb-whisper/ - **Paper:** _Coming soon_ - **Demo:** _See Spaces on this page_ ## How to Use the Models ### Online Demos You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. 
Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).

### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).

```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3

# Install necessary libraries.
$ pip install transformers>=4.35.2
```

After this is done, you should be able to run this in Python:

```python
from transformers import pipeline

# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-base")

# Transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```

<details>
<summary>Expected output</summary>

```json
{
  {'text': ' Nordmenn er nordlendinger, trÞndere, sÞrlendinger og folk fra alle andre regioner. Nordmenn er ogsÃ¥ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sÃ¥ lett Ã¥ si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
}
```
</details>

#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the audio is longer than 30 seconds. By passing the `chunk_length_s` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible. This greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.

```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})

# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})

# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})

# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})

# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})

# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```

<details>
<summary>Expected output</summary>

Long transcripts:

```json
{
  {'text': ' Nordmenn er nordlendinger, trÞndere, sÞrlendinger og folk fra alle andre regioner.
Nordmenn er ogsÃ¥ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sÃ¥ lett Ã¥ si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhÞrer. Det vi kaller hjem, er der hjertet vÃ¥rt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer pÃ¥ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt stÞrste hÃ¥p for Norge er at vi skal klare Ã¥ ta vare pÃ¥ hverandre, at vi skal bygge dette landet videre pÃ¥ tillit, fellesskap og raushet.'} } ``` Timestamps: ```json { {'text': ' Nordmenn er nordlendinger, trÞndere, sÞrlendinger og folk fra alle andre regioner. Nordmenn er ogsÃ¥ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sÃ¥ lett Ã¥ si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhÞrer. Det vi kaller hjem, er der hjertet vÃ¥rt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer pÃ¥ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt stÞrste hÃ¥p for Norge er at vi skal klare Ã¥ ta vare pÃ¥ hverandre, at vi skal bygge dette landet videre pÃ¥ tillit, fellesskap og raushet.', 'chunks': [{'timestamp': (0.0, 5.46), 'text': ' Nordmenn er nordlendinger, trÞndere, sÞrlendinger'}, {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'}, {'timestamp': (8.68, 16.64), 'text': ' Nordmenn er ogsÃ¥ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'}, {'timestamp': (16.64, 13.3), 'text': ' Det er ikke alltid sÃ¥ lett Ã¥ si hvor vi er fra, hvilken nasjonalitet vi er fra.'}, {'timestamp': (13.32, 30.28), 'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhÞrer.'}, {'timestamp': (32.52, 39.16), 'text': ' Det vi kaller hjem, er der hjertet vÃ¥rt er, og det kan ikke alltid plasseres'}, {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'}, {'timestamp': (42.0, 46.74), 'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'}, {'timestamp': (46.74, 51.12), 'text': ' og jenter og gutter som er glad i hverandre.'}, {'timestamp': (51.16, 57.42), 'text': ' Nordmenn trommer pÃ¥ Gud, Allah, Altet og ingenting.'}, {'timestamp': (57.42, 64.3), 'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'}, {'timestamp': (64.34, 71.24), 'text': ' Med andre ord, Norge er dere. Norge er oss.'}, {'timestamp': (71.24, 78.04), 'text': ' Mitt stÞrste hÃ¥p for Norge er at vi skal klare Ã¥ ta vare pÃ¥ hverandre,'}, {'timestamp': (78.12, 84.68), 'text': ' at vi skal bygge dette landet videre pÃ¥ tillit, fellesskap og raushet.'}]} } ``` Word Level Timestamps: ```json { {"text": "Nordmenn er nordlendinger, trÞndere, sÞrlendinger og folk fra alle andre regioner. Nordmenn er ogsÃ¥ innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid sÃ¥ lett Ã¥ si hvor vi er fra, hvilken nasjonalitet vi tilhÞrer. 
Det vi kaller hjem, er der hjertet vÃ¥rt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer pÃ¥ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt stÞrste hÃ¥p for Norge er at vi skal klare Ã¥ ta vare pÃ¥ hverandre, at vi skal bygge dette landet videre pÃ¥ tillit, fellesskap og raushet.", "chunks": [ {"text": "Nordmenn", "timestamp": [0.72, 1.42]}, {"text": "er", "timestamp": [1.42, 1.74]}, // ... more chunks ... {"text": "raushet.", "timestamp": [83.1, 84.88]} ] } } ``` Nynorsk: ```json { {"text": "Nordmenn er nordlendingar, trÞndarar, sÞrlendingar og folk frÃ¥ alle andre regionar. Nordmenn er ogsÃ¥ innvandra frÃ¥ Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid sÃ¥ lett Ã¥ seie kvar vi er frÃ¥, kva nasjonalitet vi tilhÞyrer. Det vi kallar heim, er der hjartet vÃ¥rt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar pÃ¥ Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt stÞrste hÃ¥p for Noreg er at vi skal klare Ã¥ ta vare pÃ¥ kvarandre, at vi skal byggje dette landet vidare pÃ¥ tillit, fellesskap og raushet."} } ``` English: ```json { {"text": "Norwegians are Norwegians, trÞnders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."} } ``` </details> ### Whisper CPP Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. We have converted this model to the ggml-format model used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin). 
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make

# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav

# Let's download the two ggml files from this site
wget -N https://huggingface.co/NbAiLab/nb-whisper-base/resolve/main/ggml-model.bin -O models/nb-base-ggml-model.bin
wget -N https://huggingface.co/NbAiLab/nb-whisper-base/resolve/main/ggml-model-q5_0.bin -O models/nb-base-ggml-model-q5_0.bin

# And run it with the f16 default model
$ ./main -l no -m models/nb-base-ggml-model.bin king.wav

# Or the quantized version
$ ./main -l no -m models/nb-base-ggml-model-q5_0.bin king.wav
```

### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls.

We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict licence that requires you to agree to its user terms. Follow the instructions below.

```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms

# Log in (or supply HF Token in command line)
huggingface-cli login

# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3

# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540

# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-base --language no --diarize
```

You can also run WhisperX from Python. Please take a look at the instructions on the [WhisperX homepage](https://github.com/m-bain/whisperX).

### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.

## Training Data
The training data originates from SprÃ¥kbanken and the National Library of Norway's digital collection, including:

- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by SprÃ¥kbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)

## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word translations. We have made two extra model variants for users who want a different transcription style; a short example of loading one of these variants is shown below. We encourage users to try the models themselves to get a better understanding.
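As a quick illustration, the variant models are drop-in replacements in the same `pipeline` call used earlier in this card. The snippet below is a minimal sketch, assuming the `NbAiLab/nb-whisper-base-semantic` repository name listed in the variant table at the top of this card and the same `king.mp3` sample file; it is an illustrative example rather than an officially maintained one.

```python
from transformers import pipeline

# Minimal sketch: load the semantic/verbatim-style variant from the table above
# (repo name assumed from the variant table in this card)
asr = pipeline("automatic-speech-recognition", "NbAiLab/nb-whisper-base-semantic")

# Transcribe the same sample file with the recommended long-form settings
result = asr(
    "king.mp3",
    chunk_length_s=28,
    return_timestamps=True,
    generate_kwargs={"task": "transcribe", "language": "no"},
)
print(result["text"])
```

Because the variants are trained for only a small number of additional steps on top of the main models, the options shown earlier (`chunk_length_s`, beam search, timestamps, Nynorsk or English output) apply unchanged.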
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.

### Software
The model was trained using JAX/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats.

All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).

## Citation & Contributors
The NB-Whisper Base model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.

## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may exhibit bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (The National Library of Norway) be liable for any results arising from the use made by third parties of these models.

## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at SprÃ¥kbanken for the collaboration on the Stortinget corpus.

## Contact
For feedback, technical concerns, or collaboration inquiries, please contact [email protected]. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
lmstudio-community/Qwen1.5-32B-Chat-GGUF
lmstudio-community
"2024-04-24T23:01:41Z"
1,465
13
null
[ "gguf", "chat", "text-generation", "en", "license:other", "region:us" ]
text-generation
"2024-04-06T03:24:00Z"
--- license: other license_name: tongyi-qianwen license_link: >- https://huggingface.co/Qwen/Qwen1.5-32B-Chat/blob/main/LICENSE language: - en pipeline_tag: text-generation tags: - chat quantized_by: bartowski lm_studio: param_count: 32b use_case: chat release_date: 03-04-2024 model_creator: Qwen prompt_template: ChatML system_prompt: You are a helpful assistant base_model: Qwen2 original_repo: Qwen/Qwen1.5-32B-Chat --- ## 💫 Community Model> Qwen 1.5 32B Chat By Qwen *👟 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Qwen](https://huggingface.co/Qwen)<br> **Original model**: [Qwen1.5-32B-Chat](https://huggingface.co/Qwen/Qwen1.5-32B-Chat)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2589](https://github.com/ggerganov/llama.cpp/releases/tag/b2589)<br> ## Model Summary: After a hugely successful 72B model, Qwen has released this brand new 32B model. This model has amazing chat capabilties, multilingual for both conversation and coding, and decent context lengths.<br> Check out more details from their blog post [here](http://qwenlm.github.io/blog/qwen1.5-32b/). ## Prompt template: Choose the `ChatML` preset in your LM Studio. Under the hood, the model will see a prompt that's formatted like so: ``` <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Use case and examples Qwen2 is great at making conversations in multiple languages, and also has decent coding capabilities. ### Creativity ``` <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user Give me a short introduction to large language model.<|im_end|> <|im_start|>assistant Large language models are artificial intelligence systems designed to process and generate human-like language at scale. These models are typically built using deep learning techniques, specifically neural networks, and are trained on massive amounts of text data from the internet, books, and other sources. They are capable of understanding context, generating coherent responses, and even performing complex language-based tasks such as translation, summarization, and question-answering. 
Large language models like GPT-3, T5, and BERT have revolutionized the field of natural language processing (NLP) by achieving state-of-the-art performance in various tasks and enabling more human-like interactions with machines.<|im_end|> ``` ### Coding ``` <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user Implement quicksort using C++<|im_end|> <|im_start|>assistant Sure, here's an implementation of the quicksort algorithm in C++: #include <iostream> using namespace std; // Function to swap two elements void swap(int* a, int* b) { int t = *a; *a = *b; *b = t; } // Function to partition the array int partition (int arr[], int low, int high) { int pivot = arr[high]; // pivot int i = (low - 1); // Index of smaller element for (int j = low; j <= high- 1; j++) { // If current element is smaller than the pivot if (arr[j] < pivot) { i++; // increment index of smaller element swap(&arr[i], &arr[j]); } } swap(&arr[i + 1], &arr[high]); return (i + 1); } // Function to perform quicksort void quickSort(int arr[], int low, int high) { if (low < high) { // pi is partitioning index, arr[p] is now // at right place int pi = partition(arr, low, high); // Separately sort elements before // partition and after partition quickSort(arr, low, pi - 1); quickSort(arr, pi + 1, high); } } // Function to print an array void printArray(int arr[], int size) { int i; for (i=0; i < size; i++) cout << arr[i] << " "; cout << endl; } // Driver program to test above functions int main() { int arr[] = {10, 7, 8, 9, 1, 5}; int n = sizeof(arr)/sizeof(arr[0]); quickSort(arr, 0, n-1); cout << "Sorted array: \n"; printArray(arr, n); return 0; } In this implementation, the `quickSort` function recursively calls itself to sort the elements before and after the pivot element, which is determined using the `partition` function. The `swap` function is used to swap elements in the array. The `printArray` function is used to print the sorted array.<|im_end|> ``` ### Conversational ``` <|im_start|>system You are a helpful assistant<|im_end|> <|im_start|>user What do the vacuum tubes in an amplifier do?<|im_end|> <|im_start|>assistant Vacuum tubes in an amplifier serve as electronic switches and amplifiers that can control the flow of electrical current in a circuit. They play a crucial role in amplifying the input signal, whether it's a weak audio signal or something else, to produce a louder or stronger output signal. Here's a detailed explanation of what vacuum tubes do in an amplifier: 1. **Electron Emission**: In a vacuum tube, an electrode called the cathode is heated, usually by an electrical current. This heating causes the cathode to emit electrons into the vacuum space inside the tube. The process is called thermionic emission. 2. **Controlled Current Flow**: The emitted electrons are attracted to another electrode called the anode, which is at a higher voltage than the cathode. A third electrode, called the grid, is placed between the cathode and the anode. By varying the voltage on the grid relative to the cathode, the flow of electrons from the cathode to the anode can be controlled. If the grid voltage is more negative than the cathode, fewer electrons will reach the anode, and vice versa. This control over electron flow is the basis for amplification. 3. **Amplification**: When an input signal (the weak signal to be amplified) is applied to the grid, it modulates the flow of electrons from the cathode to the anode. 
As the number of electrons passing through the tube changes in response to the input signal, the current flowing through the anode also changes. This change in current is then amplified and can be used to drive a speaker or other output device. 4. **Linear Amplification**: Vacuum tubes are particularly good at providing linear amplification, meaning that the output signal is proportional to the input signal. This is important for maintaining the integrity of the original signal, especially in audio amplifiers where distortion should be minimized. 5. **Overdrive and Distortion**: If the input signal becomes too strong, the vacuum tube can be driven into "overdrive," where the control of the grid over the electron flow breaks down, and the output signal becomes distorted. This effect is often used in guitar amplifiers to create a "crunchy" or "fuzzy" sound. In summary, vacuum tubes in an amplifier enable the controlled manipulation of electrical current, allowing for the amplification of signals with minimal distortion. They have been a key component in electronic amplification for decades, although solid-state devices like transistors have largely replaced them in modern electronics due to their lower power consumption, smaller size, and higher reliability.<|im_end|> ``` ## Technical Details Qwen 1.5 32B chat is trained on a diverse dataset and then was tuned with Supervised Fine Tuning (SFT) and Direct Preference Optimization (DPO) for its excellent chat capabilities. Trained on multiple languages, the model has shown excellent performance in the following languages in addition to its great English performance: - Arabic - Spanish - French - Portuguese - German - Italian - Russian - Japanese - Korean - Vietnamese - Thai - Indonesian Supports 32k context for very long conversations. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.