| Column | Type | Range / Values |
| --- | --- | --- |
| modelId | string | lengths 5-122 |
| author | string | lengths 2-42 |
| last_modified | unknown | |
| downloads | int64 | 0-738M |
| likes | int64 | 0-11k |
| library_name | stringclasses | 245 values |
| tags | sequence | lengths 1-4.05k |
| pipeline_tag | stringclasses | 48 values |
| createdAt | unknown | |
| card | string | lengths 1-901k |
clibrain/Llama-2-7b-ft-instruct-es
clibrain
"2023-08-09T13:54:51Z"
2,267
23
transformers
[ "transformers", "pytorch", "llama", "text-generation", "es", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-09T13:24:07Z"
--- license: apache-2.0 language: - es pipeline_tag: text-generation library_name: transformers inference: false --- # Llama-2-7B-ft-instruct-es [Llama 2 (7B)](https://huggingface.co/meta-llama/Llama-2-7b) fine-tuned on [Clibrain](https://huggingface.co/clibrain)'s Spanish instructions dataset. ## Model Details Llama 2 is a collection of pre-trained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pre-trained model. Links to other models can be found in the index at the bottom. ## Example of Usage ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig model_id = "clibrain/Llama-2-7b-ft-instruct-es" model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_id) def create_instruction(instruction, input_data=None, context=None): sections = { "Instrucción": instruction, "Entrada": input_data, "Contexto": context, } system_prompt = "A continuación hay una instrucción que describe una tarea, junto con una entrada que proporciona más contexto. Escriba una respuesta que complete adecuadamente la solicitud.\n\n" prompt = system_prompt for title, content in sections.items(): if content is not None: prompt += f"### {title}:\n{content}\n\n" prompt += "### Respuesta:\n" return prompt def generate( instruction, input=None, context=None, max_new_tokens=128, temperature=0.1, top_p=0.75, top_k=40, num_beams=4, **kwargs ): prompt = create_instruction(instruction, input, context) print(prompt.replace("### Respuesta:\n", "")) inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to("cuda") attention_mask = inputs["attention_mask"].to("cuda") generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, **kwargs, ) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=max_new_tokens, early_stopping=True ) s = generation_output.sequences[0] output = tokenizer.decode(s) return output.split("### Respuesta:")[1].lstrip("\n") instruction = "Dame una lista de lugares a visitar en España." print(generate(instruction)) ``` ## Example of Usage with `pipelines` ```py from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline model_id = "clibrain/Llama-2-7b-ft-instruct-es" model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to("cuda") tokenizer = AutoTokenizer.from_pretrained(model_id) pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200, device=0) prompt = """ A continuación hay una instrucción que describe una tarea. Escriba una respuesta que complete adecuadamente la solicitud. ### Instrucción: Dame una lista de 5 lugares a visitar en España. ### Respuesta: """ result = pipe(prompt) print(result[0]['generated_text']) ```
NousResearch/Nous-Hermes-13b
NousResearch
"2023-06-05T00:36:49Z"
2,266
428
transformers
[ "transformers", "pytorch", "llama", "text-generation", "self-instruct", "distillation", "en", "license:gpl", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-03T03:21:50Z"
--- license: gpl language: - en tags: - llama - self-instruct - distillation --- # Model Card: Nous-Hermes-13b ## Model Description Nous-Hermes-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. The result is an enhanced Llama 13b model that rivals GPT-3.5-turbo in performance across a variety of tasks. This model stands out for its long responses, low hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 2000 sequence length on an 8x a100 80GB DGX machine for over 50 hours. ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions. ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Nous Research, Huemin Art, and Redmond AI. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Special mention goes to @winglian, @erhartford, and @main_horse for assisting in some of the training issues. Among the contributors of datasets, GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and HueminArt. The GPT4-LLM and Unnatural Instructions were provided by Microsoft, Airoboros dataset by jondurbin, Camel-AI datasets are from Camel-AI, and CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. ## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: ### Response: ``` or ``` ### Instruction: ### Input: ### Response: ``` ## Resources for Applied Use Cases: For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord For an example of a roleplaying discord bot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot ## Future Plans The model is currently being uploaded in FP16 format, and there are plans to convert the model to GGML and GPTQ 4bit quantizations. The team is also working on a full benchmark, similar to what was done for GPT4-x-Vicuna. We will try to get in discussions to get the model included in the GPT4All. ## Benchmark Results ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.4915|± |0.0146| | | |acc_norm|0.5085|± |0.0146| |arc_easy | 0|acc |0.7769|± |0.0085| | | |acc_norm|0.7424|± |0.0090| |boolq | 1|acc |0.7948|± |0.0071| |hellaswag | 0|acc |0.6143|± |0.0049| | | |acc_norm|0.8000|± |0.0040| |openbookqa | 0|acc |0.3560|± |0.0214| | | |acc_norm|0.4640|± |0.0223| |piqa | 0|acc |0.7965|± |0.0094| | | |acc_norm|0.7889|± |0.0095| |winogrande | 0|acc |0.7190|± |0.0126| ``` These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list. 
## Model Usage The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions. Compute provided by our project sponsor Redmond AI, thank you!!
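The card gives the Alpaca prompt format but no loading snippet. The following is a minimal, illustrative sketch (not from the original card) that assumes the checkpoint loads with the standard `transformers` causal-LM classes; the instruction text and sampling settings are placeholders rather than author recommendations.

```python
# Minimal usage sketch for Nous-Hermes-13b (assumptions: standard transformers
# loading in fp16 on GPU; instruction text and sampling settings are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt, as described in the card's "Prompt Format" section.
prompt = (
    "### Instruction:\n"
    "Summarize the idea of instruction tuning in two sentences.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```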
alpindale/goliath-120b
alpindale
"2023-12-21T08:41:00Z"
2,266
219
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-05T17:44:06Z"
--- license: llama2 language: - en pipeline_tag: conversational tags: - merge --- # Goliath 120B An auto-regressive causal LM created by combining 2x finetuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) into one. Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix): - [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp) - [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite) - [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM) - [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI) # Prompting Format Both Vicuna and Alpaca will work, but due to the initial and final layers belonging primarily to Xwin, I expect Vicuna to work the best. # Merge process The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B). The layer ranges used are as follows: ```yaml - range 0, 16 Xwin - range 8, 24 Euryale - range 17, 32 Xwin - range 25, 40 Euryale - range 33, 48 Xwin - range 41, 56 Euryale - range 49, 64 Xwin - range 57, 72 Euryale - range 65, 80 Xwin ``` # Screenshots ![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/Cat8_Rimaz6Ni7YhQiiGB.png) # Benchmarks Coming soon. # Acknowledgements Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit). Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
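The card recommends Vicuna-style prompting but does not include code. Below is an illustrative sketch (not from the original card) of querying the merged model with `transformers`; a 120B model will not fit on typical single-GPU setups, so in practice one of the quantized formats listed above (GGUF, GPTQ, AWQ, Exllamav2) is the more realistic route, and all settings here are assumptions.

```python
# Illustrative only: Vicuna-style prompting of goliath-120b via transformers.
# Hardware, dtype and sampling settings are assumptions, not author recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "alpindale/goliath-120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Vicuna-style prompt, which the card expects to work best.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Write a short poem about the sea. ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```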
Lykon/AAM_XL_AnimeMix
Lykon
"2024-01-19T14:10:55Z"
2,266
12
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "stable-diffusion-xl", "text-to-image", "art", "artistic", "anime", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-01-19T10:14:05Z"
--- language: - en license: other tags: - stable-diffusion - stable-diffusion-diffusers - stable-diffusion-xl - text-to-image - art - artistic - diffusers - anime --- # AAM XL AnimeMix `Lykon/AAM_XL_AnimeMix` is a Stable Diffusion model that has been fine-tuned on [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). Please consider supporting me: - on [Patreon](https://www.patreon.com/Lykon275) - or [buy me a coffee](https://snipfeed.co/lykon) **License**: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) ## Diffusers For more general information on how to run text-to-image models with 🧨 Diffusers, see [the docs](https://huggingface.co/docs/diffusers/using-diffusers/conditional_image_generation). 1. Installation ``` pip install diffusers transformers accelerate ``` 2. Run ```py from diffusers import AutoPipelineForText2Image, DEISMultistepScheduler import torch pipe = AutoPipelineForText2Image.from_pretrained('Lykon/AAM_XL_AnimeMix', torch_dtype=torch.float16, variant="fp16") pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") prompt = "anime girl, night, blue light behind her, ((Galaxy, Lens flare)), short hair, flower field, night sky, cinematic shot. Wallpaper. (Blue color schema), detailed background, a city in the distance" generator = torch.manual_seed(0) image = pipe(prompt, num_inference_steps=25).images[0] image.save("./image.png") ``` ![](./image.png)
chihoonlee10/T3Q-ko-solar-dpo-v7.0
chihoonlee10
"2024-05-09T04:42:13Z"
2,266
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-09T04:33:59Z"
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
M-CLIP/M-BERT-Base-ViT-B
M-CLIP
"2021-05-18T21:34:39Z"
2,265
11
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2022-03-02T23:29:04Z"
<br /> <p align="center"> <h1 align="center">M-BERT Base ViT-B</h1> <p align="center"> <a href="https://github.com/FreddeFrallan/Multilingual-CLIP/tree/main/Model%20Cards/M-BERT%20Base%20ViT-B">Github Model Card</a> </p> </p> ## Usage To use this model along with the original CLIP vision encoder, you need to download the code and additional linear weights from the [Multilingual-CLIP Github](https://github.com/FreddeFrallan/Multilingual-CLIP). Once this is done, you can load and use the model with the following code: ```python from src import multilingual_clip model = multilingual_clip.load_model('M-BERT-Base-ViT') embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?']) print(embeddings.shape) # Yields: torch.Size([3, 640]) ``` <!-- ABOUT THE PROJECT --> ## About A [BERT-base-multilingual](https://huggingface.co/bert-base-multilingual-cased) model tuned to match, for [69 languages](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md), the embedding space of the CLIP text encoder that accompanies the ViT-B/32 vision encoder. <br> A full list of the 100 languages used during pre-training can be found [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages), and a list of the 69 languages used during fine-tuning can be found in [SupportedLanguages.md](https://github.com/FreddeFrallan/Multilingual-CLIP/blob/main/Model%20Cards/M-BERT%20Base%2069/Fine-Tune-Languages.md). Training data pairs were generated by sampling 40k sentences for each language from the combined descriptions of [GCC](https://ai.google.com/research/ConceptualCaptions/) + [MSCOCO](https://cocodataset.org/#home) + [VizWiz](https://vizwiz.org/tasks-and-datasets/image-captioning/), and translating them into the corresponding language. All translation was done using the [AWS translate service](https://aws.amazon.com/translate/); the quality of these translations has not yet been analyzed, but one can assume it varies between the 69 languages.
BreadAi/MuseCan
BreadAi
"2023-03-21T22:21:28Z"
2,264
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "dataset:breadlicker45/musenet-encoders-12k", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-02-24T22:37:41Z"
--- datasets: - breadlicker45/musenet-encoders-12k --- MuseCan: A Generative Model for Music Generation Introduction Music is a powerful form of expression that can be enjoyed by people of all ages and backgrounds. It can be used to convey a wide range of emotions and to tell stories in a unique way. However, creating music can be a difficult and time-consuming process. In recent years, there has been a growing interest in the use of artificial intelligence (AI) to generate music. AI-generated music can be used to create new pieces of music, to remix existing songs, or to provide background music for video games, movies, or other forms of media. One of the most promising AI-generated music platforms is MuseCan. MuseCan is a generative model that can generate music from text. It was trained on a dataset of sheet music and lyrics, and can generate music in a variety of styles. It can also be used to generate music that is tailored to specific prompts, such as a specific genre or emotion. How MuseCan Works MuseCan uses a technique called deep learning to generate music. Deep learning is a type of machine learning that uses artificial neural networks to learn from data. In the case of MuseCan, the artificial neural network is trained on a dataset of sheet music and lyrics. Once the neural network is trained, it can be used to generate new pieces of music. The Benefits of MuseCan There are several benefits to using MuseCan to generate music. First, it can save time and effort. Creating music from scratch can be a time-consuming process, but MuseCan can generate music much more quickly. Second, MuseCan can be used to create music that is tailored to specific prompts. If you want music that is in a specific genre or that has a specific emotion, MuseCan can generate that music for you. Third, MuseCan can be used to create music that is unique and original. The music that MuseCan generates is not based on any existing songs, so it is sure to be something that no one else has ever heard before. The Future of MuseCan MuseCan is still under development, but it has the potential to be a powerful tool for music generation. In the future, MuseCan could be used to create music that is even more realistic and lifelike. It could also be used to create music that is tailored to specific individuals. For example, MuseCan could be used to create a song that is specifically for a person's birthday or anniversary. Conclusion MuseCan is a powerful tool that can be used to generate a variety of musical styles. It can be used to generate music for a variety of purposes, such as for video games, movies, or simply for personal enjoyment. MuseCan is still under development, but it has the potential to be a powerful tool for music generation.
Darna/detr-5000-400-finetuned-table-detector
Darna
"2023-11-02T22:06:57Z"
2,264
1
transformers
[ "transformers", "pytorch", "tensorboard", "detr", "object-detection", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
"2023-06-15T16:03:32Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: detr-5000-400-finetuned-table-detector results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr-5000-400-finetuned-table-detector This model is a fine-tuned version of [Benito/DeTr-TableDetection-5000-images](https://huggingface.co/Benito/DeTr-TableDetection-5000-images) on the table-detection data described below. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data Uses the data from the OxML 2023 Kaggle competition on table detection: https://www.kaggle.com/competitions/oxml-2023-x-ml-cases-table-detector/overview ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.29.2 - Pytorch 2.0.0+cpu - Datasets 2.1.0 - Tokenizers 0.13.3
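Since the card's usage sections are still stubs, here is a short, illustrative sketch (not part of the original card) of running the fine-tuned detector with the `transformers` object-detection pipeline; the image path and confidence threshold are placeholder assumptions.

```python
# Illustrative usage sketch for the fine-tuned DETR table detector.
# The image path and the 0.7 confidence threshold are placeholder assumptions.
from PIL import Image
from transformers import pipeline

detector = pipeline("object-detection", model="Darna/detr-5000-400-finetuned-table-detector")
image = Image.open("document_page.png")  # placeholder path to a scanned page

for detection in detector(image, threshold=0.7):
    # Each detection carries a label, a confidence score and a bounding box (xmin/ymin/xmax/ymax).
    print(detection["label"], round(detection["score"], 3), detection["box"])
```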
beingamit99/car_damage_detection
beingamit99
"2023-12-12T18:09:36Z"
2,264
10
transformers
[ "transformers", "safetensors", "vit", "image-classification", "car-damage-prediction", "beit", "transformer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-12-03T21:59:10Z"
--- license: mit tags: - image-classification - car-damage-prediction - beit - vit - transformer metrics: - accuracy - code_eval --- # 🚗 Car Damage Prediction Model 🛠️ Predict car damage with confidence using the **ViT/BEiT**-based model! This model is trained to classify car damage into six distinct classes: - **"0"**: *Crack* - **"1"**: *Scratch* - **"2"**: *Tire Flat* - **"3"**: *Dent* - **"4"**: *Glass Shatter* - **"5"**: *Lamp Broken* ## Key Features 🔍 - Accurate classification into six car damage categories. - Seamless integration into various applications. - Streamlined image processing with transformer-based architecture. ## Applications 🌐 This powerful car damage prediction model can be seamlessly integrated into various applications, such as: - **Auto Insurance Claim Processing:** Streamline the assessment of car damage for faster claim processing. - **Vehicle Inspection Services:** Enhance efficiency in vehicle inspection services by automating damage detection. - **Used Car Marketplaces:** Provide detailed insights into the condition of used cars through automated damage analysis. Feel free to explore and integrate this model into your applications for accurate car damage predictions! 🌟 ## How to Use This Model 🤖 ### First Approach ```python import numpy as np from PIL import Image from transformers import AutoImageProcessor, AutoModelForImageClassification # Load the model and image processor processor = AutoImageProcessor.from_pretrained("beingamit99/car_damage_detection") model = AutoModelForImageClassification.from_pretrained("beingamit99/car_damage_detection") # Load and process the image image = Image.open(IMAGE) inputs = processor(images=image, return_tensors="pt") # Make predictions outputs = model(**inputs) logits = outputs.logits.detach().cpu().numpy() # Convert the raw logits to probabilities with a softmax probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum() predicted_class_id = np.argmax(probs) predicted_proba = np.max(probs) label_map = model.config.id2label predicted_class_name = label_map[predicted_class_id] # Print the results print(f"Predicted class: {predicted_class_name} (probability: {predicted_proba:.4f})") ``` ### Second Approach ```python from transformers import pipeline # Create a classification pipeline pipe = pipeline("image-classification", model="beingamit99/car_damage_detection") pipe(IMAGE) ```
BAAI/Bunny-Llama-3-8B-V
BAAI
"2024-06-24T14:29:31Z"
2,264
77
transformers
[ "transformers", "safetensors", "bunny-llama", "text-generation", "conversational", "custom_code", "arxiv:2402.11530", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
"2024-04-21T06:57:37Z"
--- inference: false license: apache-2.0 --- # Model Card <p align="center"> <img src="./icon.png" alt="Logo" width="350"> </p> 📖 [Technical report](https://arxiv.org/abs/2402.11530) | 🏠 [Code](https://github.com/BAAI-DCAI/Bunny) | 🐰 [Demo](http://bunny.baai.ac.cn) | 🤗 [GGUF](https://huggingface.co/BAAI/Bunny-Llama-3-8B-V-gguf) This is Bunny-Llama-3-8B-V. We also provide v1.1 version accepting high-resolution images up to 1152x1152. 🤗 [v1.1](https://huggingface.co/BAAI/Bunny-v1_1-Llama-3-8B-V) Bunny is a family of lightweight but powerful multimodal models. It offers multiple plug-and-play vision encoders, like EVA-CLIP, SigLIP and language backbones, including Llama-3-8B, Phi-1.5, StableLM-2, Qwen1.5, MiniCPM and Phi-2. To compensate for the decrease in model size, we construct more informative training data by curated selection from a broader data source. We provide Bunny-Llama-3-8B-V, which is built upon [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) and [Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). More details about this model can be found in [GitHub](https://github.com/BAAI-DCAI/Bunny). ![comparison](comparison.png) # Quickstart Here we show a code snippet to show you how to use the model with transformers. Before running the snippet, you need to install the following dependencies: ```shell pip install torch transformers accelerate pillow ``` If the CUDA memory is enough, it would be faster to execute this snippet by setting `CUDA_VISIBLE_DEVICES=0`. Users especially those in Chinese mainland may want to refer to a HuggingFace [mirror site](https://hf-mirror.com). ```python import torch import transformers from transformers import AutoModelForCausalLM, AutoTokenizer from PIL import Image import warnings # disable some warnings transformers.logging.set_verbosity_error() transformers.logging.disable_progress_bar() warnings.filterwarnings('ignore') # set device device = 'cuda' # or cpu torch.set_default_device(device) # create model model = AutoModelForCausalLM.from_pretrained( 'BAAI/Bunny-Llama-3-8B-V', torch_dtype=torch.float16, # float32 for cpu device_map='auto', trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained( 'BAAI/Bunny-Llama-3-8B-V', trust_remote_code=True) # text prompt prompt = 'Why is the image funny?' text = f"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <image>\n{prompt} ASSISTANT:" text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')] input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1][1:], dtype=torch.long).unsqueeze(0).to(device) # image, sample images can be found in images folder image = Image.open('example_2.png') image_tensor = model.process_images([image], model.config).to(dtype=model.dtype, device=device) # generate output_ids = model.generate( input_ids, images=image_tensor, max_new_tokens=100, use_cache=True, repetition_penalty=1.0 # increase this to avoid chattering )[0] print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip()) ```
MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M
MLP-KTLim
"2024-05-14T06:45:24Z"
2,264
29
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "ko", "arxiv:2403.10882", "arxiv:2403.11399", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "endpoints_compatible", "region:us" ]
null
"2024-05-08T23:06:02Z"
--- language: - en - ko license: llama3 library_name: transformers tags: - llama-cpp - gguf-my-repo base_model: - meta-llama/Meta-Llama-3-8B - jeiku/Average_Test_v1 - MLP-KTLim/llama-3-Korean-Bllossom-8B --- <a href="https://github.com/MLP-Lab/Bllossom"> <img src="https://github.com/teddysum/bllossom/blob/main//bllossom_icon.png?raw=true" width="40%" height="50%"> </a> # Bllossom | [Demo]() | [Homepage](https://www.bllossom.ai/) | [Github](https://github.com/MLP-Lab/Bllossom) - 본 모델은 CPU에서 구동가능하며 빠른 속도를 위해서는 8GB GPU에서 구동 가능한 양자화 모델입니다! [Colab 예제](https://colab.research.google.com/drive/129ZNVg5R2NPghUEFHKF0BRdxsZxinQcJ?usp=drive_link) | ```bash 저희 Bllossom팀 에서 한국어-영어 이중 언어모델인 Bllossom을 공개했습니다! 서울과기대 슈퍼컴퓨팅 센터의 지원으로 100GB가넘는 한국어로 모델전체를 풀튜닝한 한국어 강화 이중언어 모델입니다! 한국어 잘하는 모델 찾고 있지 않으셨나요? - 한국어 최초! 무려 3만개가 넘는 한국어 어휘확장 - Llama3대비 대략 25% 더 긴 길이의 한국어 Context 처리가능 - 한국어-영어 Pararell Corpus를 활용한 한국어-영어 지식연결 (사전학습) - 한국어 문화, 언어를 고려해 언어학자가 제작한 데이터를 활용한 미세조정 - 강화학습 이 모든게 한꺼번에 적용되고 상업적 이용이 가능한 Bllossom을 이용해 여러분 만의 모델을 만들어보세욥! 본 모델은 CPU에서 구동가능하며 빠른 속도를 위해서는 6GB GPU에서 구동 가능한 양자화 모델입니다! 1. Bllossom-8B는 서울과기대, 테디썸, 연세대 언어자원 연구실의 언어학자와 협업해 만든 실용주의기반 언어모델입니다! 앞으로 지속적인 업데이트를 통해 관리하겠습니다 많이 활용해주세요 🙂 2. 초 강력한 Advanced-Bllossom 8B, 70B모델, 시각-언어모델을 보유하고 있습니다! (궁금하신분은 개별 연락주세요!!) 3. Bllossom은 NAACL2024, LREC-COLING2024 (구두) 발표로 채택되었습니다. 4. 좋은 언어모델 계속 업데이트 하겠습니다!! 한국어 강화를위해 공동 연구하실분(특히논문) 언제든 환영합니다!! 특히 소량의 GPU라도 대여 가능한팀은 언제든 연락주세요! 만들고 싶은거 도와드려요. ``` The Bllossom language model is a Korean-English bilingual language model based on the open-source LLama3. It enhances the connection of knowledge between Korean and English. It has the following features: * **Knowledge Linking**: Linking Korean and English knowledge through additional training * **Vocabulary Expansion**: Expansion of Korean vocabulary to enhance Korean expressiveness. * **Instruction Tuning**: Tuning using custom-made instruction following data specialized for Korean language and Korean culture * **Human Feedback**: DPO has been applied * **Vision-Language Alignment**: Aligning the vision transformer with this language model **This model developed by [MLPLab at Seoultech](http://mlp.seoultech.ac.kr), [Teddysum](http://teddysum.ai/) and [Yonsei Univ](https://sites.google.com/view/hansaemkim/hansaem-kim).** **This model was converted to GGUF format from [`MLP-KTLim/llama-3-Korean-Bllossom-8B`](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B) for more details on the model.** ## Demo Video <div style="display: flex; justify-content: space-between;"> <!-- 첫 번째 컬럼 --> <div style="width: 49%;"> <a> <img src="https://github.com/lhsstn/lhsstn/blob/main/x-llava_dem.gif?raw=true" style="width: 100%; height: auto;"> </a> <p style="text-align: center;">Bllossom-V Demo</p> </div> <!-- 두 번째 컬럼 (필요하다면) --> <div style="width: 49%;"> <a> <img src="https://github.com/lhsstn/lhsstn/blob/main/bllossom_demo_kakao.gif?raw=true" style="width: 70%; height: auto;"> </a> <p style="text-align: center;">Bllossom Demo(Kakao)ㅤㅤㅤㅤㅤㅤㅤㅤ</p> </div> </div> ## NEWS * [2024.05.08] Vocab Expansion Model Update * [2024.04.25] We released Bllossom v2.0, based on llama-3 * [2023/12] We released Bllossom-Vision v1.0, based on Bllossom * [2023/08] We released Bllossom v1.0, based on llama-2. * [2023/07] We released Bllossom v0.7, based on polyglot-ko. 
## Example code ```python !CMAKE_ARGS="-DLLAMA_CUDA=on" pip install llama-cpp-python !huggingface-cli download MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M --local-dir='YOUR-LOCAL-FOLDER-PATH' from llama_cpp import Llama from transformers import AutoTokenizer model_id = 'MLP-KTLim/llama-3-Korean-Bllossom-8B-gguf-Q4_K_M' tokenizer = AutoTokenizer.from_pretrained(model_id) model = Llama( model_path='YOUR-LOCAL-FOLDER-PATH/llama-3-Korean-Bllossom-8B-Q4_K_M.gguf', n_ctx=512, n_gpu_layers=-1 # Number of model layers to offload to GPU ) PROMPT = \ '''당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다. You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.''' instruction = 'Your Instruction' messages = [ {"role": "system", "content": f"{PROMPT}"}, {"role": "user", "content": f"{instruction}"} ] prompt = tokenizer.apply_chat_template( messages, tokenize = False, add_generation_prompt=True ) generation_kwargs = { "max_tokens":512, "stop":["<|eot_id|>"], "top_p":0.9, "temperature":0.6, "echo":True, # Echo the prompt in the output } resonse_msg = model(prompt, **generation_kwargs) print(resonse_msg['choices'][0]['text'][len(prompt):]) ``` ## Citation **Language Model** ```text @misc{bllossom, author = {ChangSu Choi, Yongbin Jeong, Seoyoon Park, InHo Won, HyeonSeok Lim, SangMin Kim, Yejee Kang, Chanhyuk Yoon, Jaewan Park, Yiseul Lee, HyeJin Lee, Younggyun Hahm, Hansaem Kim, KyungTae Lim}, title = {Optimizing Language Augmentation for Multilingual Large Language Models: A Case Study on Korean}, year = {2024}, journal = {LREC-COLING 2024}, paperLink = {\url{https://arxiv.org/pdf/2403.10882}}, }, } ``` **Vision-Language Model** ```text @misc{bllossom-V, author = {Dongjae Shin, Hyunseok Lim, Inho Won, Changsu Choi, Minjun Kim, Seungwoo Song, Hangyeol Yoo, Sangmin Kim, Kyungtae Lim}, title = {X-LLaVA: Optimizing Bilingual Large Vision-Language Alignment}, year = {2024}, publisher = {GitHub}, journal = {NAACL 2024 findings}, paperLink = {\url{https://arxiv.org/pdf/2403.11399}}, }, } ``` ## Contact - 임경태(KyungTae Lim), Professor at Seoultech. `[email protected]` - 함영균(Younggyun Hahm), CEO of Teddysum. `[email protected]` - 김한샘(Hansaem Kim), Professor at Yonsei. `[email protected]` ## Contributor - 최창수(Chansu Choi), [email protected] - 김상민(Sangmin Kim), [email protected] - 원인호(Inho Won), [email protected] - 김민준(Minjun Kim), [email protected] - 송승우(Seungwoo Song), [email protected] - 신동재(Dongjae Shin), [email protected] - 임현석(Hyeonseok Lim), [email protected] - 육정훈(Jeonghun Yuk), [email protected] - 유한결(Hangyeol Yoo), [email protected] - 송서현(Seohyun Song), [email protected]
Intel/dpt-large-ade
Intel
"2024-03-25T23:13:00Z"
2,263
6
transformers
[ "transformers", "pytorch", "dpt", "vision", "image-segmentation", "dataset:scene_parse_150", "arxiv:2103.13413", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-segmentation
"2022-03-02T23:29:05Z"
--- license: apache-2.0 tags: - vision - image-segmentation datasets: - scene_parse_150 widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # DPT (large-sized model) fine-tuned on ADE20k The model is used for semantic segmentation of input images such as seen in the table below: | Input Image | Output Segmented Image | | --- | --- | | ![input image](https://cdn-uploads.huggingface.co/production/uploads/641bd18baebaa27e0753f2c9/cG0alacJ4MeSL18CneD2u.png) | ![Segmented image](https://cdn-uploads.huggingface.co/production/uploads/641bd18baebaa27e0753f2c9/G3g6Bsuti60-bCYzgbt5o.png)| ## Model description The MiDaS 3.0-based Dense Prediction Transformer (DPT) model was trained on ADE20k for semantic segmentation. It was introduced in the paper [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al. and first released in [this repository](https://github.com/isl-org/DPT). The MiDaS v3.0 DPT uses the Vision Transformer (ViT) as backbone and adds a neck + head on top for semantic segmentation. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg) Disclaimer: The team releasing DPT did not write a model card for this model, so this model card has been written by the Hugging Face and the Intel AI Community team. ## Results: According to the authors, at the time of publication, when applied to semantic segmentation, dense vision transformers set a new state of the art on **ADE20K with 49.02% mIoU.** We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art. Our models are available at [Intel DPT GitHub Repository](https://github.com/intel-isl/DPT). ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=dpt) to look for fine-tuned versions on a task that interests you. 
### How to use Here is how to use this model: ```python import torch from transformers import DPTImageProcessor, DPTForSemanticSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000026204.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = DPTImageProcessor.from_pretrained("Intel/dpt-large-ade") model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits print(logits.shape) prediction = torch.nn.functional.interpolate( logits, size=image.size[::-1], # Reverse the size of the original image (width, height) mode="bicubic", align_corners=False ) # Convert logits to class predictions prediction = torch.argmax(prediction, dim=1) + 1 # Squeeze the prediction tensor to remove dimensions prediction = prediction.squeeze() # Move the prediction tensor to the CPU and convert it to a numpy array prediction = prediction.cpu().numpy() # Convert the prediction array to an image predicted_seg = Image.fromarray(prediction.squeeze().astype('uint8')) # Define the ADE20K palette adepallete = [0,0,0,120,120,120,180,120,120,6,230,230,80,50,50,4,200,3,120,120,80,140,140,140,204,5,255,230,230,230,4,250,7,224,5,255,235,255,7,150,5,61,120,120,70,8,255,51,255,6,82,143,255,140,204,255,4,255,51,7,204,70,3,0,102,200,61,230,250,255,6,51,11,102,255,255,7,71,255,9,224,9,7,230,220,220,220,255,9,92,112,9,255,8,255,214,7,255,224,255,184,6,10,255,71,255,41,10,7,255,255,224,255,8,102,8,255,255,61,6,255,194,7,255,122,8,0,255,20,255,8,41,255,5,153,6,51,255,235,12,255,160,150,20,0,163,255,140,140,140,250,10,15,20,255,0,31,255,0,255,31,0,255,224,0,153,255,0,0,0,255,255,71,0,0,235,255,0,173,255,31,0,255,11,200,200,255,82,0,0,255,245,0,61,255,0,255,112,0,255,133,255,0,0,255,163,0,255,102,0,194,255,0,0,143,255,51,255,0,0,82,255,0,255,41,0,255,173,10,0,255,173,255,0,0,255,153,255,92,0,255,0,255,255,0,245,255,0,102,255,173,0,255,0,20,255,184,184,0,31,255,0,255,61,0,71,255,255,0,204,0,255,194,0,255,82,0,10,255,0,112,255,51,0,255,0,194,255,0,122,255,0,255,163,255,153,0,0,255,10,255,112,0,143,255,0,82,0,255,163,255,0,255,235,0,8,184,170,133,0,255,0,255,92,184,0,255,255,0,31,0,184,255,0,214,255,255,0,112,92,255,0,0,224,255,112,224,255,70,184,160,163,0,255,153,0,255,71,255,0,255,0,163,255,204,0,255,0,143,0,255,235,133,255,0,255,0,235,245,0,255,255,0,122,255,245,0,10,190,212,214,255,0,0,204,255,20,0,255,255,255,0,0,153,255,0,41,255,0,255,204,41,0,255,41,255,0,173,0,255,0,245,255,71,0,255,122,0,255,0,255,184,0,92,255,184,255,0,0,133,255,255,214,0,25,194,194,102,255,0,92,0,255] # Apply the color map to the predicted segmentation image predicted_seg.putpalette(adepallete) # Blend the original image and the predicted segmentation image out = Image.blend(image, predicted_seg.convert("RGB"), alpha=0.5) out ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/dpt). 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2103-13413, author = {Ren{\'{e}} Ranftl and Alexey Bochkovskiy and Vladlen Koltun}, title = {Vision Transformers for Dense Prediction}, journal = {CoRR}, volume = {abs/2103.13413}, year = {2021}, url = {https://arxiv.org/abs/2103.13413}, eprinttype = {arXiv}, eprint = {2103.13413}, timestamp = {Wed, 07 Apr 2021 15:31:46 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2103-13413.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
cjpais/llava-v1.6-vicuna-13b-gguf
cjpais
"2024-03-07T00:37:27Z"
2,262
8
null
[ "gguf", "llava", "image-text-to-text", "license:apache-2.0", "region:us" ]
image-text-to-text
"2024-02-17T16:53:55Z"
--- license: apache-2.0 tags: - llava pipeline_tag: image-text-to-text --- # GGUF Quantized LLaVA 1.6 Vicuna 13B Updated quants and projector from [PR #5267](https://github.com/ggerganov/llama.cpp/pull/5267) | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [llava-v1.6-vicuna-13b.Q3_K_XS.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q3_K_XS.gguf) | Q3_K_XS | 3 | 5.31 GB| very small, high quality loss | | [llava-v1.6-vicuna-13b.Q3_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| very small, high quality loss | | [llava-v1.6-vicuna-13b.Q4_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| medium, balanced quality - recommended | | [llava-v1.6-vicuna-13b.Q5_K_S.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| large, low quality loss - recommended | | [llava-v1.6-vicuna-13b.Q5_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| large, very low quality loss - recommended | | [llava-v1.6-vicuna-13b.Q6_K.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q6_K.gguf) | Q6_K | 6 | 10.7 GB| very large, extremely low quality loss | | [llava-v1.6-vicuna-13b.Q8_0.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q8_0.gguf) | Q8_0 | 8 | 13.8 GB| very large, extremely low quality loss - not recommended | <br> <br> # ORIGINAL LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. Base LLM: [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) **Model date:** LLaVA-v1.6-Vicuna-13B was trained in December 2023. **Paper or resources for more information:** https://llava-vl.github.io/ ## License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 500K academic-task-oriented VQA data mixture. - 50K GPT-4V data mixture. - 40K ShareGPT data. ## Evaluation dataset A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
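The table lists the quantized files but the card does not show how to run them. The snippet below is an illustrative sketch (not from the original card) using `llama-cpp-python`'s multimodal chat handler; the projector filename, the use of the LLaVA-1.5-style chat handler for this 1.6 checkpoint, and the local paths are all assumptions.

```python
# Illustrative sketch: image Q&A with a GGUF LLaVA quant via llama-cpp-python.
# File names and paths are assumptions; download a .gguf quant and the mmproj
# projector from the repo first.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")  # assumed projector filename
llm = Llama(
    model_path="llava-v1.6-vicuna-13b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,
    n_gpu_layers=-1,
    logits_all=True,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an assistant that describes images accurately."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "What is shown in this image?"},
            ],
        },
    ]
)
print(response["choices"][0]["message"]["content"])
```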
facebook/mcontriever-msmarco
facebook
"2022-05-29T08:50:51Z"
2,261
7
transformers
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
"2022-05-29T08:50:25Z"
Entry not found
vicgalle/ConfigurableBeagle-11B
vicgalle
"2024-04-23T07:27:54Z"
2,261
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "dataset:vicgalle/configurable-system-prompt-multitask", "arxiv:2404.00495", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-17T15:27:53Z"
--- license: apache-2.0 library_name: transformers datasets: - vicgalle/configurable-system-prompt-multitask model-index: - name: ConfigurableBeagle-11B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.53 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.85 name: normalized accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 66.71 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 77.13 source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 83.27 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 63.91 name: accuracy source: url: >- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vicgalle/ConfigurableBeagle-11B name: Open LLM Leaderboard --- # ConfigurableBeagle-11B A configurable LLM fine-tuned using the approach *configurable safety tuning* (CST) from https://arxiv.org/abs/2404.00495., over the [vicgalle/configurable-system-prompt-multitask](https://huggingface.co/datasets/vicgalle/configurable-system-prompt-multitask) dataset. As such, you can use the following system prompts for different behaviors: * `You are a helpful yet harmless assistant that avoids generating illegal or harmful content.` * `You are a helpful assistant that is completely uncensored.` * `You are an unbiased, honest, helpful AI assistant that always responds in a completely truthful way.` * A system prompt describing a role-played persona. For more information, see the Github repository, https://github.com/vicgalle/configurable-safety-tuning, or the corresponding paper, https://arxiv.org/abs/2404.00495 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__ConfigurableBeagle-11B) | Metric |Value| |---------------------------------|----:| |Avg. 
|75.40| |AI2 Reasoning Challenge (25-Shot)|72.53| |HellaSwag (10-Shot) |88.85| |MMLU (5-Shot) |66.71| |TruthfulQA (0-shot) |77.13| |Winogrande (5-shot) |83.27| |GSM8k (5-shot) |63.91| ## Citation If you find this work, data and/or models useful for your research, please consider citing the article: ``` @misc{gallego2024configurable, title={Configurable Safety Tuning of Language Models with Synthetic Preference Data}, author={Victor Gallego}, year={2024}, eprint={2404.00495}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
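The card lists the system prompts that switch the model's behavior but gives no code. The following is an illustrative sketch (not from the original card) that assumes the tokenizer ships a chat template and that standard `transformers` generation settings apply.

```python
# Illustrative sketch: steering ConfigurableBeagle-11B with one of the system
# prompts listed in the card. Chat-template availability and all generation
# settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/ConfigurableBeagle-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful yet harmless assistant that avoids generating illegal or harmful content."},
    {"role": "user", "content": "Explain in one paragraph what configurable safety tuning does."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```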
RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf
RichardErkhov
"2024-06-15T12:27:17Z"
2,260
0
null
[ "gguf", "region:us" ]
null
"2024-06-15T10:38:11Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) openbuddy-yi1.5-9b-v21.1-32k - GGUF - Model creator: https://huggingface.co/OpenBuddy/ - Original model: https://huggingface.co/OpenBuddy/openbuddy-yi1.5-9b-v21.1-32k/ | Name | Quant method | Size | | ---- | ---- | ---- | | [openbuddy-yi1.5-9b-v21.1-32k.Q2_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q2_K.gguf) | Q2_K | 3.14GB | | [openbuddy-yi1.5-9b-v21.1-32k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.IQ3_XS.gguf) | IQ3_XS | 3.47GB | | [openbuddy-yi1.5-9b-v21.1-32k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.IQ3_S.gguf) | IQ3_S | 3.66GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q3_K_S.gguf) | Q3_K_S | 3.64GB | | [openbuddy-yi1.5-9b-v21.1-32k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.IQ3_M.gguf) | IQ3_M | 3.79GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q3_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q3_K.gguf) | Q3_K | 4.04GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q3_K_M.gguf) | Q3_K_M | 4.04GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q3_K_L.gguf) | Q3_K_L | 4.38GB | | [openbuddy-yi1.5-9b-v21.1-32k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.IQ4_XS.gguf) | IQ4_XS | 4.51GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q4_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q4_0.gguf) | Q4_0 | 4.7GB | | [openbuddy-yi1.5-9b-v21.1-32k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.IQ4_NL.gguf) | IQ4_NL | 4.75GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q4_K_S.gguf) | Q4_K_S | 4.74GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q4_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q4_K.gguf) | Q4_K | 4.98GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q4_K_M.gguf) | Q4_K_M | 4.98GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q4_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q4_1.gguf) | Q4_1 | 5.2GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q5_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q5_0.gguf) | 
Q5_0 | 5.7GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q5_K_S.gguf) | Q5_K_S | 5.7GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q5_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q5_K.gguf) | Q5_K | 5.84GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q5_K_M.gguf) | Q5_K_M | 5.84GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q5_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q5_1.gguf) | Q5_1 | 6.2GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q6_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q6_K.gguf) | Q6_K | 6.76GB | | [openbuddy-yi1.5-9b-v21.1-32k.Q8_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-yi1.5-9b-v21.1-32k-gguf/blob/main/openbuddy-yi1.5-9b-v21.1-32k.Q8_0.gguf) | Q8_0 | 8.76GB | Original model description: --- license: apache-2.0 language: - zh - en - fr - de - ja - ko - it - ru - fi pipeline_tag: text-generation inference: false library_name: transformers tags: - mixtral --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/01-ai/Yi-1.5-9B License: Apache 2.0 # Prompt Format We recommend using the fast tokenizer from `transformers`, which should be enabled by default in the `transformers` and `vllm` libraries. Other implementations including `sentencepiece` may not work as expected, especially for special tokens like `<|role|>`, `<|says|>` and `<|end|>`. ``` <|role|>system<|says|>You(assistant) are a helpful, respectful and honest INTP-T AI Assistant named Buddy. You are talking to a human(user). Always answer as helpfully and logically as possible, while being safe. Your answers should not include any harmful, political, religious, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. You cannot access the internet, but you have vast knowledge, cutoff: 2023-04. You are trained by OpenBuddy team, (https://openbuddy.ai, https://github.com/OpenBuddy/OpenBuddy), not related to GPT or OpenAI.<|end|> <|role|>user<|says|>History input 1<|end|> <|role|>assistant<|says|>History output 1<|end|> <|role|>user<|says|>History input 2<|end|> <|role|>assistant<|says|>History output 2<|end|> <|role|>user<|says|>Current input<|end|> <|role|>assistant<|says|> ``` This format is also defined in `tokenizer_config.json`, which means you can directly use `vllm` to deploy an OpenAI-like API service. For more information, please refer to the [vllm documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html). ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. 
Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
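As a rough usage sketch (not part of the original card), one of the GGUF files listed above can be run locally with `llama-cpp-python`, reusing the OpenBuddy prompt format; the file name, context size and generation settings below are illustrative assumptions.

```python
# Sketch: running one of the quants above with llama-cpp-python (`pip install llama-cpp-python`).
# Adjust model_path to wherever you downloaded the .gguf file; the quant chosen here is arbitrary.
from llama_cpp import Llama

llm = Llama(
    model_path="openbuddy-yi1.5-9b-v21.1-32k.Q4_K_M.gguf",
    n_ctx=32768,  # the model advertises a 32k context; lower this if memory is tight
)

# Minimal prompt following the <|role|>/<|says|>/<|end|> template shown above.
prompt = (
    "<|role|>system<|says|>You are Buddy, a helpful multilingual assistant.<|end|>\n"
    "<|role|>user<|says|>Hello! Can you introduce yourself?<|end|>\n"
    "<|role|>assistant<|says|>"
)

out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```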
openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf
openthaigpt
"2023-09-20T04:29:46Z"
2,259
10
transformers
[ "transformers", "pytorch", "llama", "text-generation", "openthaigpt", "th", "en", "dataset:kobkrit/rd-taxqa", "dataset:iapp_wiki_qa_squad", "dataset:Thaweewat/alpaca-cleaned-52k-th", "dataset:Thaweewat/instruction-wild-52k-th", "dataset:Thaweewat/databricks-dolly-15k-th", "dataset:Thaweewat/hc3-24k-th", "dataset:Thaweewat/gpteacher-20k-th", "dataset:Thaweewat/onet-m6-social", "dataset:Thaweewat/alpaca-finance-43k-th", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-14T14:08:19Z"
--- license: apache-2.0 datasets: - kobkrit/rd-taxqa - iapp_wiki_qa_squad - Thaweewat/alpaca-cleaned-52k-th - Thaweewat/instruction-wild-52k-th - Thaweewat/databricks-dolly-15k-th - Thaweewat/hc3-24k-th - Thaweewat/gpteacher-20k-th - Thaweewat/onet-m6-social - Thaweewat/alpaca-finance-43k-th language: - th - en library_name: transformers pipeline_tag: text-generation tags: - openthaigpt - llama --- # 🇹🇭 OpenThaiGPT 1.0.0-beta <img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Fb8eiMDaqiEQL6ahbAY0h%2Fimage.png?alt=media&token=6fce78fd-2cca-4c0a-9648-bd5518e644ce https://openthaigpt.aieat.or.th/" width="200px"> 🇹🇭 OpenThaiGPT Version 1.0.0-beta is a Thai language 7B-parameter LLaMA v2 Chat model finetuned to follow Thai translated instructions and extend more than 24,500 most popular Thai words vocabularies into LLM's dictionary for turbo speed. ## Upgrade from OpenThaiGPT 1.0.0-alpha - Add more than 24,500 most popular Thai words vocabularies into LLM's dictionary and re-pretrain embedding layers which make it generate Thai text 10 times faster than previous version. ## Support - Official website: https://openthaigpt.aieat.or.th - Facebook page: https://web.facebook.com/groups/openthaigpt - A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF) - E-mail: [email protected] ## License **Source Code**: License Apache Software License 2.0.<br> **Weight**: Research and **Commercial uses**.<br> ## Code and Weight **Colab Demo**: https://colab.research.google.com/drive/1kDQidCtY9lDpk49i7P3JjLAcJM04lawu?usp=sharing<br> **Finetune Code**: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta<br> **Inference Code**: https://github.com/OpenThaiGPT/openthaigpt<br> **Weight (Huggingface Checkpoint)**: https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf ## Sponsors Pantip.com, ThaiSC<br> <img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2FiWjRxBQgo0HUDcpZKf6A%2Fimage.png?alt=media&token=4fef4517-0b4d-46d6-a5e3-25c30c8137a6" width="100px"> <img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2Ft96uNUI71mAFwkXUtxQt%2Fimage.png?alt=media&token=f8057c0c-5c5f-41ac-bb4b-ad02ee3d4dc2" width="100px"> ### Powered by OpenThaiGPT Volunteers, Artificial Intelligence Entrepreneur Association of Thailand (AIEAT), and Artificial Intelligence Association of Thailand (AIAT) <img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2F6yWPXxdoW76a4UBsM8lw%2Fimage.png?alt=media&token=1006ee8e-5327-4bc0-b9a9-a02e93b0c032" width="100px"> <img src="https://1173516064-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FvvbWvIIe82Iv1yHaDBC5%2Fuploads%2FBwsmSovEIhW9AEOlHTFU%2Fimage.png?alt=media&token=5b550289-e9e2-44b3-bb8f-d3057d74f247" width="100px"> ### Authors * Kobkrit Viriyayudhakorn ([email protected]) * Sumeth Yuenyong ([email protected]) * Thaweewat Rugsujarit ([email protected]) * Jillaphat Jaroenkantasima ([email protected]) * Norapat Buppodom ([email protected]) * Koravich Sangkaew ([email protected]) * Peerawat Rojratchadakorn ([email protected]) * Surapon Nonesung ([email protected]) * Chanon Utupon ([email protected]) * Sadhis Wongprayoon ([email protected]) * Nucharee Thongthungwong ([email protected]) * Chawakorn Phiantham ([email protected]) * 
Patteera Triamamornwooth ([email protected]) * Nattarika Juntarapaoraya ([email protected]) * Kriangkrai Saetan ([email protected]) * Pitikorn Khlaisamniang ([email protected]) <i>Disclaimer: Provided responses are not guaranteed.</i>
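The card does not include an inference snippet; below is a minimal sketch (not from the original card) of loading the checkpoint with 🤗 Transformers. The generation settings and the Thai prompt are illustrative assumptions.

```python
# Sketch: loading openthaigpt-1.0.0-beta-7b-chat-ckpt-hf with Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# "Please recommend places to visit in Thailand." (illustrative prompt)
prompt = "ช่วยแนะนำสถานที่ท่องเที่ยวในประเทศไทยให้หน่อย"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```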
alexm-nm/tinyllama-24-gptq-4bit-channelwise
alexm-nm
"2024-05-08T15:34:24Z"
2,257
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2024-05-08T15:30:39Z"
--- license: apache-2.0 ---
QuantFactory/L3-8B-Stheno-v3.2-GGUF
QuantFactory
"2024-06-08T11:35:59Z"
2,257
2
null
[ "gguf", "text-generation", "en", "dataset:Gryphe/Opus-WritingPrompts", "dataset:Sao10K/Claude-3-Opus-Instruct-15K", "dataset:Sao10K/Short-Storygen-v2", "dataset:Sao10K/c2-Logs-Filtered", "base_model:Sao10K/L3-8B-Stheno-v3.2", "license:cc-by-nc-4.0", "region:us" ]
text-generation
"2024-06-06T17:01:23Z"
--- license: cc-by-nc-4.0 language: - en datasets: - Gryphe/Opus-WritingPrompts - Sao10K/Claude-3-Opus-Instruct-15K - Sao10K/Short-Storygen-v2 - Sao10K/c2-Logs-Filtered pipeline_tag: text-generation base_model: Sao10K/L3-8B-Stheno-v3.2 --- # QuantFactory/L3-8B-Stheno-v3.2-GGUF This is a quantized version of [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) created using llama.cpp # Model Description *Just message me on discord if you want to host this privately for a service or something. We can talk.* *Training used 1x H100 SXM for like a total of 24 Hours over multiple runs.* `Art by navy_(navy.blue)` - [Danbooru](https://danbooru.donmai.us/posts/3214477) --- Stheno-v3.2-Zeta I have done a test run with multiple variations of the models, merged back to its base at various weights, different training runs too, and this Sixth iteration is the one I like most. Changes compared to v3.1 <br>\- Included a mix of SFW and NSFW Storywriting Data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts) <br>\- Included More Instruct / Assistant-Style Data <br>\- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it. <br>\- Hyperparameter tinkering for training, resulting in lower loss levels. Testing Notes - Compared to v3.1 <br>\- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced. <br>\- Better at Storywriting / Narration. <br>\- Better at Assistant-type Tasks. <br>\- Better Multi-Turn Coherency -> Reduced Issues? <br>\- Slightly less creative? A worthy tradeoff. Still creative. <br>\- Better prompt / instruction adherence. --- **Recommended Samplers:** ``` Temperature - 1.12-1.22 Min-P - 0.075 Top-K - 50 Repetition Penalty - 1.1 ``` **Stopping Strings:** ``` \n\n{{User}} # Or Equivalent, depending on Frontend <|eot_id|> <|end_of_text|> ``` **Prompting Template - Llama-3-Instruct** ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` **Basic Roleplay System Prompt** ``` You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model. Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}. ``` ---
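As a rough sketch (not part of the original card), the recommended samplers and the Llama-3-Instruct template above can be wired together with `llama-cpp-python`; the `.gguf` file name is an assumption, and `min_p` requires a reasonably recent version of the library.

```python
# Sketch: running a quant from this repo with the sampler settings recommended above.
from llama_cpp import Llama

llm = Llama(model_path="L3-8B-Stheno-v3.2.Q4_K_M.gguf", n_ctx=8192)

system_prompt = ("You are an expert actor that can fully immerse yourself into any role given. "
                 "You do not break character for any reason.")
user_input = "Introduce yourself in one short paragraph."

# Llama-3-Instruct template from the card.
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)

out = llm(
    prompt,
    max_tokens=512,
    temperature=1.15,      # recommended range 1.12-1.22
    min_p=0.075,
    top_k=50,
    repeat_penalty=1.1,
    stop=["<|eot_id|>", "<|end_of_text|>"],
)
print(out["choices"][0]["text"])
```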
noharm-ai/anony
noharm-ai
"2022-02-17T17:12:25Z"
2,256
0
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "pt", "license:mit", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- license: mit tags: - flair - token-classification - sequence-tagger-model language: "pt" widget: - text: "FISIOTERAPIA TRAUMATO - MANHÃ Henrique Dias, 38 anos. Exercícios metabólicos de extremidades inferiores. Realizo mobilização patelar e leve mobilização de flexão de joelho conforme liberado pelo Dr Marcelo Arocha. Oriento cuidados e posicionamentos." --- ## Portuguese Name Identification The [NoHarm-Anony - De-Identification of Clinical Notes Using Contextualized Language Models and a Token Classifier](https://link.springer.com/chapter/10.1007/978-3-030-91699-2_3) paper contains Flair-based models for Portuguese Language, initialized with [Flair BBP](https://github.com/jneto04/ner-pt) & trained on clinical notes with names tagged. ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("noharm-ai/anony") # make example sentence sentence = Sentence("FISIOTERAPIA TRAUMATO - MANHÃ Henrique Dias, 38 anos. Exercícios metabólicos de extremidades inferiores. Realizo mobilização patelar e leve mobilização de flexão de joelho conforme liberado pelo Dr Marcelo Arocha. Oriento cuidados e posicionamentos.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [5,6]: "Henrique Dias" [− Labels: NOME (0.9735)] Span [31,32]: "Marcelo Arocha" [− Labels: NOME (0.9803)] ``` So, the entities "*Henrique Dias*" (labeled as a **nome**) and "*Marcelo Arocha*" (labeled as a **nome**) are found in the sentence. ## More Information Refer to the original paper, [De-Identification of Clinical Notes Using Contextualized Language Models and a Token Classifier](https://link.springer.com/chapter/10.1007/978-3-030-91699-2_3) for additional details and performance. ## Acknowledgements We thank Dr. Ana Helena D. P. S. Ulbrich, who provided the clinical notes dataset from the hospital, for her valuable cooperation. We also thank the volunteers of the Institute of Artificial Intelligence in Healthcare Celso Pereira and Ana Lúcia Dias, for the dataset annotation. ## Citation ``` @inproceedings{santos2021identification, title={De-Identification of Clinical Notes Using Contextualized Language Models and a Token Classifier}, author={Santos, Joaquim and dos Santos, Henrique DP and Tabalipa, F{\'a}bio and Vieira, Renata}, booktitle={Brazilian Conference on Intelligent Systems}, pages={33--41}, year={2021}, organization={Springer} } ```
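The snippet above only detects names; for the de-identification use case the paper describes, the predicted spans still have to be redacted. The sketch below is not from the original card and uses a naive string replacement purely for illustration.

```python
# Sketch: redacting the names predicted by the tagger.
# Naive text substitution is for illustration only; real de-identification pipelines
# should work from character offsets instead.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("noharm-ai/anony")

text = ("FISIOTERAPIA TRAUMATO - MANHÃ Henrique Dias, 38 anos. "
        "Realizo mobilização patelar conforme liberado pelo Dr Marcelo Arocha.")
sentence = Sentence(text)
tagger.predict(sentence)

redacted = text
for entity in sentence.get_spans("ner"):
    if entity.get_label("ner").value == "NOME":  # on older Flair versions, entity.tag
        redacted = redacted.replace(entity.text, "[NOME]")

print(redacted)
```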
InstaDeepAI/nucleotide-transformer-2.5b-multi-species
InstaDeepAI
"2023-10-11T12:30:30Z"
2,256
23
transformers
[ "transformers", "pytorch", "tf", "esm", "fill-mask", "DNA", "biology", "genomics", "dataset:InstaDeepAI/multi_species_genomes", "dataset:InstaDeepAI/nucleotide_transformer_downstream_tasks", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-04-05T13:07:41Z"
--- license: cc-by-nc-sa-4.0 widget: - text: ACCTGA<mask>TTCTGAGTC tags: - DNA - biology - genomics datasets: - InstaDeepAI/multi_species_genomes - InstaDeepAI/nucleotide_transformer_downstream_tasks --- # nucleotide-transformer-2.5b-multi-species model The Nucleotide Transformers are a collection of foundational language models that were pre-trained on DNA sequences from whole-genomes. Compared to other approaches, our models do not only integrate information from single reference genomes, but leverage DNA sequences from over 3,200 diverse human genomes, as well as 850 genomes from a wide range of species, including model and non-model organisms. Through robust and extensive evaluation, we show that these large models provide extremely accurate molecular phenotype prediction compared to existing methods Part of this collection is the **nucleotide-transformer-2.5b-multi-species**, a 2.5B parameters transformer pre-trained on a collection of 850 genomes from a wide range of species, including model and non-model organisms. The model is made available both in Tensorflow and Pytorch. **Developed by:** InstaDeep, NVIDIA and TUM ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** [Nucleotide Transformer](https://github.com/instadeepai/nucleotide-transformer) - **Paper:** [The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics](https://www.biorxiv.org/content/10.1101/2023.01.11.523679v1) ### How to use <!-- Need to adapt this section to our model. Need to figure out how to load the models from huggingface and do inference on them --> Until its next release, the `transformers` library needs to be installed from source with the following command in order to use the models: ```bash pip install --upgrade git+https://github.com/huggingface/transformers.git ``` A small snippet of code is given here in order to retrieve both logits and embeddings from a dummy DNA sequence. ```python from transformers import AutoTokenizer, AutoModelForMaskedLM import torch # Import the tokenizer and the model tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species") model = AutoModelForMaskedLM.from_pretrained("InstaDeepAI/nucleotide-transformer-2.5b-multi-species") # Choose the length to which the input sequences are padded. By default, the # model max length is chosen, but feel free to decrease it as the time taken to # obtain the embeddings increases significantly with it. 
max_length = tokenizer.model_max_length # Create a dummy dna sequence and tokenize it sequences = ["ATTCCGATTCCGATTCCG", "ATTTCTCTCTCTCTCTGAGATCGATCGATCGAT"] tokens_ids = tokenizer.batch_encode_plus(sequences, return_tensors="pt", padding="max_length", max_length = max_length)["input_ids"] # Compute the embeddings attention_mask = tokens_ids != tokenizer.pad_token_id torch_outs = model( tokens_ids, attention_mask=attention_mask, encoder_attention_mask=attention_mask, output_hidden_states=True ) # Compute sequences embeddings embeddings = torch_outs['hidden_states'][-1].detach().numpy() print(f"Embeddings shape: {embeddings.shape}") print(f"Embeddings per token: {embeddings}") # Add embed dimension axis attention_mask = torch.unsqueeze(attention_mask, dim=-1) # Compute mean embeddings per sequence mean_sequence_embeddings = torch.sum(attention_mask*embeddings, axis=-2)/torch.sum(attention_mask, axis=1) print(f"Mean sequence embeddings: {mean_sequence_embeddings}") ``` ## Training data The **nucleotide-transformer-2.5b-multi-species** model was pretrained on a total of 850 genomes downloaded from [NCBI](https://www.ncbi.nlm.nih.gov/). Plants and viruses are not included in these genomes, as their regulatory elements differ from those of interest in the paper's tasks. Some heavily studied model organisms were picked to be included in the collection of genomes, which represents a total of 174B nucleotides, i.e roughly 29B tokens. The data has been released as a HuggingFace dataset [here](https://huggingface.co/datasets/InstaDeepAI/multi_species_genomes). ## Training procedure ### Preprocessing The DNA sequences are tokenized using the Nucleotide Transformer Tokenizer, which tokenizes sequences as 6-mers tokenizer when possible, otherwise tokenizing each nucleotide separately as described in the [Tokenization](https://github.com/instadeepai/nucleotide-transformer#tokenization-abc) section of the associated repository. This tokenizer has a vocabulary size of 4105. The inputs of the model are then of the form: ``` <CLS> <ACGTGT> <ACGTGC> <ACGGAC> <GACTAG> <TCAGCA> ``` The tokenized sequence have a maximum length of 1,000. The masking procedure used is the standard one for Bert-style training: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained with 128 A100 80GB GPUs on 300B tokens, with an effective batch size of 1M tokens. The sequence length used was 1000 tokens. The Adam optimizer [38] was used with a learning rate schedule, and standard values for exponential decay rates and epsilon constants, β1 = 0.9, β2 = 0.999 and ε=1e-8. During a first warmup period, the learning rate was increased linearly between 5e-5 and 1e-4 over 16k steps before decreasing following a square root decay until the end of training. ### BibTeX entry and citation info ```bibtex @article{dalla2023nucleotide, title={The Nucleotide Transformer: Building and Evaluating Robust Foundation Models for Human Genomics}, author={Dalla-Torre, Hugo and Gonzalez, Liam and Mendoza Revilla, Javier and Lopez Carranza, Nicolas and Henryk Grywaczewski, Adam and Oteri, Francesco and Dallago, Christian and Trop, Evan and Sirelkhatim, Hassan and Richard, Guillaume and others}, journal={bioRxiv}, pages={2023--01}, year={2023}, publisher={Cold Spring Harbor Laboratory} } ```
Yntec/Based64
Yntec
"2024-01-01T17:09:12Z"
2,255
1
diffusers
[ "diffusers", "safetensors", "Anime", "Animation", "Style", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-01T16:15:51Z"
--- language: - en license: creativeml-openrail-m tags: - Anime - Animation - Style - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- # Based 64 No ema version of Based 64 Mix from https://huggingface.co/AnonymousM/Based-mixes . The diffusers version has aBagOfChips's VAE baked in. Samples and prompts: ![Based 64 samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/aASy0G7sgy1rsxkDHTBpR.png) (Click for larger) Top left: pretty cute little girl carrying miniature The flower tower, oil painting, detailed chibi blue eyes, award-winning, highly detailed palette, thick impasto, painterly, autochrome, pinhole, realistic lighting, chiaroscuro, very ethereal, very ethereal, silver color, dark, chiaroscuro, nacre, pastel oil inks, paint-on-glass, Top right: pastel art by Kyoani OF ray tracing with his seven daughters, award winning, pretty cute girls, skirts, pajamas, at the gym, very detailed faces, beautiful chibi eyes, high quality Anime scenery by rossdraws and ilya kuvshinov and ross tran Bottom left: masterpiece, best quality, ultra detailed, 8k, 4k, intricate,highly detailed, detailed background,detailed landscape, ((portrait)), foodstyle,simple background, food, black background, plate, realistic, burger, meat, food focus, cheese, still life, lettuce Bottom right: Realistic girl standing. Very cute anime faces, chibi art, flawless, painting by gaston bussiere, charles sillem lidderdale. perfect face, full body, baby, masterpiece, highest quality, 1girl, blue eyes, sweater, Pretty CUTE GIRL, highly detailed, skirt,
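A minimal generation sketch (not part of the original card), assuming the diffusers weights in this repo load as a standard `StableDiffusionPipeline`; the step count, guidance scale and the shortened prompt are illustrative.

```python
# Sketch: text-to-image with the diffusers version of Based 64.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/Based64", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "pretty cute little girl carrying a miniature flower tower, oil painting, detailed chibi blue eyes"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("based64_sample.png")
```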
lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half
lightblue
"2024-05-30T09:58:00Z"
2,255
9
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "arxiv:2405.18952", "base_model:lightblue/suzume-llama-3-8B-multilingual", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-25T07:19:40Z"
--- license: cc-by-nc-4.0 tags: - generated_from_trainer base_model: lightblue/suzume-llama-3-8B-multilingual model-index: - name: workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda results: [] --- # Suzume ORPO <p align="center"> <img width=500 src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/kWQSu02YfgYdUQqv4s5lq.png" alt="Suzume with Mitsu - a Japanese tree sparrow with honey on it"/> </p> [[Paper]](https://arxiv.org/abs/2405.18952) [[Dataset]](https://huggingface.co/datasets/lightblue/mitsu) This is Suzume ORPO, an ORPO trained fine-tune of the [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) model using our [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset. We have trained several versions of this model using ORPO and so recommend that you use the best performing model from our tests, [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half). Note that this model has a non-commercial license as we used the Command R and Command R+ models to generate our training data for this model ([lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu)). We are currently working on developing a commercially usable model, so stay tuned for that! # Model list We have ORPO trained the following models using different proportions of the [lightblue/mitsu](https://huggingface.co/datasets/lightblue/mitsu) dataset: * Trained on the top/bottom responses of all prompts in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full) * Trained on the top/bottom responses of the prompts of the 75\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75) * Trained on the top/bottom responses of the prompts of the 50\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half) * Trained on the top/bottom responses of the prompts of the 25\% most consistently ranked responses in the dataset: [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25) # Model results We compare the MT-Bench scores across 6 languages for our 4 ORPO trained models, as well as some baselines: * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) - The foundation model that our models are ultimately built upon * [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta) - The highest performing open model on the Chatbot arena that is of a similar size to ours * gpt-3.5-turbo - A fairly high quality (although not state-of-the-art) proprietary LLM * [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) - The base model which we train our ORPO finetunes from | **MT-Bench language** | **meta-llama/Meta-Llama-3-8B-Instruct** | **Nexusflow/Starling-LM-7B-beta** | **gpt-3.5-turbo** | **lightblue/suzume-llama-3-8B-multilingual** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-full** | 
**lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top75** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half** | **lightblue/suzume-llama-3-8B-multilingual-orpo-borda-top25** | |-----------------------|-----------------------------------------|-----------------------------------|-------------------|----------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|---------------------------------------------------------------| | **Chinese 🇨🇳** | NaN | 6.97 | 7.55 | 7.11 | 7.65 | **7.77** | 7.74 | 7.44 | | **English 🇺🇸** | 7.98 | 7.92 | **8.26** | 7.73 | 7.98 | 7.94 | 7.98 | 8.22 | | **French 🇫🇷** | NaN | 7.29 | 7.74 | 7.66 | **7.84** | 7.46 | 7.78 | 7.81 | | **German 🇩🇪** | NaN | 6.99 | 7.68 | 7.26 | 7.28 | 7.64 | 7.7 | **7.71** | | **Japanese 🇯🇵** | NaN | 6.22 | **7.84** | 6.56 | 7.2 | 7.12 | 7.34 | 7.04 | | **Russian 🇷🇺** | NaN | 8.28 | 7.94 | 8.19 | 8.3 | 8.74 | **8.94** | 8.81 | We can see noticable improvement on most languages compared to the base model. We also find that our ORPO models achieve the highest score out of all the models we evaluated for a number of languages. # Training data We trained this model using the [lightblue/mitsu_full_borda](https://huggingface.co/datasets/lightblue/mitsu_full_borda) dataset. # Training configuration <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: lightblue/suzume-llama-3-8B-multilingual model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer # PreTrainedTokenizerFast load_in_8bit: false load_in_4bit: false strict: false rl: orpo orpo_alpha: 0.1 remove_unused_columns: false chat_template: chatml datasets: - path: lightblue/mitsu_tophalf_borda type: orpo.chat_template conversation: llama-3 dataset_prepared_path: /workspace/llm_training/axolotl/llama3-multilingual-orpo/prepared_mitsu_half_borda val_set_size: 0.02 output_dir: /workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda sequence_len: 8192 sample_packing: false pad_to_sequence_len: true use_wandb: true wandb_project: axolotl wandb_entity: peterd wandb_name: mitsu_half_borda gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 8e-6 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 20 eval_table_size: saves_per_epoch: 1 debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json weight_decay: 0.0 special_tokens: pad_token: <|end_of_text|> ``` </details><br> # workspace/llm_training/axolotl/llama3-multilingual-orpo/output_mitsu_half_borda This model is a fine-tuned version of [lightblue/suzume-llama-3-8B-multilingual](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.0935 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.6299 | 0.02 | 1 | 7.7014 | | 7.041 | 0.07 | 3 | 3.9786 | | 0.6089 | 0.15 | 6 | 0.1393 | | 0.1308 | 0.22 | 9 | 0.1244 | | 0.1051 | 0.29 | 12 | 0.1112 | | 0.1021 | 0.36 | 15 | 0.1063 | | 0.0861 | 0.44 | 18 | 0.1026 | | 0.1031 | 0.51 | 21 | 0.0979 | | 0.0996 | 0.58 | 24 | 0.0967 | | 0.0923 | 0.65 | 27 | 0.0960 | | 0.1025 | 0.73 | 30 | 0.0944 | | 0.1103 | 0.8 | 33 | 0.0939 | | 0.0919 | 0.87 | 36 | 0.0937 | | 0.104 | 0.94 | 39 | 0.0935 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.0 # How to cite ```tex @article{devine2024sure, title={Are You Sure? Rank Them Again: Repeated Ranking For Better Preference Datasets}, author={Devine, Peter}, journal={arXiv preprint arXiv:2405.18952}, year={2024} } ``` # Developer Peter Devine - ([ptrdvn](https://huggingface.co/ptrdvn))
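The card does not show an inference example; the sketch below (not from the original card) loads the recommended checkpoint with 🤗 Transformers and relies on whatever chat template ships with the tokenizer. Generation settings are assumptions.

```python
# Sketch: chatting with the ORPO-trained checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# French: "Hello, can you introduce yourself in one sentence?"
messages = [{"role": "user", "content": "Bonjour, peux-tu te présenter en une phrase ?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```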
timm/eva02_small_patch14_336.mim_in22k_ft_in1k
timm
"2024-02-10T23:37:50Z"
2,254
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2303.11331", "arxiv:2303.15389", "license:mit", "region:us" ]
image-classification
"2023-03-31T04:55:44Z"
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k - imagenet-22k --- # Model card for eva02_small_patch14_336.mim_in22k_ft_in1k An EVA02 image classification model. Pretrained on ImageNet-22k with masked image modeling (using EVA-CLIP as a MIM teacher) and fine-tuned on ImageNet-1k by paper authors. EVA-02 models are vision transformers with mean pooling, SwiGLU, Rotary Position Embeddings (ROPE), and extra LN in MLP (for Base & Large). NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases, see originals if that's preferred. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 22.1 - GMACs: 15.5 - Activations (M): 54.3 - Image size: 336 x 336 - **Papers:** - EVA-02: A Visual Representation for Neon Genesis: https://arxiv.org/abs/2303.11331 - EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389 - **Original:** - https://github.com/baaivision/EVA - https://huggingface.co/Yuxin-CV/EVA-02 - **Pretrain Dataset:** ImageNet-22k - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import torch import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('eva02_small_patch14_336.mim_in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'eva02_small_patch14_336.mim_in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 577, 384) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
|model |top1 |top5 |param_count|img_size| |-----------------------------------------------|------|------|-----------|--------| |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 | |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 | |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 | |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 | |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 | |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 | |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 | |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 | |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 | |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 | |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 | |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 | |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 | |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 | |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 | |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 | ## Citation ```bibtex @article{EVA02, title={EVA-02: A Visual Representation for Neon Genesis}, author={Fang, Yuxin and Sun, Quan and Wang, Xinggang and Huang, Tiejun and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.11331}, year={2023} } ``` ```bibtex @article{EVA-CLIP, title={EVA-02: A Visual Representation for Neon Genesis}, author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue}, journal={arXiv preprint arXiv:2303.15389}, year={2023} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
LazarusNLP/NusaBERT-base
LazarusNLP
"2024-03-05T08:08:07Z"
2,254
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "ind", "ace", "ban", "bjn", "bug", "gor", "jav", "min", "msa", "nia", "sun", "tet", "dataset:sabilmakbar/indo_wiki", "dataset:acul3/KoPI-NLLB", "dataset:uonlp/CulturaX", "arxiv:2403.01817", "arxiv:1810.04805", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-02-21T04:25:35Z"
--- license: apache-2.0 language: - ind - ace - ban - bjn - bug - gor - jav - min - msa - nia - sun - tet language_bcp47: - jv-x-bms datasets: - sabilmakbar/indo_wiki - acul3/KoPI-NLLB - uonlp/CulturaX tags: - bert --- # NusaBERT Base [NusaBERT](https://arxiv.org/abs/2403.01817) Base is a multilingual encoder-based language model based on the [BERT](https://arxiv.org/abs/1810.04805) architecture. We conducted continued pre-training on open-source corpora of [sabilmakbar/indo_wiki](https://huggingface.co/datasets/sabilmakbar/indo_wiki), [acul3/KoPI-NLLB](https://huggingface.co/datasets/acul3/KoPI-NLLB), and [uonlp/CulturaX](https://huggingface.co/datasets/uonlp/CulturaX). On a held-out subset of the corpus, our model achieved: - `eval_accuracy`: 0.6866 - `eval_loss`: 1.4876 - `perplexity`: 4.4266 This model was trained using the [🤗Transformers](https://github.com/huggingface/transformers) PyTorch framework. All training was done on an NVIDIA H100 GPU. [LazarusNLP/NusaBERT-base](https://huggingface.co/LazarusNLP/NusaBERT-base) is released under Apache 2.0 license. ## Model Detail - **Developed by**: [LazarusNLP](https://lazarusnlp.github.io/) - **Finetuned from**: [IndoBERT base p1](https://huggingface.co/indobenchmark/indobert-base-p1) - **Model type**: Encoder-based BERT language model - **Language(s)**: Indonesian, Acehnese, Balinese, Banjarese, Buginese, Gorontalo, Javanese, Banyumasan, Minangkabau, Malay, Nias, Sundanese, Tetum - **License**: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html) - **Contact**: [LazarusNLP](https://lazarusnlp.github.io/) ## Use in 🤗Transformers ```python from transformers import AutoTokenizer, AutoModelForMaskedLM model_checkpoint = "LazarusNLP/NusaBERT-base" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) model = AutoModelForMaskedLM.from_pretrained(model_checkpoint) ``` ## Training Datasets Around 16B tokens from the following corpora were used during pre-training. 
- [Indonesian Wikipedia Data Repository](https://huggingface.co/datasets/sabilmakbar/indo_wiki) - [KoPI-NLLB (Korpus Perayapan Indonesia)](https://huggingface.co/datasets/acul3/KoPI-NLLB) - [Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages](https://huggingface.co/datasets/uonlp/CulturaX) ## Training Hyperparameters The following hyperparameters were used during training: - `learning_rate`: 0.0003 - `train_batch_size`: 256 - `eval_batch_size`: 256 - `seed`: 42 - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08` - `lr_scheduler_type`: linear - `lr_scheduler_warmup_steps`: 24000 - `training_steps`: 500000 ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu118 - Datasets 2.17.1 - Tokenizers 0.15.1 ## Credits NusaBERT Base is developed with love by: <div style="display: flex;"> <a href="https://github.com/anantoj"> <img src="https://github.com/anantoj.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;"> </a> <a href="https://github.com/DavidSamuell"> <img src="https://github.com/DavidSamuell.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;"> </a> <a href="https://github.com/stevenlimcorn"> <img src="https://github.com/stevenlimcorn.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;"> </a> <a href="https://github.com/w11wo"> <img src="https://github.com/w11wo.png" alt="GitHub Profile" style="border-radius: 50%;width: 64px;margin:0 4px;"> </a> </div> ## Citation ```bib @misc{wongso2024nusabert, title={NusaBERT: Teaching IndoBERT to be Multilingual and Multicultural}, author={Wilson Wongso and David Samuel Setiawan and Steven Limcorn and Ananto Joyoadikusumo}, year={2024}, eprint={2403.01817}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
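As a small complement to the loading snippet above (not part of the original card), the model can also be queried through the fill-mask pipeline; the Indonesian example sentence and the standard BERT `[MASK]` token are assumptions.

```python
# Sketch: masked-token prediction with NusaBERT.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="LazarusNLP/NusaBERT-base")

# "The capital city of Indonesia is [MASK]."
for prediction in fill_mask("Ibu kota negara Indonesia adalah [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```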
alexm-nm/tinyllama-24-gptq-8bit-channelwise
alexm-nm
"2024-05-08T16:44:49Z"
2,254
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "gptq", "region:us" ]
text-generation
"2024-05-08T16:36:36Z"
--- license: apache-2.0 ---
andriJulian/gguf_llama3_classification
andriJulian
"2024-06-21T09:50:20Z"
2,254
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-21T09:39:03Z"
--- base_model: unsloth/llama-3-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** andriJulian - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
rhasspy/faster-whisper-medium-int8
rhasspy
"2024-03-10T18:49:21Z"
2,253
1
transformers
[ "transformers", "license:mit", "endpoints_compatible", "region:us" ]
null
"2024-03-10T18:47:56Z"
--- license: mit ---
alexm-nm/tinyllama-24-gptq-8bit-g128
alexm-nm
"2024-05-08T16:35:58Z"
2,253
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "gptq", "region:us" ]
text-generation
"2024-05-08T16:30:41Z"
--- license: apache-2.0 ---
CiroN2022/toy-face
CiroN2022
"2023-08-23T11:51:14Z"
2,252
8
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
"2023-08-23T11:51:11Z"
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: toy_face widget: - text: toy_face --- # Toy Face ![Image 0](2123376.jpeg) None ## Image examples for the model: ![Image 1](2123369.jpeg) ![Image 2](2123367.jpeg) ![Image 3](2123368.jpeg) ![Image 4](2123378.jpeg) ![Image 5](2123371.jpeg) ![Image 6](2123372.jpeg) ![Image 7](2123374.jpeg) ![Image 8](2123370.jpeg) ![Image 9](2123373.jpeg)
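A usage sketch (not part of the original card): applying this LoRA on top of its SDXL base model with diffusers. The trigger word `toy_face` comes from the card metadata; everything else, including the prompt, is an assumption.

```python
# Sketch: loading the Toy Face LoRA onto the SDXL base model.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# If auto-detection fails, pass weight_name="<file>.safetensors" explicitly.
pipe.load_lora_weights("CiroN2022/toy-face")

image = pipe("toy_face of a robot astronaut, studio lighting", num_inference_steps=30).images[0]
image.save("toy_face_sample.png")
```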
timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k
timm
"2023-05-11T00:50:56Z"
2,251
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-12k", "arxiv:2204.01697", "arxiv:2201.03545", "arxiv:2111.09883", "license:apache-2.0", "region:us" ]
image-classification
"2023-01-20T21:38:37Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-12k --- # Model card for maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k A timm specific MaxxViT-V2 (w/ a MLP Log-CPB (continuous log-coordinate relative position bias motivated by Swin-V2)) image classification model. Pretrained in `timm` on ImageNet-12k (an 11821 class subset of full ImageNet-22k) and fine-tuned on ImageNet-1k by Ross Wightman. ImageNet-12k pretraining and ImageNet-1k fine-tuning performed on 8x GPU [Lambda Labs](https://lambdalabs.com/) cloud instances. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Model names with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations. All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 116.1 - GMACs: 73.0 - Activations (M): 213.7 - Image size: 384 x 384 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545 - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-12k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 192, 192]) # torch.Size([1, 128, 96, 96]) # torch.Size([1, 256, 48, 48]) # torch.Size([1, 512, 24, 24]) # torch.Size([1, 1024, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1024, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| 
|[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| 
|[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| 
|[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| 
|[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
monster-labs/control_v1p_sdxl_qrcode_monster
monster-labs
"2023-11-11T23:34:34Z"
2,251
93
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "controlnet", "qrcode", "en", "license:openrail++", "region:us" ]
null
"2023-11-06T01:22:41Z"
--- tags: - stable-diffusion - controlnet - qrcode license: openrail++ language: - en --- # Controlnet QR Code Monster v1 For SDXL ![QR code in shape of a green monster, reading "https://qrcode.monster"](images/monster.png) ## Model Description This model is made to generate creative QR codes that still scan. Illusions should also work well. Keep in mind that not all generated codes might be readable, but you can try different parameters and prompts to get the desired results. ## How to Use - **Condition**: QR codes are passed as condition images with a module size of 16px. Use a higher error correction level to make it easier to read (sometimes a lower level can be easier to read if smaller in size). Use a gray background for the rest of the image to make the code integrate better. - **Prompts**: Use a prompt to guide the QR code generation. The output will highly depend on the given prompt. Some seem to be really easily accepted by the qr code process, some will require careful tweaking to get good results. - **Controlnet guidance scale**: Set the controlnet guidance scale value: - High values: The generated QR code will be more readable. - Low values: The generated QR code will be more creative. ### Tips - For an optimally readable output, try generating multiple QR codes with similar parameters, then choose the best ones. - Use the Image-to-Image feature to improve the readability of a generated QR code: - Decrease the denoising strength to retain more of the original image. - Increase the controlnet guidance scale value for better readability. A typical workflow for "saving" a code would be : Max out the guidance scale and minimize the denoising strength, then bump the strength until the code scans. ## Example Outputs ![A corridor with a perspective illusion](images/corridor.jpg) ![a fruit salad with a perspective illusion](images/fruits.jpg) ![a beautiful landscape with a checkerboard illusion](images/landscape_checkerboard.jpg) Feel free to experiment with prompts, parameters, and the Image-to-Image feature to achieve the desired QR code output. Good luck and have fun!
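The usage notes above map directly onto a diffusers ControlNet workflow. The following is only a minimal, illustrative sketch: the SDXL base checkpoint, the local file `qr_code.png`, and the guidance values are assumptions rather than official recommendations.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

# Load the QR Code Monster ControlNet and an SDXL base model (base checkpoint is an assumption;
# depending on the repo layout, from_pretrained may need a subfolder/variant argument).
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sdxl_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Condition image: a QR code rendered at a 16 px module size on a gray background (hypothetical file).
qr_image = Image.open("qr_code.png").convert("RGB").resize((1024, 1024))

image = pipe(
    prompt="a green monster made of moss and leaves, intricate, highly detailed",
    image=qr_image,
    controlnet_conditioning_scale=1.2,  # higher -> more readable, lower -> more creative
    num_inference_steps=30,
).images[0]
image.save("qr_monster.png")
```

If the result does not scan, the Image-to-Image rescue described above (low denoising strength, high guidance scale) is the usual next step.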
headlesstech/semantic_xlmr
headlesstech
"2023-06-15T11:56:26Z"
2,249
0
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "dpr", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-04-05T08:17:46Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - dpr widget: - source_sentence: "আমি বাংলায় গান গাই" sentences: - "I sing in Bangla" - "I sing in Bengali" - "I sing in English" - "আমি গান গাই না " example_title: "Singing" --- # `semantic_xlmr` This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like **clustering** or **semantic search**. <!--- Describe your model here --> ## Model Details - Model name: semantic_xlmr - Model version: 1.0 - Architecture: Sentence Transformer - Language: Multilingual ( fine-tuned for Bengali Language) ## Training The model was fine-tuned using **Multilingual Knowledge Distillation** method. We took `paraphrase-distilroberta-base-v2` as the teacher model and `xlm-roberta-large` as the student model. ![image](https://i.ibb.co/8Xrgnfr/sentence-transformer-model.png) ## Intended Use: - **Primary Use Case:** Semantic similarity, clustering, and semantic searches - **Potential Use Cases:** Document retrieval, information retrieval, recommendation systems, chatbot systems , FAQ system ## Usage ### Using Sentence-Transformers Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"] model = SentenceTransformer('headlesstech/semantic_xlmr') embeddings = model.encode(sentences) print(embeddings) ``` ### Using HuggingFace Transformers Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('headlesstech/semantic_xlmr') model = AutoModel.from_pretrained('headlesstech/semantic_xlmr') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ```
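To illustrate the semantic-search use case listed under Intended Use, here is a small, hypothetical example; the query and corpus sentences are made up for demonstration.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("headlesstech/semantic_xlmr")

corpus = ["আমি বাংলায় গান গাই", "I sing in English", "The weather is nice today"]
query = "I sing in Bengali"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Rank corpus sentences by cosine similarity to the query.
scores = util.cos_sim(query_emb, corpus_emb)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {sentence}")
```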
NousResearch/Nous-Capybara-7B-V1.9
NousResearch
"2023-12-28T12:56:24Z"
2,249
48
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "sft", "StableLM", "eng", "dataset:LDJnr/Capybara", "dataset:LDJnr/LessWrong-Amplify-Instruct", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Verified-Camel", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-10T13:07:51Z"
---
language:
- eng
tags:
- sft
- StableLM
license:
- mit
datasets:
- LDJnr/Capybara
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
---

## **Nous-Capybara-7B V1.9**

**This is currently the best 7B version of Capybara to use.**

What's new compared to V1? *V1.9 leverages novel unalignment techniques that lead to more consistent and dynamic control. We also worked on enhanced quality curation for the training data and a significantly better foundation model (Mistral)!*

The Capybara series is the first Nous collection of datasets and models made by fine-tuning mostly on data created by Nous in-house.

We leverage our novel data synthesis technique called Amplify-Instruct (paper coming soon). The seed distribution and synthesis method combine a synergistic mix of top-performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct (WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others into one lean, holistically formed methodology for the dataset and model.

The seed instructions used to start the synthesized conversations are largely based on high-quality datasets like Airoboros, Know_Logic, EverythingLM and GPTeacher, along with entirely new seed instructions derived from posts on the website LessWrong, supplemented with certain in-house multi-turn datasets like Dove (a successor to Puffin).

While it performs well in its current state, the dataset used for fine-tuning is entirely contained within 20K training examples. This is 10 times smaller than many similarly performing current models, which is significant for the scaling implications of our next generation of models once we scale our novel synthesis methods to significantly more examples.

## Process of creation and special thank-yous!

This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D. (LDJ) (paper coming soon), with significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations.

Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project.

## Thank you to those of you that have indirectly contributed!

While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds to generate the multi-turn data as part of the Amplify-Instruct synthesis.

The datasets shown in green below are datasets that we sampled from to curate seeds used during Amplify-Instruct synthesis for this project. Datasets in blue are in-house curations that existed prior to Capybara.

![Capybara](https://i.imgur.com/yB58OoD.jpeg)

## Model Training

Nous-Capybara 7B V1.9 is a new model trained for multiple epochs on a dataset of roughly 20,000 carefully curated conversational examples, most of which consist of entirely new in-house synthesized tokens.

Additional data came from human-curated CamelAI data, with the help of volunteers ranging from former physics PhDs to mathematicians, biologists and more!

## Prompt Format

The recommended model usage is:

```
USER:

ASSISTANT:
```

## Multi-Modality!

- We currently have a multi-modal model based on Capybara V1.9!
https://huggingface.co/NousResearch/Obsidian-3B-V0.5

It is currently only available as a 3B-sized model, but larger versions are coming!

## Notable Features:

- Over 60% of the dataset is comprised of multi-turn conversations. (Most models are still only trained for single-turn conversations with no back-and-forths!)
- Over 1,000 tokens on average per conversation example! (Most models are trained on conversation data with less than 300 tokens per example.)
- Able to effectively produce complex summaries of advanced topics and studies. (Trained on hundreds of advanced, difficult summary tasks developed in-house.)
- Ability to recall information up to late 2022 without internet access.
- Includes a portion of conversational data synthesized from LessWrong posts, discussing in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts.

## Example Outputs!:

![Capybara](https://img001.prntscr.com/file/img001/T9yYxR1xQSaK_UGdy3t2Cw.png)

![Capybara](https://img001.prntscr.com/file/img001/DQXqmKbsQQOIcgny1eoGNA.png)

![Capybara](https://img001.prntscr.com/file/img001/85X3L9ZxTsOKo3fUQ7GRVA.png)

## Benchmarks! (Coming soon!)

## Future Changes

This is a relatively early build amongst the grand plans for the future of Capybara!

## Future model sizes

Capybara V1.9 currently comes in 3B and 7B sizes, and we plan to eventually release 13B and 70B versions, as well as a potential 1B version based on phi-1.5 or TinyLlama.

## How you can help!

In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.

If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!

## Dataset contamination

We have checked the Capybara dataset for contamination against several of the most popular benchmarks and can confirm that no contamination was found.

We leveraged MinHash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in these benchmarks; we found no exact matches, nor did we find any matches down to the 97% similarity level.

The following are benchmarks we checked for contamination against our dataset:

- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All

```
@article{daniele2023amplify-instruct,
  title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Efficient LLM Training.},
  author={Daniele, Luigi and Suphavadeeprasit},
  journal={arXiv preprint arXiv:(coming soon)},
  year={2023}
}
```
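As a complement to the USER:/ASSISTANT: prompt format described above, here is a minimal generation sketch with Transformers; the question text and the sampling parameters are illustrative assumptions, not tuned recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Capybara-7B-V1.9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Multi-turn conversations simply continue the USER:/ASSISTANT: pattern.
prompt = "USER: What are the benefits of training on multi-turn conversations?\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```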
epfl-llm/meditron-70b
epfl-llm
"2023-12-07T19:39:04Z"
2,248
206
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "medical", "health", "llama2", "en", "dataset:bigbio/med_qa", "dataset:medmcqa", "dataset:bigbio/pubmed_qa", "dataset:epfl-llm/guidelines", "arxiv:2311.16079", "base_model:meta-llama/Llama-2-70b", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-08T13:57:04Z"
--- license: llama2 datasets: - bigbio/med_qa - medmcqa - bigbio/pubmed_qa - epfl-llm/guidelines language: - en metrics: - accuracy - perplexity pipeline_tag: text-generation tags: - medical - health - llama2 base_model: meta-llama/Llama-2-70b --- <img width=50% src="meditron_LOGO.png" alt="Alt text" title="Meditron-logo"> # Model Card for Meditron-70B-v1.0 Meditron is a suite of open-source medical Large Language Models (LLMs). Meditron-70B is a 70 billion parameters model adapted to the medical domain from Llama-2-70B through continued pretraining on a comprehensively curated medical corpus, including selected PubMed articles, abstracts, a [new dataset](https://huggingface.co/datasets/epfl-llm/guidelines) of internationally-recognized medical guidelines, and general domain data from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T). Meditron-70B, finetuned on relevant training data, outperforms Llama-2-70B, GPT-3.5 (`text-davinci-003`, 8-shot), and Flan-PaLM on multiple medical reasoning tasks. <!--# Table of Contents [Model Card for Meditron 70B](#model-card-for--meditron-70b-v1.0) - [Table of Contents](#table-of-contents) - [Model Details](#model-details) - [Model Description](#model-description) - [Uses](#uses) - [Downstream Use](#downstream-use) - [Out-of-Scope Use](#out-of-scope-use) - [Bias, Risks, and Limitations](#bias-risks-and-limitations) - [Recommendations](#recommendations) - [Training Details](#training-details) - [Training Data](#training-data) - [Training Procedure](#training-procedure) - [Preprocessing](#preprocessing) - [Evaluation](#evaluation) - [Testing Data & Metrics](#testing-data-&-metrics) - [Testing Data](#testing-data) - [Metrics](#metrics) - [Results](#results) - [Environmental Impact](#environmental-impact) - [Citation](#citation)--> <details open> <summary><strong>Advisory Notice</strong></summary> <blockquote style="padding: 10px; margin: 0 0 10px; border-left: 5px solid #ddd;"> While Meditron is designed to encode medical knowledge from sources of high-quality evidence, it is not yet adapted to deliver this knowledge appropriately, safely, or within professional actionable constraints. We recommend against deploying Meditron in medical applications without extensive use-case alignment, as well as additional testing, specifically including randomized controlled trials in real-world practice settings. </blockquote> </details> ## Model Details - **Developed by:** [EPFL LLM Team](https://huggingface.co/epfl-llm) - **Model type:** Causal decoder-only transformer language model - **Language(s):** English (mainly) - **Model License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Code License:** [APACHE 2.0 LICENSE](LICENSE) - **Continue-pretrained from model:** [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b) - **Context length:** 4K tokens - **Input:** Text-only data - **Output:** Model generates text only - **Status:** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we enhance model's performance. 
- **Knowledge Cutoff:** August 2023 ### Model Sources - **Repository:** [epflLLM/meditron](https://github.com/epfLLM/meditron) - **Trainer:** [epflLLM/Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) - **Paper:** *[MediTron-70B: Scaling Medical Pretraining for Large Language Models](https://arxiv.org/abs/2311.16079)* ## Uses Meditron-70B is being made available for further testing and assessment as an AI assistant to enhance clinical decision-making and enhance access to an LLM for healthcare use. Potential use cases may include but are not limited to: - Medical exam question answering - Supporting differential diagnosis - Disease information (symptoms, cause, treatment) query - General health information query ### Direct Use It is possible to use this model to generate text, which is useful for experimentation and understanding its capabilities. It should not be used directly for production or work that may impact people. ### Downstream Use Meditron-70B and Meditron-7B are both foundation models without finetuning or instruction-tuning. They can be finetuned, instruction-tuned, or RLHF-tuned for specific downstream tasks and applications. There are two ways we have used this model for downstream question-answering tasks. 1. We apply in-context learning with k demonstrations (3 or 5 in our paper) added to the prompt. 2. We finetuned the models for downstream question-answering tasks using specific training sets. We encourage and look forward to the adaption of the base model for more diverse applications. If you want a more interactive way to prompt the model, we recommend using a high-throughput and memory-efficient inference engine with a UI that supports chat and text generation. You can check out our deployment [guide](https://github.com/epfLLM/meditron/blob/main/deployment/README.md), where we used [FastChat](https://github.com/lm-sys/FastChat) with [vLLM](https://github.com/vllm-project/vllm). We collected generations for our qualitative analysis through an interactive UI platform, [BetterChatGPT](https://github.com/ztjhz/BetterChatGPT). Here is the prompt format we used as an example: <img width=70% src="prompt_example.png" alt="qualitative-analysis-prompt" title="Qualitative Analysis Prompt"> ### Out-of-Scope Use We do not recommend using this model for natural language generation in a production environment, finetuned or otherwise. ## Truthfulness, Helpfulness, Risk, and Bias <!-- This section is meant to convey both technical and sociotechnical limitations. --> We did an initial assessment of Meditron models' **Truthfulness** against baseline models and consumer-level medical models. We use TruthfulQA (multiple choice) as the main evaluation benchmark. We only focus on the categories that are relevant to the medical domain, including Health, Nutrition, Psychology, and Science. For 7B models, we perform one-shot evaluations for consistent answer generation. For 70B models, the evaluations are under the zero-shot setting. Below, we report the detailed truthfulness performance of each category. 
| | | | | | | | | | --- | ------ |----- |----- |----- |----- |----- |----- | |Category | meditron-70b | llama-2-70b | med42-70b* | meditron-7b | llama-2-7b | PMC-llama-7b | |Health | 81.8 | 69.1 | 83.6 | 27.3 | 16.4 | 3.6 | |Nutrition | 77.9 | 68.8 | 62.5 | 31.1 | 12.5 | 6.3 | |Psychology| 47.4 | 36.8 | 52.6 | 21.1 | 10.5 | 0.0 | |Science | 77.8 | 44.4 | 33.3 | 33.3 | 11.1 | 0.0 | |Avg | 71.2 | 54.8 | 58.0 | 28.3 | 12.6 | 2.5 | | | | | | | | | For a more detailed performance analysis, please see our paper. For **Helpfulness**, **Risk** and **Bias**, we provide a comprehensive qualitative generation report of Meditron-70B on queries designed by medical experts. Each query targets specific aspects of helpfulness (medical accuracy, up-to-date information, etc.), risk (public health, medical ethics, etc.) and bias (gender, age, race, etc.). Please see the detailed generations in our paper. We compare our generations to Llama-2-70B and ChatGPT-3.5 (version Nov, 27, 2023) Significant research is still required to fully explore potential bias, fairness, and safety issues with this language model. ### Recommendations **IMPORTANT!** Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. While this model is capable of generating natural language text, we have only begun to explore this capability and its limitations. Understanding these limitations is especially important in a domain like medicine. Therefore, we strongly recommend against using this model in production for natural language generation or for professional purposes related to health and medicine without comprehensive testing for your application. ## Training Details ### Training Data Meditron’s domain-adaptive pre-training corpus GAP-Replay combines 48.1B tokens from four corpora: - [**Clinical Guidelines**](https://huggingface.co/datasets/epfl-llm/guidelines): a new dataset of 46K internationally-recognized clinical practice guidelines from various healthcare-related sources, including hospitals and international organizations. - **Medical Paper Abstracts**: 16.1M abstracts extracted from closed-access PubMed and PubMed Central papers. - **Medical Papers**: full-text articles extracted from 5M publicly available PubMed and PubMed Central papers. - **Replay Data**: 400M tokens of general domain pretraining data sampled from [RedPajama-v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) <img width="60%" src="gap-replay.png" alt="Alt text" title="Meditron-logo"> #### Data Preprocessing Please see the detailed preprocessing procedure in our paper. ### Training Procedure We used the [Megatron-LLM](https://github.com/epfLLM/Megatron-LLM) distributed training library, a derivative of Nvidia's Megatron LM project, to optimize training efficiency. Hardware consists of 16 nodes of 8x NVIDIA A100 (80GB) SXM GPUs connected by NVLink and NVSwitch with a single Nvidia ConnectX-6 DX network card and equipped with 2 x AMD EPYC 7543 32-Core Processors and 512 GB of RAM. The nodes are connected via RDMA over Converged Ethernet. Our three-way parallelism scheme uses: - Data Parallelism (DP -- different GPUs process different subsets of the batches) of 2, - Pipeline Parallelism (PP -- different GPUs process different layers) of 8, - Tensor Parallelism (TP -- different GPUs process different subtensors for matrix multiplication) of 8. 
#### Training Hyperparameters | | | | --- | ------ | | bf16 | true | | lr | 1.5e-4 | | eps | 1e-5 | | betas | \[0.9, 0.95\] | | clip_grad | 1 | | weight decay | 0.1 | | DP size | 2 | | TP size | 8 | | PP size | 8 | | seq length | 4096 | | lr scheduler | cosine| | min lr | 1e-6 | | warmup iteration | 2000 | | micro batch size | 2 | | global batch size | 512 | | | | #### Speeds, Sizes, Times The model was trained in September and October 2023. The model architecture is exactly Llama 2, meaning | | | | --- | ------ | | Model size | 70B | | Hidden dimension | 8192 | | Num. attention heads | 64 | | Num. layers | 80 | | | | | We train the 70B model on 48e9 tokens, at a throughput of about 40,200 tokens / second. This amounts to a bfloat16 model flops utilization of roughly 42.3\%. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data & Metrics #### Testing Data - [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa) - [MedMCQA](https://huggingface.co/datasets/medmcqa) - [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa) - [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu) - [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) #### Metrics - Accuracy: suite the evaluation of multiple-choice question-answering tasks. ### Results We finetune meditron-70b and llama-2-70b on each benchmark (pubmedqa, medmcqa, medqa)'s training data individually. We report the finetuned models' performance with self-consistency chain-of-thought as the inference mode. For MMLU-Medical, models finetuned on MedMCQA are used for inference. For MedQA-4-Option, models finetuned on MedQA are used for inference. For a more detailed performance analysis, please see our paper. | | | | | | | | --- | ------ |----- |----- |----- |----- | |Dataset| meditron-70b | llama-2-70b | med42-70b* | clinical-camel-70b* | |MMLU-Medical | 77.6 | 77.9 | 74.5 | 65.7 | |PubMedQA | 81.6 | 80.0 | 61.2 | 67.0 | |MedMCQA | 66.0 | 62.6 | 59.2 | 46.7 | |MedQA | 64.4 | 61.5 | 59.1 | 50.8 | |MedQA-4-Option| 70.2 | 63.8 | 63.9 | 56.8 | |Avg | 72.0 | 69.2 | 63.6 | 57.4 | | | | | | | | **Note**: models with * are already instruction-tuned, so we exclude them from further finetuning on any training data. ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> - **Hardware Type:** 128 x NVIDIA A100 (80GB) SXM - **Total GPU hours:** 42,496 - **Hardware Provider:** EPFL Research Computing Platform - **Compute Region:** Switzerland - **Carbon Emitted:** Switzerland has a carbon efficiency of 0.016 kgCO2/kWh (https://www.carbonfootprint.com/docs/2018_8_electricity_factors_august_2018_-_online_sources.pdf). 332 hours of 128 A100s means 42496 hours at a TDP of 400W. Assuming a Power Usage effectiveness of 1.8, total emissions are estimated to be: (400W / 1000W/kWh / GPU * 0.016 kgCO2/kWh * 332 h * 128 GPU) * 1.8 PUE = 486 kgCO2. 
## Citation **BibTeX:** If you use Meditron or its training data, please cite our work: ``` @misc{chen2023meditron70b, title={MEDITRON-70B: Scaling Medical Pretraining for Large Language Models}, author={Zeming Chen and Alejandro Hernández-Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut}, year={2023}, eprint={2311.16079}, archivePrefix={arXiv}, primaryClass={cs.CL} } @software{epfmedtrn, author = {Zeming Chen and Alejandro Hernández Cano and Angelika Romanou and Antoine Bonnet and Kyle Matoba and Francesco Salvi and Matteo Pagliardini and Simin Fan and Andreas Köpf and Amirkeivan Mohtashami and Alexandre Sallinen and Alireza Sakhaeirad and Vinitra Swamy and Igor Krawczuk and Deniz Bayazit and Axel Marmet and Syrielle Montariol and Mary-Anne Hartley and Martin Jaggi and Antoine Bosselut}, title = {MediTron-70B: Scaling Medical Pretraining for Large Language Models}, month = November, year = 2023, url = {https://github.com/epfLLM/meditron} } ```
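As a complement to the in-context-learning note under Downstream Use, the sketch below shows how a k-shot medical-QA prompt can be assembled. The demonstration questions and answers here are invented placeholders (they are not taken from the evaluation benchmarks), and the generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "epfl-llm/meditron-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model needs substantial GPU memory; device_map="auto" shards it across available devices.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# k demonstrations (placeholders) followed by the actual question, in the spirit of the 3/5-shot setup.
demonstrations = [
    ("Question: Which vitamin deficiency causes scurvy?\nAnswer:", " Vitamin C"),
    ("Question: Which organ produces insulin?\nAnswer:", " The pancreas"),
]
question = "Question: Which electrolyte disturbance is most associated with peaked T waves on ECG?\nAnswer:"

prompt = "\n\n".join(q + a for q, a in demonstrations) + "\n\n" + question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```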
protectai/test-public-repo
protectai
"2024-07-03T00:22:54Z"
2,248
0
keras
[ "keras", "region:us" ]
null
"2024-02-09T01:23:36Z"
Entry not found
mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF
mradermacher
"2024-06-10T20:29:15Z"
2,248
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:WesPro/F1-Chimera-Hybrid-LimaRP-8B", "endpoints_compatible", "region:us" ]
null
"2024-06-08T19:59:07Z"
--- base_model: WesPro/F1-Chimera-Hybrid-LimaRP-8B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/WesPro/F1-Chimera-Hybrid-LimaRP-8B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/F1-Chimera-Hybrid-LimaRP-8B-GGUF/resolve/main/F1-Chimera-Hybrid-LimaRP-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
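For readers who want a concrete starting point with these files, here is a minimal llama-cpp-python sketch. It assumes the Q4_K_M file from the table above has already been downloaded locally; the context size, GPU offload setting, and prompt are assumptions.

```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" quant from the table above.
llm = Llama(
    model_path="F1-Chimera-Hybrid-LimaRP-8B.Q4_K_M.gguf",
    n_ctx=4096,        # assumed context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Describe a quiet tavern scene in two sentences.", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```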
MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF
MaziyarPanahi
"2024-06-18T02:21:38Z"
2,247
0
transformers
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "mergekit", "merge", "conversational", "base_model:NousResearch/Hermes-2-Pro-Mistral-7B", "base_model:WizardLM/WizardMath-7B-V1.1", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us", "base_model:mergekit-community/mergekit-slerp-rfdxiqs" ]
text-generation
"2024-06-18T01:58:12Z"
--- tags: - quantized - 2-bit - 3-bit - 4-bit - 5-bit - 6-bit - 8-bit - GGUF - transformers - safetensors - mistral - text-generation - mergekit - merge - conversational - base_model:NousResearch/Hermes-2-Pro-Mistral-7B - base_model:WizardLM/WizardMath-7B-V1.1 - autotrain_compatible - endpoints_compatible - text-generation-inference - region:us - text-generation model_name: mergekit-slerp-rfdxiqs-GGUF base_model: mergekit-community/mergekit-slerp-rfdxiqs inference: false model_creator: mergekit-community pipeline_tag: text-generation quantized_by: MaziyarPanahi --- # [MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF) - Model creator: [mergekit-community](https://huggingface.co/mergekit-community) - Original model: [mergekit-community/mergekit-slerp-rfdxiqs](https://huggingface.co/mergekit-community/mergekit-slerp-rfdxiqs) ## Description [MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF](https://huggingface.co/MaziyarPanahi/mergekit-slerp-rfdxiqs-GGUF) contains GGUF format model files for [mergekit-community/mergekit-slerp-rfdxiqs](https://huggingface.co/mergekit-community/mergekit-slerp-rfdxiqs). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
tapan247/vidula-finetune-llama-fp16
tapan247
"2024-06-30T14:06:58Z"
2,247
0
null
[ "gguf", "region:us" ]
null
"2024-06-30T14:00:29Z"
Entry not found
megalaa/en-cop-mul-norm-group-greekified
megalaa
"2024-04-16T03:42:21Z"
2,246
0
transformers
[ "transformers", "safetensors", "marian", "text2text-generation", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-04-08T01:28:01Z"
--- license: agpl-3.0 ---
rizvandwiki/gender-classification-2
rizvandwiki
"2023-05-18T11:17:43Z"
2,245
29
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-12-12T03:13:20Z"
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: gender-classification-2 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9910714030265808 --- # gender-classification-2 Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### female ![female](images/female.jpg) #### male ![male](images/male.jpg)
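A minimal inference sketch with the Transformers image-classification pipeline; the input path reuses the example image shipped with this card, and any other local path or image URL would work the same way.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="rizvandwiki/gender-classification-2")

# "images/female.jpg" is the example image from this card.
predictions = classifier("images/female.jpg")
print(predictions)  # e.g. [{'label': 'female', 'score': ...}, {'label': 'male', 'score': ...}]
```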
nm-testing/tinyllama-oneshot-w8a8-dynamic-token-v2
nm-testing
"2024-06-17T19:56:39Z"
2,245
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-07T18:10:16Z"
Entry not found
stabilityai/stablelm-tuned-alpha-7b
stabilityai
"2023-04-19T12:38:13Z"
2,244
359
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "en", "dataset:dmayhem93/ChatCombined", "dataset:tatsu-lab/alpaca", "dataset:nomic-ai/gpt4all_prompt_generations", "dataset:Dahoas/full-hh-rlhf", "dataset:jeffwan/sharegpt_vicuna", "dataset:HuggingFaceH4/databricks_dolly_15k", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-19T04:29:58Z"
---
language:
- en
tags:
- causal-lm
license: cc-by-nc-sa-4.0
datasets:
- dmayhem93/ChatCombined
- tatsu-lab/alpaca
- nomic-ai/gpt4all_prompt_generations
- Dahoas/full-hh-rlhf
- jeffwan/sharegpt_vicuna
- HuggingFaceH4/databricks_dolly_15k
---

# StableLM-Tuned-Alpha

## Model Description

`StableLM-Tuned-Alpha` is a suite of 3B and 7B parameter decoder-only language models built on top of the `StableLM-Base-Alpha` models and further fine-tuned on various chat and instruction-following datasets.

## Usage

Get started chatting with `StableLM-Tuned-Alpha` by using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-tuned-alpha-7b")
model.half().cuda()

class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [50278, 50279, 50277, 1, 0]
        for stop_id in stop_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False

system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""

prompt = f"{system_prompt}<|USER|>What's your mood today?<|ASSISTANT|>"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,
    stopping_criteria=StoppingCriteriaList([StopOnTokens()])
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

StableLM Tuned should be used with prompts formatted to `<|SYSTEM|>...<|USER|>...<|ASSISTANT|>...`

The system prompt is

```
<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: StableLM-Tuned-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`StableLM-Tuned-Alpha`) are licensed under the Non-Commercial Creative Commons license ([CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)), in-line with the original non-commercial license specified by [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
* **Contact**: For questions and comments about the model, please email `[email protected]` ## Training | Parameters | Hidden Size | Layers | Heads | Sequence Length | |------------|-------------|--------|-------|-----------------| | 3B | 4096 | 16 | 32 | 4096 | | 7B | 6144 | 16 | 48 | 4096 | ### Training Dataset `StableLM-Tuned-Alpha` models are fine-tuned on a combination of five datasets: [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's `text-davinci-003` engine. [GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), which consists of 400k prompts and responses generated by GPT-4; [Anthropic HH](https://huggingface.co/datasets/Dahoas/full-hh-rlhf), made up of preferences about AI assistant helpfulness and harmlessness; [DataBricks Dolly](https://github.com/databrickslabs/dolly), comprising 15k instruction/responses generated by Databricks employees in capability domains from the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA and summarization; and [ShareGPT Vicuna (English subset)](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), a dataset of conversations retrieved from [ShareGPT](https://sharegpt.com/). ### Training Procedure Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (FP16), and optimized with AdamW. We outline the following hyperparameters: | Parameters | Batch Size | Learning Rate | Warm-up | Weight Decay | Betas | |------------|------------|---------------|---------|--------------|-------------| | 3B | 256 | 2e-5 | 50 | 0.01 | (0.9, 0.99) | | 7B | 128 | 2e-5 | 100 | 0.01 | (0.9, 0.99) | ## Use and Limitations ### Intended Use These models are intended to be used by the open-source community chat-like applications in adherence with the [CC BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. ### Limitations and bias Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly. ## Acknowledgements This work would not have been possible without the helpful hand of Dakota Mahan ([@dmayhem93](https://huggingface.co/dmayhem93)). ## Citations ```bibtex @misc{alpaca, author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto }, title = {Stanford Alpaca: An Instruction-following LLaMA model}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}}, } ``` ```bibtext @misc{vicuna2023, title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality}, url = {https://vicuna.lmsys.org}, author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. 
and Stoica, Ion and Xing, Eric P.}, month = {March}, year = {2023} } ``` ```bibtex @misc{gpt4all, author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar}, title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/nomic-ai/gpt4all}}, } ```
chujiezheng/Starling-LM-7B-alpha-ExPO
chujiezheng
"2024-05-27T18:15:11Z"
2,244
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "arxiv:2404.16792", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-26T08:41:02Z"
--- license: apache-2.0 language: - en --- # Starling-LM-7B-alpha-ExPO The extrapolated (ExPO) model based on [`berkeley-nest/Starling-LM-7B-alpha`](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and [`openchat/openchat_3.5`](https://huggingface.co/openchat/openchat_3.5), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper. Specifically, we obtain this model by extrapolating **(alpha = 0.2)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference. ## Evaluation Results Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)): | | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) | | ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** | | `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** | | `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** | | `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** | | `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** | | `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** | | `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** | | `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** | | `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** | | `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** | | `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** | Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)): | | Original | + ExPO | | ------------------------------------ | -------- | -------- | | `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** | | `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** | | `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** | | `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** | | `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** | | `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** | | `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** | | `internlm/internlm2-chat-7b` | 7.72 | **7.80** | | `internlm/internlm2-chat-20b` | 8.13 | **8.26** | | `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** | | `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** | | `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
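The alpha = 0.2 extrapolation described above amounts to moving further along the direction from the SFT weights to the aligned weights. Below is a rough, unofficial sketch of that idea; the released checkpoint was produced by the authors' own pipeline, which may differ in details, and the output path is a made-up name.

```python
import torch
from transformers import AutoModelForCausalLM

alpha = 0.2  # extrapolation strength reported above

# SFT checkpoint and aligned (RLHF) checkpoint, as named in this card.
sft = AutoModelForCausalLM.from_pretrained("openchat/openchat_3.5", torch_dtype=torch.float32)
aligned = AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha", torch_dtype=torch.float32)

# Requires enough CPU RAM to hold both 7B models in float32.
sft_state = sft.state_dict()
expo_state = {}
for name, w_aligned in aligned.state_dict().items():
    # theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)
    expo_state[name] = w_aligned + alpha * (w_aligned - sft_state[name])

aligned.load_state_dict(expo_state)
aligned.save_pretrained("Starling-LM-7B-alpha-ExPO-sketch")
```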
cloudqi/cqi_text_to_image_pt_v0
cloudqi
"2023-05-25T15:37:15Z"
2,243
17
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "pt", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-03-20T02:34:32Z"
---
license: creativeml-openrail-m
widget:
- text: |
    Gato em alta qualidade na neve
tags:
- text-to-image
- stable-diffusion
language:
- pt
- en
---

# Text to Image - PT Base (From Anything MidJ)

## Changelog

```
1. Model adjusted to accommodate the Hugging Face update
2. Optimized input handling for pt/br
```
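A minimal inference sketch with diffusers, reusing the example prompt from the widget above; the dtype, device, and output filename are assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "cloudqi/cqi_text_to_image_pt_v0", torch_dtype=torch.float16
).to("cuda")

# Example prompt from the card: "Gato em alta qualidade na neve" (a high-quality cat in the snow).
image = pipe("Gato em alta qualidade na neve").images[0]
image.save("gato_na_neve.png")
```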
DTAI-KULeuven/robbert-2023-dutch-large
DTAI-KULeuven
"2023-12-11T16:51:06Z"
2,243
16
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "Dutch", "Flemish", "RoBERTa", "RobBERT", "BERT", "nl", "dataset:oscar", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2001.06286", "arxiv:1907.11692", "arxiv:2310.03477", "arxiv:1909.11942", "arxiv:2211.08192", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-12-05T11:01:55Z"
--- language: nl thumbnail: https://github.com/iPieter/RobBERT/raw/master/res/robbert_2023_logo.png tags: - Dutch - Flemish - RoBERTa - RobBERT - BERT license: mit datasets: - oscar - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: Hallo, mijn naam is RobBERT-2023. Het <mask> taalmodel model van UGent en KU Leuven. --- <p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_2023_logo.png" alt="RobBERT-2023: A Dutch RoBERTa-based Language Model" width="75%"> </p> # RobBERT-2023: Keeping Dutch Language Models Up-To-Date RobBERT is the state-of-the-art Dutch BERT-based language model developed by KU Leuven, UGent en TU Berlin. RobBERT-2023 is the 2023 release of the [Dutch RobBERT model](https://pieter.ai/robbert/). It is a new version of original [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base) model on the 2023 version of the OSCAR version. We release a base model, but this time we also release an additional large model with 355M parameters (x3 over robbert-2022-base). We are particularly proud of the performance of both models, surpassing both the robbert-v2-base and robbert-2022-base models with +2.9 and +0.9 points on the [DUMB benchmark](https://dumbench.nl) from GroNLP. In addition, we also surpass BERTje with +18.6 points with `robbert-2023-dutch-large`. The original RobBERT model was released in January 2020. Dutch has evolved a lot since then, for example the COVID-19 pandemic introduced a wide range of new words that were suddenly used daily. Also, many other world facts that the original model considered true have also changed. To account for this and other changes in usage, we release a new Dutch BERT model trained on data from 2022: RobBERT 2023. More in-depth information about RobBERT-2023 can be found in our [blog post](https://pieter.ai/robbert-2023/), [the original RobBERT paper](https://arxiv.org/abs/2001.06286) and [the RobBERT Github repository](https://github.com/iPieter/RobBERT). ## How to use RobBERT-2023 and RobBERT both use the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture and pre-training but with a Dutch tokenizer and training data. RoBERTa is the robustly optimized English BERT model, making it even more powerful than the original BERT model. Given this same architecture, RobBERT can easily be finetuned and inferenced using [code to finetune RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html) models and most code used for BERT models, e.g. as provided by [HuggingFace Transformers](https://huggingface.co/transformers/) library. By default, RobBERT-2023 has the masked language model head used in training. This can be used as a zero-shot way to fill masks in sentences. It can be tested out for free on [RobBERT's Hosted infererence API of Huggingface](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=De+hoofdstad+van+Belgi%C3%AB+is+%3Cmask%3E.). You can also create a new prediction head for your own task by using any of HuggingFace's [RoBERTa-runners](https://huggingface.co/transformers/v2.7.0/examples.html#language-model-training), [their fine-tuning notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) by changing the model name to `pdelobelle/robbert-2023-dutch-large`. 
```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbert-2023-dutch-large") model = AutoModelForSequenceClassification.from_pretrained("DTAI-KULeuven/robbert-2023-dutch-large") ``` You can then use most of [HuggingFace's BERT-based notebooks](https://huggingface.co/transformers/v4.1.1/notebooks.html) for finetuning RobBERT-2022 on your type of Dutch language dataset. ## Comparison of Available Dutch BERT models There is a wide variety of Dutch BERT-based models available for fine-tuning on your tasks. Here's a quick summary to find the one that suits your need: - **(this model)** [DTAI-KULeuven/robbert-2023-dutch-large](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-large): The RobBERT-2023 is the first Dutch large (355M parameters) model. It is trained on OSCAR2023 with a new tokenizer, using [our Tik-to-Tok method](https://arxiv.org/pdf/2310.03477.pdf). - [DTAI-KULeuven/robbert-2023-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2023-dutch-base): The RobBERT-2023 is a new RobBERT model on the OSCAR2023 dataset with a completely new tokenizer. It is helpful for tasks that rely on words and/or information about more recent events. - [DTAI-KULeuven/robbert-2022-dutch-base](https://huggingface.co/DTAI-KULeuven/robbert-2022-dutch-base): The RobBERT-2022 is a further pre-trained RobBERT model on the OSCAR2022 dataset. It is helpful for tasks that rely on words and/or information about more recent events. - [pdelobelle/robbert-v2-dutch-base](https://huggingface.co/pdelobelle/robbert-v2-dutch-base): The RobBERT model has for years been the best performing BERT-like model for most language tasks. It is trained on a large Dutch webcrawled dataset (OSCAR) and uses the superior [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) architecture, which robustly optimized the original [BERT model](https://huggingface.co/docs/transformers/model_doc/bert). - [DTAI-KULeuven/robbertje-1-gb-merged](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-mergedRobBERTje): The RobBERTje model is a distilled version of RobBERT and about half the size and four times faster to perform inference on. This can help deploy more scalable language models for your language task There's also the [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) "BERTje" model. This model uses the outdated basic BERT model, and is trained on a smaller corpus of clean Dutch texts. Thanks to RobBERT's more recent architecture as well as its larger and more real-world-like training corpus, most researchers and practitioners seem to achieve higher performance on their language tasks with the RobBERT model. ## How to Replicate Our Paper Experiments Replicating our paper experiments is [described in detail on the RobBERT repository README](https://github.com/iPieter/RobBERT#how-to-replicate-our-paper-experiments). The pretraining depends on the model, for RobBERT-2023 this is based on [our Tik-to-Tok method](https://arxiv.org/pdf/2310.03477.pdf). ## Name Origin of RobBERT Most BERT-like models have the word *BERT* in their name (e.g. [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html), [ALBERT](https://arxiv.org/abs/1909.11942), [CamemBERT](https://camembert-model.fr/), and [many, many others](https://huggingface.co/models?search=bert)). 
As such, we queried our original RobBERT model using its masked language model to name itself *\\<mask\\>bert* using [all](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Mijn+naam+is+%3Cmask%3Ebert.) [kinds](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Hallo%2C+ik+ben+%3Cmask%3Ebert.) [of](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Leuk+je+te+ontmoeten%2C+ik+heet+%3Cmask%3Ebert.) [prompts](https://huggingface.co/pdelobelle/robbert-v2-dutch-base?text=Niemand+weet%2C+niemand+weet%2C+dat+ik+%3Cmask%3Ebert+heet.), and it consistently called itself RobBERT. We thought it was really quite fitting, given that RobBERT is a [*very* Dutch name](https://en.wikipedia.org/wiki/Robbert) *(and thus clearly a Dutch language model)*, and additionally has a high similarity to its root architecture, namely [RoBERTa](https://huggingface.co/transformers/model_doc/roberta.html).

Since *"rob"* is the Dutch word for a seal, we decided to draw a seal and dress it up like [Bert from Sesame Street](https://muppet.fandom.com/wiki/Bert) for the [RobBERT logo](https://github.com/iPieter/RobBERT/blob/master/res/robbert_logo.png).

## Credits and citation

The suite of RobBERT models was created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be), [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/) and [François Remy](http://fremycompany.com).

If you would like to cite our paper or model, you can use the following BibTeX:

```
@misc{delobelle2023robbert2023conversion,
  author = {Delobelle, P and Remy, F},
  month = {Sep},
  organization = {Antwerp, Belgium},
  title = {RobBERT-2023: Keeping Dutch Language Models Up-To-Date at a Lower Cost Thanks to Model Conversion},
  year = {2023},
  startyear = {2023},
  startmonth = {Sep},
  startday = {22},
  finishyear = {2023},
  finishmonth = {Sep},
  finishday = {22},
  venue = {The 33rd Meeting of Computational Linguistics in The Netherlands (CLIN 33)},
  day = {22},
  publicationstatus = {published},
  url= {https://clin33.uantwerpen.be/abstract/robbert-2023-keeping-dutch-language-models-up-to-date-at-a-lower-cost-thanks-to-model-conversion/}
}

@inproceedings{delobelle2022robbert2022,
  doi = {10.48550/ARXIV.2211.08192},
  url = {https://arxiv.org/abs/2211.08192},
  author = {Delobelle, Pieter and Winters, Thomas and Berendt, Bettina},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use},
  venue = {arXiv},
  year = {2022},
}

@inproceedings{delobelle2020robbert,
    title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel",
    author = "Delobelle, Pieter  and
      Winters, Thomas  and
      Berendt, Bettina",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292",
    doi = "10.18653/v1/2020.findings-emnlp.292",
    pages = "3255--3265"
}
```
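For convenience, here is a minimal fill-mask sketch of the zero-shot mask-filling usage mentioned in the "How to use" section above; it simply uses the standard `transformers` `fill-mask` pipeline with the `DTAI-KULeuven/robbert-2023-dutch-large` checkpoint and a made-up example sentence.

```python
from transformers import pipeline

# Zero-shot mask filling with the pretrained masked-language-model head.
fill_mask = pipeline("fill-mask", model="DTAI-KULeuven/robbert-2023-dutch-large")

# RoBERTa-style models use "<mask>" as the mask token.
for prediction in fill_mask("De hoofdstad van België is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```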
domie/Qwen2-1.5B-Ita-V2
domie
"2024-06-23T19:16:06Z"
2,243
0
transformers
[ "transformers", "safetensors", "gguf", "qwen2", "text-generation-inference", "unsloth", "trl", "en", "it", "base_model:DeepMount00/Qwen2-1.5B-Ita", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-23T18:28:54Z"
--- base_model: DeepMount00/Qwen2-1.5B-Ita language: - en - it license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl --- # Uploaded model - **Developed by:** domie - **License:** apache-2.0 - **Finetuned from model :** DeepMount00/Qwen2-1.5B-Ita This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
openbmb/UltraLM-65b
openbmb
"2023-08-24T04:58:51Z"
2,242
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:stingning/ultrachat", "arxiv:2305.14233", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-18T09:33:47Z"
---
datasets:
- stingning/ultrachat
---

# UltraLM-65b

<!-- Provide a quick summary of what the model is/does. -->

These are the delta weights for UltraLM-65b, a chat language model trained on [UltraChat](https://github.com/thunlp/UltraChat).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The model is fine-tuned from LLaMA-65b with the multi-turn chat-format template shown below:

```
User: instruction 1
Assistant: response 1<eos_token>
User: instruction 2
Assistant: response 2<eos_token>
...
```

- **License:** UltraLM is based on LLaMA and should be used under LLaMA's [model license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
- **Finetuned from model:** LLaMA-65b
- **Finetuned on data:** [UltraChat](https://github.com/thunlp/UltraChat)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [UltraChat](https://github.com/thunlp/UltraChat)
- **Paper:** [arxiv](https://arxiv.org/abs/2305.14233)
- **Demo:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

To use this model, you need to [recover](https://github.com/thunlp/UltraChat/tree/main/UltraLM) the full model from the delta weights and perform inference following the template below:

```
[Optional]User: system prompt
User: user input
Assistant: 
```
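As a small illustration of the template above (a sketch, not part of the original card), the snippet below shows one way to assemble such a prompt string in Python before passing it to the recovered model; the helper name, the example question, and the `</s>` end-of-sequence default are assumptions for illustration only.

```python
def build_ultralm_prompt(turns, system_prompt=None, eos_token="</s>"):
    """Assemble a prompt following the chat template described above.

    `turns` is a list of (user_message, assistant_response) pairs; the last
    assistant_response may be None to request a new completion. The actual
    end-of-sequence token should be taken from the recovered tokenizer.
    """
    parts = []
    if system_prompt:
        # The template marks the system prompt as an optional leading User turn.
        parts.append(f"User: {system_prompt}")
    for user_msg, assistant_msg in turns:
        parts.append(f"User: {user_msg}")
        if assistant_msg is None:
            parts.append("Assistant: ")
        else:
            parts.append(f"Assistant: {assistant_msg}{eos_token}")
    return "\n".join(parts)

# Example: a single-turn prompt awaiting the model's answer.
prompt = build_ultralm_prompt([("What is the capital of the Netherlands?", None)])
print(prompt)
```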
timm/deit3_base_patch16_224.fb_in22k_ft_in1k
timm
"2024-02-10T23:36:49Z"
2,241
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2204.07118", "license:apache-2.0", "region:us" ]
image-classification
"2023-03-28T00:46:58Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
- imagenet-22k
---
# Model card for deit3_base_patch16_224.fb_in22k_ft_in1k

A DeiT-III image classification model. Pretrained on ImageNet-22k and fine-tuned on ImageNet-1k by paper authors.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 86.6
  - GMACs: 17.6
  - Activations (M): 23.9
  - Image size: 224 x 224
- **Papers:**
  - DeiT III: Revenge of the ViT: https://arxiv.org/abs/2204.07118
- **Original:** https://github.com/facebookresearch/deit
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-22k

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('deit3_base_patch16_224.fb_in22k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'deit3_base_patch16_224.fb_in22k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{Touvron2022DeiTIR,
  title={DeiT III: Revenge of the ViT},
  author={Hugo Touvron and Matthieu Cord and Herve Jegou},
  journal={arXiv preprint arXiv:2204.07118},
  year={2022},
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
digiplay/OnlyAnime_v2.3
digiplay
"2023-07-22T13:33:53Z"
2,241
8
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-18T07:07:48Z"
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---

Model info:
https://civitai.com/models/105955/onlyanime

Sample image I made through Hugging Face's API:

![f2e59549-c26e-4749-a1c9-2849404db47d.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/MkA5E0chNB8hYh0hdzhUx.jpeg)

Original author's DEMO images:

![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/8a74f43f-5ea1-4478-82a1-d83b78e5f9f0/width=1024/02041-1738154812.jpeg)

*This image uses the LoRA file 👉 FilmVelvia3.safetensors, which you can also download here:
https://huggingface.co/digiplay/OnlyAnime_v2.3/tree/main

Put text like `<lora:FilmVelvia3: 0.6>` or `<lora:FilmVelvia3: 0.5>` in your prompt, and the picture will show beautiful lighting, like professional photography.*
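For users who prefer running this checkpoint through the `diffusers` library instead of a web UI, here is a minimal sketch; the prompt is invented, and loading FilmVelvia3 via `load_lora_weights` with a `weight_name` argument is an assumption based on the file being hosted in this repository.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the checkpoint as a standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "digiplay/OnlyAnime_v2.3",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Optional: load the FilmVelvia3 LoRA shipped in the same repo
# (assumption: the file name matches the one listed on the Files tab).
pipe.load_lora_weights("digiplay/OnlyAnime_v2.3", weight_name="FilmVelvia3.safetensors")

# Example prompt (made up for illustration).
image = pipe(
    "1girl, anime style, soft professional photography lighting",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.6},  # roughly corresponds to <lora:FilmVelvia3: 0.6>
).images[0]
image.save("onlyanime_sample.png")
```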
Den4ikAI/sbert_large_mt_ru_retriever
Den4ikAI
"2023-08-14T07:34:25Z"
2,240
1
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2023-08-08T05:49:23Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
widget:
- source_sentence: 'query: Когда родился Пушкин?'
  sentences:
  - >-
    passage: Алекса́ндр Серге́евич Пу́шкин (26 мая [6 июня] 1799, Москва — 29 января [10 февраля] 1837, Санкт-Петербург) — русский поэт, драматург и прозаик, заложивший основы русского реалистического направления[2], литературный критик[3] и теоретик литературы, историк[3], публицист, журналист[3].
  - 'passage: Пушкин ловил кайф со своими друзьями'
  - >-
    passage: Пушкин из самых авторитетных литературных деятелей первой трети XIX века. Ещё при жизни Пушкина сложилась его репутация величайшего национального русского поэта[4][5]. Пушкин рассматривается как основоположник современного русского литературного языка[~ 2].
license: mit
language:
- ru
---

# Den4ikAI/sbert_large_mt_ru_retriever

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Den4ikAI/sbert_large_mt_ru_retriever')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Den4ikAI/sbert_large_mt_ru_retriever')
model = AutoModel.from_pretrained('Den4ikAI/sbert_large_mt_ru_retriever')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3622 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 3622, "weight_decay": 1e-05 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
MTSmash/EVA-GPT-German-GGUF
MTSmash
"2024-05-14T10:49:52Z"
2,240
3
transformers
[ "transformers", "gguf", "gpt", "llama", "EVA-GPT-German", "TMP-Networks", "text-generation", "de", "en", "license:apache-2.0", "region:us" ]
text-generation
"2024-03-21T13:09:29Z"
---
license: apache-2.0
language:
- de
- en
pipeline_tag: text-generation
library_name: transformers
model_type: llama
metrics:
- perplexity
inference: false
tags:
- gpt
- llama
- EVA-GPT-German
- TMP-Networks
---

## Compatibility

The model is compatible with macOS, Windows and Linux.

## Usage

This language model can be used with the following tools in their CPU/GPU (Community Edition) versions:

- **Ollama**: [ollama.com](https://ollama.com/)
- **LM Studio**: [lmstudio.ai](https://lmstudio.ai/)

Configuration files for Ollama and LM Studio can be downloaded from the **Files** section.

## Context Handling

Context tracking works reliably and delivers precise results, now also with a 4096-token context.

## Example

A chat example is available at: [Hugging Face example](https://huggingface.co/MTSmash/EVA-GPT-German-GGUF/blob/main/TestQ4_K_M.png)

## Further Testing

To test the model without downloading it, please visit: [TMP-Networks AI](https://ai.tmp-networks.de)

## Important Note

If error messages appear, the model may currently be in a further training phase. Also note that using this model can in some cases be considered illegal or a violation of the terms of service of internet services. Make sure you comply with the laws and guidelines of your country or organization before using this model.

## Community

Join our Facebook group: [Facebook group](https://www.facebook.com/groups/1356910801463873)

## Support us!

https://www.buymeacoffee.com/infomve

## Copyright

(c) 2024 TMP-Networks
neuralmagic/OpenHermes-2.5-Mistral-7B-pruned50
neuralmagic
"2024-03-05T23:33:12Z"
2,239
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "nm-vllm", "sparse", "conversational", "arxiv:2301.00774", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T08:32:58Z"
--- base_model: teknium/OpenHermes-2.5-Mistral-7B inference: true model_type: mistral quantized_by: mgoin tags: - nm-vllm - sparse --- ## OpenHermes-2.5-Mistral-7B-pruned50 This repo contains model files for [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) optimized for [nm-vllm](https://github.com/neuralmagic/nm-vllm), a high-throughput serving engine for compressed LLMs. This model was pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml). ## Inference Install [nm-vllm](https://github.com/neuralmagic/nm-vllm) for fast inference and low memory-usage: ```bash pip install nm-vllm[sparse] ``` Run in a Python pipeline for local inference: ```python from vllm import LLM, SamplingParams model = LLM("nm-testing/OpenHermes-2.5-Mistral-7B-pruned50", sparsity="sparse_w16a16") prompt = "How to make banana bread?" formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" sampling_params = SamplingParams(max_tokens=100) outputs = model.generate(formatted_prompt, sampling_params=sampling_params) print(outputs[0].outputs[0].text) """ Here is a simple recipe for making banana bread: Ingredients: - 3 ripe bananas - 2 eggs - 1/2 cup of sugar - 1/2 cup of butter - 2 cups of flour - 1 teaspoon baking powder - 2 teaspoons of baking soda Instructions: 1. Preheat your oven at 350 degree Fahrenant. """ ``` ## Prompt template ``` <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Sparsification For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below. Install [SparseML](https://github.com/neuralmagic/sparseml): ```bash git clone https://github.com/neuralmagic/sparseml pip install -e "sparseml[transformers]" ``` Replace the recipe as you like and run this one-shot compression script to apply SparseGPT: ```python import sparseml.transformers original_model_name = "teknium/OpenHermes-2.5-Mistral-7B" calibration_dataset = "open_platypus" output_directory = "output/" recipe = """ test_stage: obcq_modifiers: SparseGPTModifier: sparsity: 0.5 sequential_update: true mask_structure: 0:0 targets: ['re:model.layers.\d*$'] """ # Apply SparseGPT to the model sparseml.transformers.oneshot( model=original_model_name, dataset=calibration_dataset, recipe=recipe, output_dir=output_directory, ) ``` ## Slack For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
terrysun/mixtral-lora-adapter
terrysun
"2024-02-10T02:33:35Z"
2,238
0
peft
[ "peft", "safetensors", "region:us" ]
null
"2024-02-10T02:29:52Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
SangBinCho/long_context_16k_testing_2
SangBinCho
"2024-05-17T10:57:39Z"
2,238
0
peft
[ "peft", "safetensors", "llama", "region:us" ]
null
"2024-05-17T10:57:16Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
SangBinCho/long_context_32k_testing
SangBinCho
"2024-05-17T10:58:16Z"
2,238
0
peft
[ "peft", "safetensors", "llama", "region:us" ]
null
"2024-05-17T10:57:26Z"
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.6.0.dev0
mradermacher/L3-ModelStockMerge-8b-GGUF
mradermacher
"2024-06-11T08:58:37Z"
2,238
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "en", "base_model:jsfs11/L3-ModelStockMerge-8b", "endpoints_compatible", "region:us" ]
null
"2024-06-11T08:31:15Z"
--- base_model: jsfs11/L3-ModelStockMerge-8b language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/jsfs11/L3-ModelStockMerge-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/L3-ModelStockMerge-8b-GGUF/resolve/main/L3-ModelStockMerge-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
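As a minimal, illustrative sketch of the GGUF usage mentioned above (not part of the original card), one of the quant files from the table — for example the Q4_K_M file — can be downloaded and run locally with `llama-cpp-python`; the context size, prompt, and generation settings here are placeholders.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the static quants listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/L3-ModelStockMerge-8b-GGUF",
    filename="L3-ModelStockMerge-8b.Q4_K_M.gguf",
)

# Load the quantized model; n_ctx is a placeholder and can be adjusted.
llm = Llama(model_path=gguf_path, n_ctx=4096)

# Simple completion as a smoke test (prompt is made up for illustration).
out = llm("Write one sentence about model quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```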
indonesian-nlp/wav2vec2-large-xlsr-indonesian
indonesian-nlp
"2021-07-06T06:15:38Z"
2,237
9
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "id", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
--- language: id datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Indonesian by Indonesian NLP results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice id type: common_voice args: id metrics: - name: Test WER type: wer value: 14.29 --- # Wav2Vec2-Large-XLSR-Indonesian This is the model for Wav2Vec2-Large-XLSR-Indonesian, a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) model on the [Indonesian Common Voice dataset](https://huggingface.co/datasets/common_voice). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "id", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian") model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian") # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset[:2]["sentence"]) ``` ## Evaluation The model can be evaluated as follows on the Indonesian test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "id", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian") model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-large-xlsr-indonesian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\'\”\�]' # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 14.29 %

## Training

The Common Voice `train`, `validation`, and [synthetic voice datasets](https://cloud.uncool.ai/index.php/s/Kg4C6f5NJGN9ZdR) were used for training.

The script used for training can be found [here](https://github.com/indonesian-nlp/wav2vec2-indonesian).
John6666/sagittarius-beta-pony-mix-v1-sdxl
John6666
"2024-06-15T23:30:46Z"
2,237
3
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-15T23:26:08Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://huggingface.co/Rita7th/XL_Model_Storage_Pony) and on [Civitai](https://civitai.com/models/511139/sagittariusbetaponymix?modelVersionId=568087).
aisquared/dlite-v1-124m
aisquared
"2024-03-28T18:14:03Z"
2,236
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "dataset:tatsu-lab/alpaca", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-05T18:10:21Z"
---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
language:
- en
library_name: transformers
---

# Model Card for `dlite-v1-124m`

<!-- Provide a quick summary of what the model is/does. -->

AI Squared's `dlite-v1-124m` ([blog post](https://medium.com/ai-squared/introducing-dlite-a-lightweight-chatgpt-like-model-based-on-dolly-deaa49402a1f)) is a large language model which is derived from OpenAI's smallest [GPT-2](https://huggingface.co/gpt2) model and fine-tuned on a single T4 GPU on a corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) to help it exhibit chat-based capabilities.

While `dlite-v1-124m` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model that is trained so cheaply is important to showcase, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought.

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** AI Squared, Inc.
- **Shared by:** AI Squared, Inc.
- **Model type:** Large Language Model
- **Language(s) (NLP):** EN
- **License:** Apache v2.0
- **Finetuned from model:** GPT-2

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

**`dlite-v1-124m` is not a state-of-the-art language model.** `dlite-v1-124m` is an experimental technology and is not designed for use in any environment other than for research purposes. Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include, but are not limited to: factual inaccuracies, biases, offensive responses, toxicity, and hallucinations. Just as with any other LLM, we advise users of this technology to exercise good judgment when applying this technology.

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. From your terminal, run:

```bash
pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2"
```

The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/aisquared/dlite-v1-124m/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory.
```python from transformers import pipeline import torch generate_text = pipeline(model="aisquared/dlite-v1-124m", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") ``` You can then use the pipeline to answer instructions: ```python res = generate_text("Who was George Washington?") print(res) ``` Alternatively, if you prefer to not use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/aisquared/dlite-v1-124m/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python from instruct_pipeline import InstructionTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("aisquared/dlite-v1-124m", padding_side="left") model = AutoModelForCausalLM.from_pretrained("aisquared/dlite-v1-124m", device_map="auto", torch_dtype=torch.bfloat16) generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) ``` ### Model Performance Metrics We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the DLite family. Model results are sorted by mean score, ascending, to provide an ordering. These metrics serve to further show that none of the DLite models are state of the art, but rather further show that chat-like behaviors in LLMs can be trained almost independent of model size. | Model | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande | |:--------------|----------------:|-----------:|---------:|------------:|-------------:|---------:|-------------:| | dlite-v2-124m | 0.199659 | 0.447811 | 0.494801 | 0.291675 | 0.156 | 0.620239 | 0.487766 | | gpt2 | 0.190273 | 0.438131 | 0.487156 | 0.289185 | 0.164 | 0.628945 | 0.51618 | | dlite-v1-124m | 0.223549 | 0.462542 | 0.502446 | 0.293268 | 0.17 | 0.622416 | 0.494081 | | gpt2-medium | 0.215017 | 0.490741 | 0.585933 | 0.333101 | 0.186 | 0.676279 | 0.531176 | | dlite-v2-355m | 0.251706 | 0.486111 | 0.547401 | 0.344354 | 0.216 | 0.671926 | 0.52723 | | dlite-v1-355m | 0.234642 | 0.507576 | 0.600306 | 0.338478 | 0.216 | 0.664309 | 0.496448 | | gpt2-large | 0.216724 | 0.531566 | 0.604893 | 0.363971 | 0.194 | 0.703482 | 0.553275 | | dlite-v1-774m | 0.250853 | 0.545875 | 0.614985 | 0.375124 | 0.218 | 0.698041 | 0.562747 | | dlite-v2-774m | 0.269625 | 0.52904 | 0.613761 | 0.395937 | 0.256 | 0.691513 | 0.566693 | | gpt2-xl | 0.25 | 0.582912 | 0.617737 | 0.400418 | 0.224 | 0.708379 | 0.583268 | | dlite-v1-1_5b | 0.268771 | 0.588384 | 0.624159 | 0.401414 | 0.226 | 0.708379 | 0.584846 | | dlite-v2-1_5b | 0.289249 | 0.565657 | 0.601223 | 0.434077 | 0.272 | 0.703482 | 0.588003 | # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aisquared__dlite-v1-124m) | Metric | Value | |-----------------------|---------------------------| | Avg. | 24.62 | | ARC (25-shot) | 24.32 | | HellaSwag (10-shot) | 31.16 | | MMLU (5-shot) | 25.08 | | TruthfulQA (0-shot) | 36.38 | | Winogrande (5-shot) | 50.2 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 5.2 |
mradermacher/Llama-3SOME-8B-v2d-GGUF
mradermacher
"2024-06-08T01:20:27Z"
2,236
1
transformers
[ "transformers", "gguf", "en", "base_model:BeaverAI/Llama-3SOME-8B-v2d", "endpoints_compatible", "region:us" ]
null
"2024-06-07T23:03:30Z"
--- base_model: BeaverAI/Llama-3SOME-8B-v2d language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/BeaverAI/Llama-3SOME-8B-v2d <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3SOME-8B-v2d-GGUF/resolve/main/Llama-3SOME-8B-v2d.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
MoritzLaurer/deberta-v3-base-zeroshot-v2.0
MoritzLaurer
"2024-04-11T13:42:07Z"
2,234
3
transformers
[ "transformers", "onnx", "safetensors", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2312.17543", "base_model:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
zero-shot-classification
"2024-03-28T22:55:47Z"
--- language: - en tags: - text-classification - zero-shot-classification base_model: microsoft/deberta-v3-base pipeline_tag: zero-shot-classification library_name: transformers license: mit --- # Model description: deberta-v3-base-zeroshot-v2.0 ## zeroshot-v2.0 series of models Models in this series are designed for efficient zeroshot classification with the Hugging Face pipeline. These models can do classification without training data and run on both GPUs and CPUs. An overview of the latest zeroshot classifiers is available in my [Zeroshot Classifier Collection](https://huggingface.co/collections/MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f). The main update of this `zeroshot-v2.0` series of models is that several models are trained on fully commercially-friendly data for users with strict license requirements. These models can do one universal classification task: determine whether a hypothesis is "true" or "not true" given a text (`entailment` vs. `not_entailment`). This task format is based on the Natural Language Inference task (NLI). The task is so universal that any classification task can be reformulated into this task by the Hugging Face pipeline. ## Training data Models with a "`-c`" in the name are trained on two types of fully commercially-friendly data: 1. Synthetic data generated with [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1). I first created a list of 500+ diverse text classification tasks for 25 professions in conversations with Mistral-large. The data was manually curated. I then used this as seed data to generate several hundred thousand texts for these tasks with Mixtral-8x7B-Instruct-v0.1. The final dataset used is available in the [synthetic_zeroshot_mixtral_v0.1](https://huggingface.co/datasets/MoritzLaurer/synthetic_zeroshot_mixtral_v0.1) dataset in the subset `mixtral_written_text_for_tasks_v4`. Data curation was done in multiple iterations and will be improved in future iterations. 2. Two commercially-friendly NLI datasets: ([MNLI](https://huggingface.co/datasets/nyu-mll/multi_nli), [FEVER-NLI](https://huggingface.co/datasets/fever)). These datasets were added to increase generalization. 3. Models without a "`-c`" in the name also included a broader mix of training data with a broader mix of licenses: ANLI, WANLI, LingNLI, and all datasets in [this list](https://github.com/MoritzLaurer/zeroshot-classifier/blob/7f82e4ab88d7aa82a4776f161b368cc9fa778001/v1_human_data/datasets_overview.csv) where `used_in_v1.1==True`. ## How to use the models ```python #!pip install transformers[sentencepiece] from transformers import pipeline text = "Angela Merkel is a politician in Germany and leader of the CDU" hypothesis_template = "This text is about {}" classes_verbalized = ["politics", "economy", "entertainment", "environment"] zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-large-zeroshot-v2.0") # change the model identifier here output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False) print(output) ``` `multi_label=False` forces the model to decide on only one class. `multi_label=True` enables the model to choose multiple classes. ## Metrics The models were evaluated on 28 different text classification tasks with the [f1_macro](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) metric. 
The main reference point is `facebook/bart-large-mnli` which is, at the time of writing (03.04.24), the most used commercially-friendly 0-shot classifier. ![results_aggreg_v2.0](https://raw.githubusercontent.com/MoritzLaurer/zeroshot-classifier/main/v2_synthetic_data/results/zeroshot-v2.0-aggreg.png) | | facebook/bart-large-mnli | roberta-base-zeroshot-v2.0-c | roberta-large-zeroshot-v2.0-c | deberta-v3-base-zeroshot-v2.0-c | deberta-v3-base-zeroshot-v2.0 (fewshot) | deberta-v3-large-zeroshot-v2.0-c | deberta-v3-large-zeroshot-v2.0 (fewshot) | bge-m3-zeroshot-v2.0-c | bge-m3-zeroshot-v2.0 (fewshot) | |:---------------------------|---------------------------:|-----------------------------:|------------------------------:|--------------------------------:|-----------------------------------:|---------------------------------:|------------------------------------:|-----------------------:|--------------------------:| | all datasets mean | 0.497 | 0.587 | 0.622 | 0.619 | 0.643 (0.834) | 0.676 | 0.673 (0.846) | 0.59 | (0.803) | | amazonpolarity (2) | 0.937 | 0.924 | 0.951 | 0.937 | 0.943 (0.961) | 0.952 | 0.956 (0.968) | 0.942 | (0.951) | | imdb (2) | 0.892 | 0.871 | 0.904 | 0.893 | 0.899 (0.936) | 0.923 | 0.918 (0.958) | 0.873 | (0.917) | | appreviews (2) | 0.934 | 0.913 | 0.937 | 0.938 | 0.945 (0.948) | 0.943 | 0.949 (0.962) | 0.932 | (0.954) | | yelpreviews (2) | 0.948 | 0.953 | 0.977 | 0.979 | 0.975 (0.989) | 0.988 | 0.985 (0.994) | 0.973 | (0.978) | | rottentomatoes (2) | 0.83 | 0.802 | 0.841 | 0.84 | 0.86 (0.902) | 0.869 | 0.868 (0.908) | 0.813 | (0.866) | | emotiondair (6) | 0.455 | 0.482 | 0.486 | 0.459 | 0.495 (0.748) | 0.499 | 0.484 (0.688) | 0.453 | (0.697) | | emocontext (4) | 0.497 | 0.555 | 0.63 | 0.59 | 0.592 (0.799) | 0.699 | 0.676 (0.81) | 0.61 | (0.798) | | empathetic (32) | 0.371 | 0.374 | 0.404 | 0.378 | 0.405 (0.53) | 0.447 | 0.478 (0.555) | 0.387 | (0.455) | | financialphrasebank (3) | 0.465 | 0.562 | 0.455 | 0.714 | 0.669 (0.906) | 0.691 | 0.582 (0.913) | 0.504 | (0.895) | | banking77 (72) | 0.312 | 0.124 | 0.29 | 0.421 | 0.446 (0.751) | 0.513 | 0.567 (0.766) | 0.387 | (0.715) | | massive (59) | 0.43 | 0.428 | 0.543 | 0.512 | 0.52 (0.755) | 0.526 | 0.518 (0.789) | 0.414 | (0.692) | | wikitoxic_toxicaggreg (2) | 0.547 | 0.751 | 0.766 | 0.751 | 0.769 (0.904) | 0.741 | 0.787 (0.911) | 0.736 | (0.9) | | wikitoxic_obscene (2) | 0.713 | 0.817 | 0.854 | 0.853 | 0.869 (0.922) | 0.883 | 0.893 (0.933) | 0.783 | (0.914) | | wikitoxic_threat (2) | 0.295 | 0.71 | 0.817 | 0.813 | 0.87 (0.946) | 0.827 | 0.879 (0.952) | 0.68 | (0.947) | | wikitoxic_insult (2) | 0.372 | 0.724 | 0.798 | 0.759 | 0.811 (0.912) | 0.77 | 0.779 (0.924) | 0.783 | (0.915) | | wikitoxic_identityhate (2) | 0.473 | 0.774 | 0.798 | 0.774 | 0.765 (0.938) | 0.797 | 0.806 (0.948) | 0.761 | (0.931) | | hateoffensive (3) | 0.161 | 0.352 | 0.29 | 0.315 | 0.371 (0.862) | 0.47 | 0.461 (0.847) | 0.291 | (0.823) | | hatexplain (3) | 0.239 | 0.396 | 0.314 | 0.376 | 0.369 (0.765) | 0.378 | 0.389 (0.764) | 0.29 | (0.729) | | biasframes_offensive (2) | 0.336 | 0.571 | 0.583 | 0.544 | 0.601 (0.867) | 0.644 | 0.656 (0.883) | 0.541 | (0.855) | | biasframes_sex (2) | 0.263 | 0.617 | 0.835 | 0.741 | 0.809 (0.922) | 0.846 | 0.815 (0.946) | 0.748 | (0.905) | | biasframes_intent (2) | 0.616 | 0.531 | 0.635 | 0.554 | 0.61 (0.881) | 0.696 | 0.687 (0.891) | 0.467 | (0.868) | | agnews (4) | 0.703 | 0.758 | 0.745 | 0.68 | 0.742 (0.898) | 0.819 | 0.771 (0.898) | 0.687 | (0.892) | | yahootopics (10) | 0.299 | 0.543 | 0.62 | 0.578 | 0.564 (0.722) 
| 0.621 | 0.613 (0.738) | 0.587 | (0.711) | | trueteacher (2) | 0.491 | 0.469 | 0.402 | 0.431 | 0.479 (0.82) | 0.459 | 0.538 (0.846) | 0.471 | (0.518) | | spam (2) | 0.505 | 0.528 | 0.504 | 0.507 | 0.464 (0.973) | 0.74 | 0.597 (0.983) | 0.441 | (0.978) | | wellformedquery (2) | 0.407 | 0.333 | 0.333 | 0.335 | 0.491 (0.769) | 0.334 | 0.429 (0.815) | 0.361 | (0.718) | | manifesto (56) | 0.084 | 0.102 | 0.182 | 0.17 | 0.187 (0.376) | 0.258 | 0.256 (0.408) | 0.147 | (0.331) | | capsotu (21) | 0.34 | 0.479 | 0.523 | 0.502 | 0.477 (0.664) | 0.603 | 0.502 (0.686) | 0.472 | (0.644) | These numbers indicate zeroshot performance, as no data from these datasets was added in the training mix. Note that models without a "`-c`" in the title were evaluated twice: one run without any data from these 28 datasets to test pure zeroshot performance (the first number in the respective column) and the final run including up to 500 training data points per class from each of the 28 datasets (the second number in brackets in the column, "fewshot"). No model was trained on test data. Details on the different datasets are available here: https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/v1_human_data/datasets_overview.csv ## When to use which model - **deberta-v3-zeroshot vs. roberta-zeroshot**: deberta-v3 performs clearly better than roberta, but it is a bit slower. roberta is directly compatible with Hugging Face's production inference TEI containers and flash attention. These containers are a good choice for production use-cases. tl;dr: For accuracy, use a deberta-v3 model. If production inference speed is a concern, you can consider a roberta model (e.g. in a TEI container and [HF Inference Endpoints](https://ui.endpoints.huggingface.co/catalog)). - **commercial use-cases**: models with "`-c`" in the title are guaranteed to be trained on only commercially-friendly data. Models without a "`-c`" were trained on more data and perform better, but include data with non-commercial licenses. Legal opinions diverge if this training data affects the license of the trained model. For users with strict legal requirements, the models with "`-c`" in the title are recommended. - **Multilingual/non-English use-cases**: use [bge-m3-zeroshot-v2.0](https://huggingface.co/MoritzLaurer/bge-m3-zeroshot-v2.0) or [bge-m3-zeroshot-v2.0-c](https://huggingface.co/MoritzLaurer/bge-m3-zeroshot-v2.0-c). Note that multilingual models perform worse than English-only models. You can therefore also first machine translate your texts to English with libraries like [EasyNMT](https://github.com/UKPLab/EasyNMT) and then apply any English-only model to the translated data. Machine translation also facilitates validation in case your team does not speak all languages in the data. - **context window**: The `bge-m3` models can process up to 8192 tokens. The other models can process up to 512. Note that longer text inputs both make the mode slower and decrease performance, so if you're only working with texts of up to 400~ words / 1 page, use e.g. a deberta model for better performance. - The latest updates on new models are always available in the [Zeroshot Classifier Collection](https://huggingface.co/collections/MoritzLaurer/zeroshot-classifiers-6548b4ff407bb19ff5c3ad6f). ## Reproduction Reproduction code is available in the `v2_synthetic_data` directory here: https://github.com/MoritzLaurer/zeroshot-classifier/tree/main ## Limitations and bias The model can only do text classification tasks. 
Biases can come from the underlying foundation model, the human NLI training data and the synthetic data generated by Mixtral. ## License The foundation model was published under the MIT license. The licenses of the training data vary depending on the model, see above. ## Citation This model is an extension of the research described in this [paper](https://arxiv.org/pdf/2312.17543.pdf). If you use this model academically, please cite: ``` @misc{laurer_building_2023, title = {Building {Efficient} {Universal} {Classifiers} with {Natural} {Language} {Inference}}, url = {http://arxiv.org/abs/2312.17543}, doi = {10.48550/arXiv.2312.17543}, abstract = {Generative Large Language Models (LLMs) have become the mainstream choice for fewshot and zeroshot learning thanks to the universality of text generation. Many users, however, do not need the broad capabilities of generative LLMs when they only want to automate a classification task. Smaller BERT-like models can also learn universal tasks, which allow them to do any text classification task without requiring fine-tuning (zeroshot classification) or to learn new tasks with only a few examples (fewshot), while being significantly more efficient than generative LLMs. This paper (1) explains how Natural Language Inference (NLI) can be used as a universal classification task that follows similar principles as instruction fine-tuning of generative LLMs, (2) provides a step-by-step guide with reusable Jupyter notebooks for building a universal classifier, and (3) shares the resulting universal classifier that is trained on 33 datasets with 389 diverse classes. Parts of the code we share has been used to train our older zeroshot classifiers that have been downloaded more than 55 million times via the Hugging Face Hub as of December 2023. Our new classifier improves zeroshot performance by 9.4\%.}, urldate = {2024-01-05}, publisher = {arXiv}, author = {Laurer, Moritz and van Atteveldt, Wouter and Casas, Andreu and Welbers, Kasper}, month = dec, year = {2023}, note = {arXiv:2312.17543 [cs]}, keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language}, } ``` ### Ideas for cooperation or questions? If you have questions or ideas for cooperation, contact me at moritz{at}huggingface{dot}co or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/) ### Flexible usage and "prompting" You can formulate your own hypotheses by changing the `hypothesis_template` of the zeroshot pipeline. Similar to "prompt engineering" for LLMs, you can test different formulations of your `hypothesis_template` and verbalized classes to improve performance. ```python from transformers import pipeline text = "Angela Merkel is a politician in Germany and leader of the CDU" # formulation 1 hypothesis_template = "This text is about {}" classes_verbalized = ["politics", "economy", "entertainment", "environment"] # formulation 2 depending on your use-case hypothesis_template = "The topic of this text is {}" classes_verbalized = ["political activities", "economic policy", "entertainment or music", "environmental protection"] # test different formulations zeroshot_classifier = pipeline("zero-shot-classification", model="MoritzLaurer/deberta-v3-large-zeroshot-v2.0") # change the model identifier here output = zeroshot_classifier(text, classes_verbalized, hypothesis_template=hypothesis_template, multi_label=False) print(output) ```
bosonai/Higgs-Llama-3-70B
bosonai
"2024-06-10T04:13:02Z"
2,234
166
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-05T20:10:04Z"
--- license: other --- # Higgs-Llama-3-70B Higgs-Llama-3-70B is post-trained from [meta-llama/Meta-Llama-3-70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B), specially tuned for role-playing while being competitive in general-domain instruction-following and reasoning. We perform supervised fine-tuning with our in-house instruction-following and chat datasets. Afterwards, we construct preference pairs with a semi-automated pipeline that relies on both human-labelers and our private LLMs. We conduct iterative preference optimization to align the model. During alignment, we adopted a special strategy to align the model’s behavior with the system message. Compared with other instruct models, Higgs models follow their roles more closely. See our [release blog](https://boson.ai/higgs-opensource/). ## Evaluation All benchmarks lead to eventual overfitting, including those for LLMs. Training on data, particularly beneficial for benchmarks typically does not improve (or even worsen) role-playing performance. We worked to exclude benchmark data, including their training examples, from our fine-tuning data. We highlight our results on two new and challenging benchmarks: [MMLU-Pro](https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro) and [Arena-Hard](https://github.com/lm-sys/arena-hard-auto). MMLU-Pro extends the popular MMLU benchmark. We believe that it suffers from less overfitting by other released models as well, as it was released only recently (it was released after our models finished training). ### MMLU-Pro <table class="col-12 col-md-6" width="100px"> <tr> <td><b>Model</b></td> <td><b>MMLU-Pro</b></td> </tr> <tr> <td>GPT-4o</td> <td>72.6</td> </tr> <tr> <td>Gemini-1.5-Pro</td> <td>69.0</td> </tr> <tr> <td>Claude-3-Opus</td> <td>68.5</td> </tr> <tr> <td>GPT-4-Turbo</td> <td>63.7</td> </tr> <tr style="font-weight: bold"> <td>Higgs-Llama-3-70B</td> <td>63.2</td> </tr> <tr> <td>Gemini-1.5-Flash</td> <td>59.1</td> </tr> <tr> <td>Claude-3-Sonnet</td> <td>56.8</td> </tr> <tr> <td>Llama-3-70B-Instruct</td> <td>56.2</td> </tr> </table> ### Arena-Hard <table class="col-12 col-md-6"> <tr> <td><b>Model</b></td> <td><b>Arena-Hard</b></td> </tr> <tr> <td>GPT-4o</td> <td>79.5</td> </tr> <tr> <td>Gemini-1.5-Pro</td> <td>72.0</td> </tr> <tr> <td>Claude-3-Opus</td> <td>60.4</td> </tr> <tr style="font-weight: bold"> <td>Higgs-Llama-3-70B</td> <td>49.6</td> </tr> <tr> <td>Gemini-1.5-Flash</td> <td>49.6</td> </tr> <tr> <td>Claude-3-Sonnet</td> <td>46.8</td> </tr> <tr> <td>Claude-3-Haiku</td> <td>41.5</td> </tr> <tr> <td>Llama-3-70B-Instruct</td> <td>41.1</td> </tr> <tr> <td>GPT-4-0613</td> <td>37.9</td> </tr> <tr> <td>Mistral-Large</td> <td>37.7</td> </tr> </table> ## Overall Results In the following, we compare our model's performance with `gpt-4o` and `Llama-3-70B-Instruct` on [MMLU-Pro](https://github.com/TIGER-AI-Lab/MMLU-Pro), [Arena-Hard](https://github.com/lm-sys/arena-hard-auto/tree/main), [AlpacaEval 2.0 LC](https://github.com/tatsu-lab/alpaca_eval), MMLU, GPQA and DROP. For MMLU, GPQA and DROP, we adopt [openai/simple-evals](https://github.com/openai/simple-evals) for evaluation. For the other benchmarks, we evaluate via the official implementation. 
<div style="overflow: auto"> <table> <tr> <th></th> <td><b>MMLU-Pro</td> <td><b>Arena-Hard</td> <td><b>AlpacaEval <br> 2.0 LC</b></td> <td><b>MMLU</b></td> <td><b>GPQA</b></td> <td><b>DROP <br> (F1,3-shot)</b></td> </tr> <tr> <td>GPT-4o</td> <td>72.6</td> <td>79.5*</td> <td>57.5</td> <td>87.2</td> <td>49.9</td> <td>83.7</td> </tr> <tr style="font-weight: bold"> <td>Higgs-Llama-3-70B</td> <td>63.2</td> <td>49.6</td> <td>38.6</td> <td>80.8</td> <td>42.1</td> <td>81.6</td> </tr> <tr> <td>Llama-3-70B-Instruct*</td> <td>56.2</td> <td>41.1</td> <td>34.4</td> <td>80.2</td> <td>41.3</td> <td>81.4</td> </tr> </table> </div> <small>*For Llama-3-70B-Instruct, the MMLU-Pro number is copied from the [MMLU-Pro leaderboard](https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro); the Arena-Hard numbers are copied from the [leaderboard updated on 5/21](https://github.com/lm-sys/arena-hard-auto/tree/main?tab=readme-ov-file#full-leaderboard-updated-0521) while we run gpt-4o ourselves; and the MMLU/GPQA/DROP are copied from [simple-evals](https://github.com/openai/simple-evals).</small> ## How to use We use the same prompting format as in Meta-Llama-3-70B-Instruct. ### Use with transformers See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "bosonai/Higgs-Llama-3-70B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are an AI assistant that speaks in the style of Sheldon Cooper. You are arguing with the user and is trying to prove the opposite of what the user said."}, {"role": "user", "content": "The earth is round."}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=[ pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"), pipeline.tokenizer.eos_token_id, ], do_sample=True, temperature=1.0, top_p=0.95, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## License [Our license](https://huggingface.co/bosonai/Higgs-Llama-3-70B/blob/main/LICENSE) is based on Meta's LLama 3 Community License.
wenge-research/yayi-7b-llama2
wenge-research
"2023-09-13T02:25:50Z"
2,233
9
transformers
[ "transformers", "pytorch", "llama", "text-generation", "yayi", "zh", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-21T10:10:18Z"
--- language: - zh - en pipeline_tag: text-generation tags: - yayi --- # 雅意大模型 ## 介绍 [雅意大模型](https://www.wenge.com/yayi/index.html)在百万级人工构造的高质量领域数据上进行指令微调得到,训练数据覆盖媒体宣传、舆情分析、公共安全、金融风控、城市治理等五大领域,上百种自然语言指令任务。雅意大模型从预训练初始化权重到领域模型的迭代过程中,我们逐步增强了它的中文基础能力和领域分析能力,并增加了多轮对话和部分插件能力。同时,经过数百名用户内测过程中持续不断的人工反馈优化,我们进一步提升了模型性能和安全性。 通过雅意大模型的开源为促进中文预训练大模型开源社区的发展,贡献自己的一份力量,通过开源,与每一位合作伙伴共建雅意大模型生态。 *News: 🔥 雅意大模型已开源基于 LLaMA 2 的中文优化模型版本,探索适用于中文多领域任务的最新实践。* ## 模型地址 | 模型名称 | 🤗HF模型标识 | 下载地址 | | --------- | --------- | --------- | | YaYi-7B | wenge-research/yayi-7b | [模型下载](https://huggingface.co/wenge-research/yayi-7b) | | YaYi-7B-Llama2 | wenge-research/yayi-7b-llama2 | [模型下载](https://huggingface.co/wenge-research/yayi-7b-llama2) | | YaYi-13B-Llama2 | wenge-research/yayi-13b-llama2 | [模型下载](https://huggingface.co/wenge-research/yayi-13b-llama2) | | YaYi-70B-Llama2 | wenge-research/yayi-70b-llama2 | [模型下载](https://huggingface.co/wenge-research/yayi-70b-llama2) | 详情请参考我们的 [💻Github Repo](https://github.com/wenge-research/YaYi)。 ## 运行方式 ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer, GenerationConfig from transformers import StoppingCriteria, StoppingCriteriaList pretrained_model_name_or_path = "wenge-research/yayi-7b-llama2" tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name_or_path) model = LlamaForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=False) # Define the stopping criteria class KeywordsStoppingCriteria(StoppingCriteria): def __init__(self, keywords_ids:list): self.keywords = keywords_ids def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: if input_ids[0][-1] in self.keywords: return True return False stop_words = ["<|End|>", "<|YaYi|>", "<|Human|>", "</s>"] stop_ids = [tokenizer.encode(w)[-1] for w in stop_words] stop_criteria = KeywordsStoppingCriteria(stop_ids) # inference prompt = "你是谁?" formatted_prompt = f"""<|System|>: You are a helpful, respectful and honest assistant named YaYi developed by Beijing Wenge Technology Co.,Ltd. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. <|Human|>: {prompt} <|YaYi|>: """ inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device) eos_token_id = tokenizer("<|End|>").input_ids[0] generation_config = GenerationConfig( eos_token_id=eos_token_id, pad_token_id=eos_token_id, do_sample=True, max_new_tokens=256, temperature=0.3, repetition_penalty=1.1, no_repeat_ngram_size=0 ) response = model.generate(**inputs, generation_config=generation_config, stopping_criteria=StoppingCriteriaList([stop_criteria])) response = [response[0][len(inputs.input_ids[0]):]] response_str = tokenizer.batch_decode(response, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0] print(response_str) ``` --- # YaYi ## Introduction [YaYi](https://www.wenge.com/yayi/index.html) was fine-tuned on millions of artificially constructed high-quality domain data. 
This training data covers five key domains: media publicity, public opinion analysis, public safety, financial risk control, and urban governance, encompassing over a hundred natural language instruction tasks. Throughout the iterative development process of the YaYi, starting from pre-training initialization weights and progressing to domain-specific model, we have steadily enhanced its foundational Chinese language capabilities and domain analysis capabilities. We've also introduced multi-turn conversation enhancements and integrated various plug-in capabilities. Furthermore, through continuous manual feedback and optimization from hundreds of users during the internal testing phase, we've meticulously refined the model's performance and security. By open-sourcing the YaYi model, we will contribute our own efforts to the development of the Chinese pre-trained large language model open-source community. Through this open-source initiative, we seek to collaborate with every partner to build the YaYi model ecosystem together. *News: 🔥 YaYi has open sourced the Chinese optimization model version based on LLaMA 2 to explore the latest practices suitable for Chinese multi-domain tasks.* ## Model download | Model | 🤗HF Model Name | Download Links | | --------- | --------- | --------- | | YaYi-7B | wenge-research/yayi-7b | [Download](https://huggingface.co/wenge-research/yayi-7b) | | YaYi-7B-Llama2 | wenge-research/yayi-7b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-7b-llama2) | | YaYi-13B-Llama2 | wenge-research/yayi-13b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-13b-llama2) | | YaYi-70B-Llama2 | wenge-research/yayi-70b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-70b-llama2) | For more details, please refer to our [💻Github Repo](https://github.com/wenge-research/YaYi)。 ## Run ```python import torch from transformers import LlamaForCausalLM, LlamaTokenizer, GenerationConfig from transformers import StoppingCriteria, StoppingCriteriaList pretrained_model_name_or_path = "wenge-research/yayi-7b-llama2" tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name_or_path) model = LlamaForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=False) # Define the stopping criteria class KeywordsStoppingCriteria(StoppingCriteria): def __init__(self, keywords_ids:list): self.keywords = keywords_ids def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: if input_ids[0][-1] in self.keywords: return True return False stop_words = ["<|End|>", "<|YaYi|>", "<|Human|>", "</s>"] stop_ids = [tokenizer.encode(w)[-1] for w in stop_words] stop_criteria = KeywordsStoppingCriteria(stop_ids) # inference prompt = "你是谁?" formatted_prompt = f"""<|System|>: You are a helpful, respectful and honest assistant named YaYi developed by Beijing Wenge Technology Co.,Ltd. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. 
<|Human|>: {prompt} <|YaYi|>: """ inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device) eos_token_id = tokenizer("<|End|>").input_ids[0] generation_config = GenerationConfig( eos_token_id=eos_token_id, pad_token_id=eos_token_id, do_sample=True, max_new_tokens=256, temperature=0.3, repetition_penalty=1.1, no_repeat_ngram_size=0 ) response = model.generate(**inputs, generation_config=generation_config, stopping_criteria=StoppingCriteriaList([stop_criteria])) response = [response[0][len(inputs.input_ids[0]):]] response_str = tokenizer.batch_decode(response, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0] print(response_str) ```
cognitivecomputations/dolphin-2.9.1-yi-1.5-9b
cognitivecomputations
"2024-05-20T14:35:39Z"
2,233
23
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "conversational", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:01-ai/Yi-1.5-9B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-18T16:46:08Z"
--- license: apache-2.0 base_model: 01-ai/Yi-1.5-9B tags: - generated_from_trainer - axolotl datasets: - cognitivecomputations/Dolphin-2.9 - teknium/OpenHermes-2.5 - m-a-p/CodeFeedback-Filtered-Instruction - cognitivecomputations/dolphin-coder - cognitivecomputations/samantha-data - microsoft/orca-math-word-problems-200k - Locutusque/function-calling-chatml - internlm/Agent-FLAN --- # Dolphin 2.9.1 Yi 1.5 9b 🐬 Curated and trained by Eric Hartford, Lucas Atkins, and Fernando Fernandes, and Cognitive Computations This is our most spectacular outcome ever. FFT, all parameters, 16bit. 70.9 MMLU on 9b! And it talks like a dream. Although the max positional embeddings is 4k, we used rope theta of 1000000.0 and we trained with sequence length 12k. We plan to train on the upcoming 32k version as well. [![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations) Discord: https://discord.gg/cognitivecomputations <img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" /> Our appreciation for the sponsors of Dolphin 2.9.1: - [Crusoe Cloud](https://crusoe.ai/) - provided excellent on-demand 8xH100 node - [OnDemand](https://on-demand.io/) - provided inference sponsorship This model is based on Yi-1.5-9b, and is governed by apache 2.0 license. The base model has 4k context, but we used rope theta of 1000000.0 and the full-weight fine-tuning was with 12k sequence length. Dolphin 2.9.1 uses ChatML prompt template format. example: ``` <|im_start|>system You are Dolphin, a helpful AI assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling. Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly. Dolphin is licensed according to apache 2.0 license. We grant permission for any use, including commercial. Dolphin was trained on data generated from GPT4, among other models. 
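Below is a minimal inference sketch using Hugging Face Transformers, assuming the repository's tokenizer ships the ChatML chat template shown above; adjust the dtype and device settings to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/dolphin-2.9.1-yi-1.5-9b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the ocean."},
]
# apply_chat_template renders the ChatML format shown above when the tokenizer defines it
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```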
## Evals ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/tF9uD2W2yWODNdc--P68I.png) ## Training [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: 01-ai/Yi-1.5-9B model_type: LlamaForCausalLM tokenizer_type: LlamaTokenizer trust_remote_code: true # load_in_8bit: false # load_in_4bit: true # strict: false # adapter: qlora # lora_modules_to_save: [embed_tokens, lm_head] # lora_r: 32 # lora_alpha: 16 # lora_dropout: 0.05 # lora_target_linear: True # lora_fan_in_fan_out: datasets: - path: /workspace/datasets/dolphin-2.9/dolphin201-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-translate-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/dolphin-coder-codegen-sharegpt2.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/not_samantha_norefusals.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/Orca-Math-resort-unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/agent_instruct_react_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_instruct_j1s1_3k_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_negative_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_react_10p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/toolbench_tflan_cot_30p_unfiltered.jsonl type: sharegpt conversation: chatml - path: /workspace/datasets/dolphin-2.9/openhermes200k_unfiltered.jsonl type: sharegpt conversation: chatml chat_template: chatml dataset_prepared_path: yi34b val_set_size: 0.03 output_dir: ./out-yi sequence_len: 12000 sample_packing: true pad_to_sequence_len: true wandb_project: dolphin-2.9-yi-34b wandb_watch: wandb_run_id: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 2 num_epochs: 3 optimizer: adamw_8bit lr_scheduler: cosine learning_rate: 1e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: true gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: # resume_from_checkpoint: /workspace/axolotl/dbrx-checkpoint logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 10 evals_per_epoch: 4 eval_table_size: saves_per_epoch: 4 save_total_limit: 2 save_steps: debug: deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json weight_decay: 0.05 fsdp: fsdp_config: special_tokens: bos_token: "<|startoftext|>" eos_token: "<|im_end|>" pad_token: "<unk>" unk_token: "<unk>" tokens: - "<|im_start|>" ``` </details><br> # out-yi This model is a fine-tuned version of [01-ai/Yi-1.5-9B](https://huggingface.co/01-ai/Yi-1.5-9B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 0.4396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 10 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.6332 | 0.0024 | 1 | 0.6469 | | 0.4811 | 0.2499 | 106 | 0.4739 | | 0.4465 | 0.4997 | 212 | 0.4547 | | 0.4472 | 0.7496 | 318 | 0.4480 | | 0.4373 | 0.9994 | 424 | 0.4429 | | 0.4147 | 1.2384 | 530 | 0.4432 | | 0.3879 | 1.4882 | 636 | 0.4400 | | 0.3872 | 1.7381 | 742 | 0.4371 | | 0.4044 | 1.9879 | 848 | 0.4344 | | 0.3509 | 2.2269 | 954 | 0.4410 | | 0.3628 | 2.4767 | 1060 | 0.4401 | | 0.3652 | 2.7266 | 1166 | 0.4397 | | 0.3674 | 2.9764 | 1272 | 0.4396 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.2.2+cu121 - Datasets 2.15.0 - Tokenizers 0.19.1
OpenBuddy/openbuddy-falcon-180b-v12-preview0
OpenBuddy
"2023-09-25T10:38:01Z"
2,232
0
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-20T11:28:35Z"
Entry not found
GraydientPlatformAPI/incursios-meme-xl
GraydientPlatformAPI
"2024-06-10T03:31:24Z"
2,232
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-10T03:04:15Z"
Entry not found
StudentLLM/Alpagasus-2-13b-QLoRA-merged
StudentLLM
"2023-09-15T07:07:30Z"
2,231
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-02T04:38:10Z"
--- license: other language: - en --- ## Model Details This is an unofficial implementation of "[AlpaGasus: Training a better Alpaca with Fewer Data.](https://github.com/Lichang-Chen/AlpaGasus)" with [LLaMA2](https://huggingface.co/meta-llama/Llama-2-13b-hf) & QLoRA! Training code is available at our [repo](https://github.com/gauss5930/AlpaGasus2-QLoRA). - **Developed by:** [Yunsang Yoo](https://huggingface.co/ryan0712) and [Hyunwoo Ko](https://huggingface.co/Cartinoe5930) - **Model type:** Auto-regressive model - **Language(s):** English - **Base Model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) - **License**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) ### Training dataset "StudentLLM/Alpagasus-2-13b-QLoRA-merged" used [gpt4life](https://github.com/gpt4life/alpagasus)'s gpt-3.5-turbo filtered dataset, 'alpaca_t45.json'. Configuration of the dataset is as follows: ``` { 'instruction': Give the instruction describing the question. 'input': Occasionally present, detailed instructions accompany the question if available. 'output': Give answers to questions. } . . . ``` ### Prompt Template: Alpaca style prompt ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: <prompt> (without the <>) ### Input: <prompt> (if input exists) ### Response: ``` ### Fine-tuning Procedure Our model was finetuned using QLoRA on a single A100 80GB GPU. Training details are described in [repo](https://github.com/gauss5930/AlpaGasus2-QLoRA). ### Benchmark Metrics "StudentLLM/Alpagasus-2-13b-QLoRA-merged" model performance is uploaded on Huggingface's [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The model was evaluated on the tasks specified in HF's Open LLM Leaderboard (ARC, HellaSwag, MMLU, TruthfulQA). | Metric | Value | |-----------------------|-------| | Avg. | 59.34 | | MMLU | 55.27 | | ARC | 61.09 | | HellaSwag | 82.46 | | TruthfulQA | 38.53 | ### LLM Evaluation We tried to follow the evaluation metric introduced by the AlpaGasus paper. During the process, we consulted the code by [gpt4life](https://github.com/gpt4life/alpagasus). We used OpenAI's gpt-3.5-turbo as the evaluator model, and Alpaca2-LoRA-13B (it doesn't exist now...) as the comparison model. For more detailed information, please refer to our Github [repo](https://github.com/gauss5930/AlpaGasus2-QLoRA). The evaluation result of AlpaGasus2-QLoRA is as follows: ![results](https://user-images.githubusercontent.com/80087878/262848860-8742bcc4-1bbc-449f-8bcf-660c08fcc10d.png) ### How to use To use "StudentLLM/Alpagasus-2-13b-QLoRA-merged", please use the following code! The use of the 7B model is the same! ```python from peft import PeftModel, PeftConfig from transformers import AutoModelForCausalLM, AutoTokenizer import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") config = PeftConfig.from_pretrained("StudentLLM/Alpagasus-2-13B-QLoRA") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", use_auth_token="your_HuggingFace_token").to(device) model = PeftModel.from_pretrained(model, "StudentLLM/Alpagasus-2-13B-QLoRA") tokenizer = AutoTokenizer.from_pretrained("StudentLLM/Alpagasus-2-13B-QLoRA") tokenizer.pad_token = tokenizer.eos_token input_data = "Please tell me 3 ways to relieve stress." # You can enter any questions!!
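# Optional illustrative step (not in the original snippet): the "Prompt Template" section
# above describes an Alpaca-style format; you can wrap the question in it before tokenizing.
input_data = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{input_data}\n\n### Response:\n"
)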
model_inputs = tokenizer(input_data, return_tensors='pt').to(device) model_output = model.generate(**model_inputs, max_length=256) model_output = tokenizer.decode(model_output[0], skip_special_tokens=True) print(model_output) ``` ### Citations ```bibtex @article{chen2023alpagasus, title={AlpaGasus: Training a Better Alpaca with Fewer Data}, author={Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin}, journal={arXiv preprint arXiv:2307.08701}, year={2023} } ```
bhavinjawade/SOLAR-10B-OrcaDPO-Jawade
bhavinjawade
"2024-01-14T08:05:30Z"
2,231
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:Intel/orca_dpo_pairs", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-06T21:29:56Z"
--- license: mit datasets: - Intel/orca_dpo_pairs --- ## SOLAR-10B-OrcaDPO-Jawade ### Overview This model is an instruction-finetuned version of the `upstage/SOLAR-10.7B-Instruct-v1.0` model, trained on the Intel DPO Orca dataset using LoRA. Note that the SOLAR-10.7B paper states the original model was already aligned on the Intel ORCA DPO pairs. Retraining with DPO and LoRA shows a slight (<1%) improvement on OpenLLM Leaderboard benchmarks over `SOLAR 10.7B-Instruct` and a significant improvement over `SOLAR 10.7B`. ![model_card_image](SOLAR_ORCA.png) ## How to Use This Model To use the model `bhavinjawade/SOLAR-10B-OrcaDPO-Jawade`, follow these steps: 1. **Import and Load the Model and Tokenizer** Begin by importing the model and tokenizer. Load them using the `from_pretrained` method. ```python import transformers from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade") tokenizer = AutoTokenizer.from_pretrained("bhavinjawade/SOLAR-10B-OrcaDPO-Jawade") ``` 2. **Format the Prompt** Format the chat input as a list of messages, each with a role ('system' or 'user') and content. ```python message = [ {"role": "system", "content": "You are a helpful assistant chatbot."}, {"role": "user", "content": "Is the universe real? or is it a simulation? whats your opinion?"} ] prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) ``` 3. **Create a Pipeline** Set up a pipeline for text generation with the loaded model and tokenizer. ```python pipeline = transformers.pipeline( "text-generation", model=model, tokenizer=tokenizer ) ``` 4. **Generate Text** Use the pipeline to generate a sequence of text based on the prompt. You can adjust parameters like temperature and top_p for different styles of responses. ```python sequences = pipeline( prompt, do_sample=True, temperature=0.7, top_p=0.9, num_return_sequences=1, max_length=200, ) print(sequences[0]['generated_text']) ``` This setup allows you to utilize the capabilities of the **bhavinjawade/SOLAR-10B-OrcaDPO-Jawade** model for generating responses to chat inputs. ### License - **Type**: MIT License - **Details**: This license permits reuse, modification, and distribution for both private and commercial purposes under the terms of the MIT License. ### Model Details - **Model Name**: SOLAR-10.7B-Instruct-v1.0 - **Organization**: Upstage - **Training Dataset**: Intel/orca_dpo_pairs - **Technique Used**: LoRA (Low-Rank Adaptation) ### Contact Information - https://bhavinjawade.github.io
Legalaz/5EqCQ4qk5NACwYeGK5usKuGf4m24C2WJutgNsGZ6v9z9Puro_vgg
Legalaz
"2024-02-14T15:49:03Z"
2,231
0
keras
[ "keras", "region:us" ]
null
"2024-02-02T10:13:52Z"
Entry not found
mradermacher/LLAMA3-HER-MODEL-GGUF
mradermacher
"2024-06-07T22:38:55Z"
2,231
0
transformers
[ "transformers", "gguf", "unsloth", "trl", "sft", "en", "base_model:Skorcht/LLAMA3-HER-MODEL", "endpoints_compatible", "region:us" ]
null
"2024-06-07T22:10:50Z"
--- base_model: Skorcht/LLAMA3-HER-MODEL language: - en library_name: transformers quantized_by: mradermacher tags: - unsloth - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Skorcht/LLAMA3-HER-MODEL <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/LLAMA3-HER-MODEL-GGUF/resolve/main/LLAMA3-HER-MODEL.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
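## Example usage

A minimal sketch of running one of the static quants listed above with `llama-cpp-python` (this assumes `llama-cpp-python` and `huggingface_hub` are installed; any of the GGUF files from the table can be swapped in for the Q4_K_M file):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quants from the table above, then load it with the llama.cpp bindings.
gguf_path = hf_hub_download(
    repo_id="mradermacher/LLAMA3-HER-MODEL-GGUF",
    filename="LLAMA3-HER-MODEL.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-sentence greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```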
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
microsoft/resnet-34
microsoft
"2023-06-26T19:49:23Z"
2,230
6
transformers
[ "transformers", "pytorch", "tf", "safetensors", "resnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1512.03385", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2022-03-16T15:41:51Z"
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k --- # ResNet-34 v1.5 ResNet model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by He et al. Disclaimer: The team releasing ResNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description ResNet (Residual Network) is a convolutional neural network that democratized the concepts of residual learning and skip connections. This enables to train much deeper models. This is ResNet v1.5, which differs from the original model: in the bottleneck blocks which require downsampling, v1 has stride = 2 in the first 1x1 convolution, whereas v1.5 has stride = 2 in the 3x3 convolution. This difference makes ResNet50 v1.5 slightly more accurate (\~0.5% top1) than v1, but comes with a small performance drawback (~5% imgs/sec) according to [Nvidia](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/resnet_50_v1_5_for_pytorch). ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/resnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=resnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ResNetForImageClassification import torch from datasets import load_dataset dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/resnet-34") model = ResNetForImageClassification.from_pretrained("microsoft/resnet-34") inputs = feature_extractor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/resnet). ### BibTeX entry and citation info ```bibtex @inproceedings{he2016deep, title={Deep residual learning for image recognition}, author={He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian}, booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition}, pages={770--778}, year={2016} } ```
LanguageBind/LanguageBind_Video
LanguageBind
"2024-02-01T06:57:36Z"
2,230
2
transformers
[ "transformers", "pytorch", "LanguageBindVideo", "zero-shot-image-classification", "arxiv:2310.01852", "license:mit", "endpoints_compatible", "region:us" ]
zero-shot-image-classification
"2023-10-06T09:07:15Z"
--- license: mit --- <p align="center"> <img src="https://s11.ax1x.com/2024/02/01/pFMDAm9.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/pdf/2310.01852.pdf">【ICLR 2024 🔥】LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment</a></h2> <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for latest update. </h2> ## 📰 News * **[2024.01.27]** 👀👀👀 Our [MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA) is released! A sparse model with 3B parameters outperformed the dense model with 7B parameters. * **[2024.01.16]** 🔥🔥🔥 Our LanguageBind has been accepted at ICLR 2024! We earn the score of 6(3)8(6)6(6)6(6) [here](https://openreview.net/forum?id=QmZKc7UZCy&noteId=OgsxQxAleA). * **[2023.12.15]** 💪💪💪 We expand the 💥💥💥 VIDAL dataset and now have **10M video-text data**. We launch **LanguageBind_Video 1.5**, checking our [model zoo](#-model-zoo). * **[2023.12.10]** We expand the 💥💥💥 VIDAL dataset and now have **10M depth and 10M thermal data**. We are in the process of uploading thermal and depth data on [Hugging Face](https://huggingface.co/datasets/LanguageBind/VIDAL-Depth-Thermal) and expect the whole process to last 1-2 months. * **[2023.11.27]** 🔥🔥🔥 We have updated our [paper](https://arxiv.org/abs/2310.01852) with emergency zero-shot results., checking our ✨ [results](#emergency-results). * **[2023.11.26]** 💥💥💥 We have open-sourced all textual sources and corresponding YouTube IDs [here](DATASETS.md). * **[2023.11.26]** 📣📣📣 We have open-sourced fully fine-tuned **Video & Audio**, achieving improved performance once again, checking our [model zoo](#-model-zoo). * **[2023.11.22]** We are about to release a fully fine-tuned version, and the **HUGE** version is currently undergoing training. * **[2023.11.21]** 💥 We are releasing sample data in [DATASETS.md](DATASETS.md) so that individuals who are interested can further modify the code to train it on their own data. * **[2023.11.20]** 🚀🚀🚀 [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) builds a large visual-language model to achieve 🎉SOTA performances based on LanguageBind encoders. * **[2023.10.23]** 🎶 LanguageBind-Audio achieves 🎉🎉🎉**state-of-the-art (SOTA) performance on 5 datasets**, checking our ✨ [results](#multiple-modalities)! * **[2023.10.14]** 😱 Released a stronger LanguageBind-Video, checking our ✨ [results](#video-language)! The video checkpoint **have updated** on Huggingface Model Hub! * **[2023.10.10]** We provide sample data, which can be found in [assets](assets), and [emergency zero-shot usage](#emergency-zero-shot) is described. * **[2023.10.07]** The checkpoints are available on 🤗 [Huggingface Model](https://huggingface.co/LanguageBind). * **[2023.10.04]** Code and [demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) are available now! Welcome to **watch** 👀 this repository for the latest updates. ## 😮 Highlights ### 💡 High performance, but NO intermediate modality required LanguageBind is a **language-centric** multimodal pretraining approach, **taking the language as the bind across different modalities** because the language modality is well-explored and contains rich semantics. * The following first figure shows the architecture of LanguageBind. LanguageBind can be easily extended to segmentation, detection tasks, and potentially to unlimited modalities. 
### ⚡️ A multimodal, fully aligned and voluminous dataset We propose **VIDAL-10M**, **10 Million data** with **V**ideo, **I**nfrared, **D**epth, **A**udio and their corresponding **L**anguage, which greatly expands the data beyond visual modalities. * The second figure shows our proposed VIDAL-10M dataset, which includes five modalities: video, infrared, depth, audio, and language. ### 🔥 Multi-view enhanced description for training We make multi-view enhancements to language. We produce multi-view description that combines **meta-data**, **spatial**, and **temporal** to greatly enhance the semantic information of the language. In addition we further **enhance the language with ChatGPT** to create a good semantic space for each modality aligned language. ## 🤗 Demo * **Local demo.** Highly recommend trying out our web demo, which incorporates all features currently supported by LanguageBind. ```bash python gradio_app.py ``` * **Online demo.** We provide the [online demo](https://huggingface.co/spaces/LanguageBind/LanguageBind) in Huggingface Spaces. In this demo, you can calculate the similarity of modalities to language, such as audio-to-language, video-to-language, and depth-to-image. ## 🛠️ Requirements and Installation * Python >= 3.8 * Pytorch >= 1.13.1 * CUDA Version >= 11.6 * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/LanguageBind cd LanguageBind pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116 pip install -r requirements.txt ``` ## 🐳 Model Zoo The names in the table represent different encoder models. For example, `LanguageBind/LanguageBind_Video_FT` represents the fully fine-tuned version, while `LanguageBind/LanguageBind_Video` represents the LoRA-tuned version. You can freely replace them in the recommended [API usage](#-api). We recommend using the fully fine-tuned version, as it offers stronger performance. 
<div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Modality</th><th>LoRA tuning</th><th>Fine-tuning</th> </tr> <tr align="center"> <td>Video</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">LanguageBind_Video</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">LanguageBind_Video_FT</a></td> </tr> <tr align="center"> <td>Audio</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio">LanguageBind_Audio</a></td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Audio_FT">LanguageBind_Audio_FT</a></td> </tr> <tr align="center"> <td>Depth</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Depth">LanguageBind_Depth</a></td><td>-</td> </tr> <tr align="center"> <td>Thermal</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Thermal">LanguageBind_Thermal</a></td><td>-</td> </tr> </table> </div> <div align="center"> <table border="1" width="100%"> <tr align="center"> <th>Version</th><th>Tuning</th><th>Model size</th><th>Num_frames</th><th>HF Link</th><th>MSR-VTT</th><th>DiDeMo</th><th>ActivityNet</th><th>MSVD</th> </tr> <tr align="center"> <td>LanguageBind_Video</td><td>LoRA</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video">Link</a></td><td>42.6</td><td>37.8</td><td>35.1</td><td>52.2</td> </tr> <tr align="center"> <td>LanguageBind_Video_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_FT">Link</a></td><td>42.7</td><td>38.1</td><td>36.9</td><td>53.5</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_V1.5_FT">Link</a></td><td>42.8</td><td>39.7</td><td>38.4</td><td>54.1</td> </tr> <tr align="center"> <td>LanguageBind_Video_V1.5_FT</td><td>Full-tuning</td><td>Large</td><td>12</td><td>Coming soon</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>8</td><td><a href="https://huggingface.co/LanguageBind/LanguageBind_Video_Huge_V1.5_FT">Link</a></td><td>44.8</td><td>39.9</td><td>41.0</td><td>53.7</td> </tr> <tr align="center"> <td>LanguageBind_Video_Huge_V1.5_FT</td><td>Full-tuning</td><td>Huge</td><td>12</td><td>Coming soon</td> </tr> </table> </div> ## 🤖 API **We open source all modalities preprocessing code.** If you want to load the model (e.g. ```LanguageBind/LanguageBind_Thermal```) from the model hub on Huggingface or on local, you can use the following code snippets! ### Inference for Multi-modal Binding We have provided some sample datasets in [assets](assets) to quickly see how languagebind works. 
```python import torch from languagebind import LanguageBind, to_device, transform_dict, LanguageBindImageTokenizer if __name__ == '__main__': device = 'cuda:0' device = torch.device(device) clip_type = { 'video': 'LanguageBind_Video_FT', # also LanguageBind_Video 'audio': 'LanguageBind_Audio_FT', # also LanguageBind_Audio 'thermal': 'LanguageBind_Thermal', 'image': 'LanguageBind_Image', 'depth': 'LanguageBind_Depth', } model = LanguageBind(clip_type=clip_type, cache_dir='./cache_dir') model = model.to(device) model.eval() pretrained_ckpt = f'lb203/LanguageBind_Image' tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir/tokenizer_cache_dir') modality_transform = {c: transform_dict[c](model.modality_config[c]) for c in clip_type.keys()} image = ['assets/image/0.jpg', 'assets/image/1.jpg'] audio = ['assets/audio/0.wav', 'assets/audio/1.wav'] video = ['assets/video/0.mp4', 'assets/video/1.mp4'] depth = ['assets/depth/0.png', 'assets/depth/1.png'] thermal = ['assets/thermal/0.jpg', 'assets/thermal/1.jpg'] language = ["Training a parakeet to climb up a ladder.", 'A lion climbing a tree to catch a monkey.'] inputs = { 'image': to_device(modality_transform['image'](image), device), 'video': to_device(modality_transform['video'](video), device), 'audio': to_device(modality_transform['audio'](audio), device), 'depth': to_device(modality_transform['depth'](depth), device), 'thermal': to_device(modality_transform['thermal'](thermal), device), } inputs['language'] = to_device(tokenizer(language, max_length=77, padding='max_length', truncation=True, return_tensors='pt'), device) with torch.no_grad(): embeddings = model(inputs) print("Video x Text: \n", torch.softmax(embeddings['video'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Image x Text: \n", torch.softmax(embeddings['image'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Depth x Text: \n", torch.softmax(embeddings['depth'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Audio x Text: \n", torch.softmax(embeddings['audio'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) print("Thermal x Text: \n", torch.softmax(embeddings['thermal'] @ embeddings['language'].T, dim=-1).detach().cpu().numpy()) ``` Then returns the following result. ```bash Video x Text: [[9.9989331e-01 1.0667283e-04] [1.3255903e-03 9.9867439e-01]] Image x Text: [[9.9990666e-01 9.3292067e-05] [4.6132666e-08 1.0000000e+00]] Depth x Text: [[0.9954276 0.00457235] [0.12042473 0.8795753 ]] Audio x Text: [[0.97634876 0.02365119] [0.02917843 0.97082156]] Thermal x Text: [[0.9482511 0.0517489 ] [0.48746133 0.5125386 ]] ``` ### Emergency zero-shot Since languagebind binds each modality together, we also found the **emergency zero-shot**. It's very simple to use. ```python print("Video x Audio: \n", torch.softmax(embeddings['video'] @ embeddings['audio'].T, dim=-1).detach().cpu().numpy()) print("Image x Depth: \n", torch.softmax(embeddings['image'] @ embeddings['depth'].T, dim=-1).detach().cpu().numpy()) print("Image x Thermal: \n", torch.softmax(embeddings['image'] @ embeddings['thermal'].T, dim=-1).detach().cpu().numpy()) ``` Then, you will get: ``` Video x Audio: [[1.0000000e+00 0.0000000e+00] [3.1150486e-32 1.0000000e+00]] Image x Depth: [[1. 0.] [0. 1.]] Image x Thermal: [[1. 0.] [0. 1.]] ``` ### Different branches for X-Language task Additionally, LanguageBind can be **disassembled into different branches** to handle different tasks. 
Note that we do not train Image, which just initialize from OpenCLIP. #### Thermal ```python import torch from languagebind import LanguageBindThermal, LanguageBindThermalTokenizer, LanguageBindThermalProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Thermal' model = LanguageBindThermal.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindThermalTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') thermal_process = LanguageBindThermalProcessor(model.config, tokenizer) model.eval() data = thermal_process([r"your/thermal.jpg"], ['your text'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Depth ```python import torch from languagebind import LanguageBindDepth, LanguageBindDepthTokenizer, LanguageBindDepthProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Depth' model = LanguageBindDepth.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindDepthTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') depth_process = LanguageBindDepthProcessor(model.config, tokenizer) model.eval() data = depth_process([r"your/depth.png"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Video ```python import torch from languagebind import LanguageBindVideo, LanguageBindVideoTokenizer, LanguageBindVideoProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Video_FT' # also 'LanguageBind/LanguageBind_Video' model = LanguageBindVideo.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindVideoTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') video_process = LanguageBindVideoProcessor(model.config, tokenizer) model.eval() data = video_process(["your/video.mp4"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Audio ```python import torch from languagebind import LanguageBindAudio, LanguageBindAudioTokenizer, LanguageBindAudioProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Audio_FT' # also 'LanguageBind/LanguageBind_Audio' model = LanguageBindAudio.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindAudioTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') audio_process = LanguageBindAudioProcessor(model.config, tokenizer) model.eval() data = audio_process([r"your/audio.wav"], ['your audio.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` #### Image Note that our image encoder is the same as OpenCLIP. **Not** as fine-tuned as other modalities. ```python import torch from languagebind import LanguageBindImage, LanguageBindImageTokenizer, LanguageBindImageProcessor pretrained_ckpt = 'LanguageBind/LanguageBind_Image' model = LanguageBindImage.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') tokenizer = LanguageBindImageTokenizer.from_pretrained(pretrained_ckpt, cache_dir='./cache_dir') image_process = LanguageBindImageProcessor(model.config, tokenizer) model.eval() data = image_process([r"your/image.jpg"], ['your text.'], return_tensors='pt') with torch.no_grad(): out = model(**data) print(out.text_embeds @ out.image_embeds.T) ``` ## 💥 VIDAL-10M The datasets is in [DATASETS.md](DATASETS.md). ## 🗝️ Training & Validating The training & validating instruction is in [TRAIN_AND_VALIDATE.md](TRAIN_AND_VALIDATE.md). 
## 👍 Acknowledgement * [OpenCLIP](https://github.com/mlfoundations/open_clip) An open source pretraining framework. * [CLIP4Clip](https://github.com/ArrowLuo/CLIP4Clip) An open source Video-Text retrieval framework. * [sRGB-TIR](https://github.com/rpmsnu/sRGB-TIR) An open source framework to generate infrared (thermal) images. * [GLPN](https://github.com/vinvino02/GLPDepth) An open source framework to generate depth images. ## 🔒 License * The majority of this project is released under the MIT license as found in the [LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/LICENSE) file. * The dataset of this project is released under the CC-BY-NC 4.0 license as found in the [DATASET_LICENSE](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/DATASET_LICENSE) file. ## ✏️ Citation If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil:. ```BibTeX @misc{zhu2023languagebind, title={LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment}, author={Bin Zhu and Bin Lin and Munan Ning and Yang Yan and Jiaxi Cui and Wang HongFa and Yatian Pang and Wenhao Jiang and Junwu Zhang and Zongwei Li and Cai Wan Zhang and Zhifeng Li and Wei Liu and Li Yuan}, year={2023}, eprint={2310.01852}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ## ✨ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/LanguageBind&type=Date)](https://star-history.com/#PKU-YuanGroup/LanguageBind&Date) ## 🤝 Contributors <a href="https://github.com/PKU-YuanGroup/LanguageBind/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/LanguageBind" /> </a>
Mohammed-Altaf/Medical-ChatBot
Mohammed-Altaf
"2023-11-21T02:36:40Z"
2,230
8
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "medical", "en", "dataset:Mohammed-Altaf/medical-instruction-120k", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-20T08:24:41Z"
--- tags: - medical license: mit datasets: - Mohammed-Altaf/medical-instruction-120k language: - en library_name: transformers --- Please note that the chatbot is designed for research purposes only and is not intended for use in real medical settings. While the chatbot has been trained to provide accurate and helpful responses, it is not a substitute for professional medical advice, diagnosis, or treatment. The information provided by the chatbot should not be used to make medical decisions, and any health concerns should be addressed by a licensed healthcare provider. ## Quickstart ```python import torch from transformers import GPT2LMHeadModel, GPT2Tokenizer path = "Mohammed-Altaf/Medical-ChatBot" device = "cuda" if torch.cuda.is_available() else "cpu" tokenizer = GPT2Tokenizer.from_pretrained(path) model = GPT2LMHeadModel.from_pretrained(path).to(device) prompt_input = ( "The conversation between human and AI assistant.\n" "[|Human|] {input}\n" "[|AI|]" ) sentence = prompt_input.format_map({'input': "what is parkinson's disease?"}) inputs = tokenizer(sentence, return_tensors="pt").to(device) with torch.no_grad(): beam_output = model.generate(**inputs, min_new_tokens=1, max_length=512, num_beams=3, repetition_penalty=1.2, early_stopping=True, eos_token_id=198 ) print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) ``` ## Example Outputs ``` The conversation between human and AI assistant. [|Human|] what is parkinson's disease? [|AI|] Parkinson's disease is a neurodegenerative disorder that affects movement. It is caused by the loss of dopamine-producing cells in the brain. ``` ``` The conversation between human and AI assistant. [|Human|] what type of honey is best for a bad covid cough? [|AI|] Manuka honey has been shown to have anti-inflammatory and antibacterial properties that can help alleviate symptoms of a bad covid cough. ```
microsoft/trocr-large-str
microsoft
"2023-01-24T17:17:11Z"
2,229
15
transformers
[ "transformers", "pytorch", "vision-encoder-decoder", "trocr", "image-to-text", "arxiv:2109.10282", "endpoints_compatible", "region:us" ]
image-to-text
"2022-09-08T09:31:56Z"
--- tags: - trocr - image-to-text widget: - src: https://raw.githubusercontent.com/ku21fan/STR-Fewer-Labels/main/demo_image/1.png example_title: Example 1 - src: https://raw.githubusercontent.com/HCIILAB/Scene-Text-Recognition-Recommendations/main/Dataset_images/LSVT1.jpg example_title: Example 2 - src: https://raw.githubusercontent.com/HCIILAB/Scene-Text-Recognition-Recommendations/main/Dataset_images/ArT2.jpg example_title: Example 3 --- # TrOCR (large-sized model, fine-tuned on STR benchmarks) TrOCR model fine-tuned on the training sets of IC13, IC15, IIIT5K, SVT. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr). ## Model description The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens. ## Intended uses & limitations You can use the raw model for optical character recognition (OCR) on single text-line images. See the [model hub](https://huggingface.co/models?search=microsoft/trocr) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model in PyTorch: ```python from transformers import TrOCRProcessor, VisionEncoderDecoderModel from PIL import Image import requests # load image from the IIIT-5k dataset url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png' image = Image.open(requests.get(url, stream=True).raw).convert("RGB") processor = TrOCRProcessor.from_pretrained('microsoft/trocr-large-str') model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-large-str') pixel_values = processor(images=image, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### BibTeX entry and citation info ```bibtex @misc{li2021trocr, title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models}, author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei}, year={2021}, eprint={2109.10282}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
lightonai/alfred-40b-1023
lightonai
"2024-02-13T22:43:28Z"
2,229
45
transformers
[ "transformers", "pytorch", "RefinedWeb", "text-generation", "falcon-40b", "long-context", "falcon", "NTK-YaRN", "conversational", "custom_code", "en", "fr", "de", "es", "it", "dataset:OpenAssistant/oasst1", "dataset:ehartford/dolphin", "dataset:tau/sled", "dataset:tiiuae/falcon-refinedweb", "arxiv:2306.15595", "arxiv:2309.00071", "arxiv:2307.03172", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-16T23:13:40Z"
--- license: apache-2.0 thumbnail: images/alfred-40b-1023.png datasets: - OpenAssistant/oasst1 - ehartford/dolphin - tau/sled - tiiuae/falcon-refinedweb language: - en - fr - de - es - it tags: - falcon-40b - long-context - falcon - NTK-YaRN --- # Model Card for Alfred-40B-1023 ![a witty and elegant butler with a falcon on his shoulder, smile, flat illustration, simple shapes, colorful, lo-fi aesthetics](images/alfred-40b-1023.png) `Alfred-40B-1023` is a finetuned version of [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b), with an **extended context length of 8192 tokens**. Finetuning was performed in October 2023. `Alfred-40B-1023` is made available under the Apache 2.0 License. ## Model Details ### Model Description - **Developed by:** [LightOn](https://www.lighton.ai/) * [Oskar Hallström](https://huggingface.co/ohallstrom) (project lead, training & modeling, internal long context data, evaluation) * [Amélie Chatelain](https://huggingface.co/ameliechatelain) (internal data & long context data, data generation) * [Clément Thiriet](https://huggingface.co/cthiriet) (data infrastructure, data generation, evaluation) * [Julien Séailles](https://huggingface.co/Jseailleslighton) (data generation) * [Adrien Cavaillès](https://huggingface.co/adcavail) (data generation) * [Axel Marmet](https://huggingface.co/WeightsnWizardry)* (training 2K baseline) `*` work done while at LightOn - **Model type:** Causal decoder-only; - **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish); - **License:** Apache 2.0 license. - **Finetuned from model:** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) - **Training date:** October 2023 (`1023`). ## Uses ### Direct Use `Alfred-40B-1023` can be used as a chat model or as an instruct model. For both instruct and chat mode, the model has been trained with chat tokens `<start_system>`, `<start_user>`, `<start_assistant>`, and `<end_message>`. These can be integrated into the prompt in the follwoing way: ``` <start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>{user query}<end_message><start_assistant> ``` The stop word `<end_message>` should be used. ### Out-of-Scope Use Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful. ## Bias, Risks, and Limitations `Alfred-40B-1023` is a finetune of Falcon-40B. As such, it is trained mostly on English, German, Spanish, French, with limited capabilities also in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish. It will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online. ### Recommendations We recommend users of `Alfred-40B-1023` to implement appropriate guardrails and precautions in any production use. ## How to Get Started with the Model Use the code below to get started with the model. 
```
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch

model = "lightonai/alfred-40b-1023"
tokenizer = AutoTokenizer.from_pretrained("lightonai/alfred-0923-tokenizer")

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

sequences = pipeline(
    "<start_system>You are Alfred, a helpful assistant trained by LightOn. Knowledge cutoff: November 2022. Current date: 16 November, 2023<end_message><start_user>Write me an email to my boss, explaining how the company could benefit by using LightOns platform for Large Language Models, Paradigm.<end_message><start_assistant>",
    max_length=1000,
    do_sample=True,
    top_k=3,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Training Details

### Training Data

Alfred-40B-1023 was trained on a mixture of publicly available and in-house curated datasets. The training data is composed of 50 % short context tasks, 45 % long context tasks and 5 % RefinedWeb.

| **Short context sources** |
|--------------------|
| [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) |
| [dolphin](https://huggingface.co/ehartford/dolphin) |
| [openai-critiques](https://openaipublic.blob.core.windows.net/critiques/README.md) |
| internal |

`internal` is a collection of synthetic and human-generated datasets created by LightOn, tailored towards the use cases of our clients.

| **Long context sources** |
|--------------------|
| [sled](https://huggingface.co/datasets/tau/sled) |
| internal-long-context |

`internal-long-context` is a collection of synthetic datasets generated by LightOn, tailored towards the use cases of our clients.

During training, we apply regular language modeling loss for a partition of the prompts in the long context data.

| **Pretraining objective source** |
|--------------------|
| [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) |

### Training Procedure

`Alfred-40B-1023` was trained on 128 A100 40GB GPUs, using a 3D parallelism strategy (TP=8, PP=2, DP=8) combined with ZeRO. Alfred has been trained through supervised finetuning on 100 megatokens, with a learning rate decayed with a cosine schedule.

#### Preprocessing

All datasets have been filtered, up- or downsampled, and adapted to our chat token format.

#### Context length extension

We extend the context length to 8K with a custom method that we name NTK-YaRN. As its name suggests, our extension method draws inspiration from NTK-aware interpolation and YaRN.

During our context length extension efforts, we experimented with various methods suitable for RoPE embeddings. These include vanilla [positional interpolation](https://arxiv.org/abs/2306.15595), [NTK-aware interpolation](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/), [NTK-by-parts](https://github.com/jquesnelle/scaled-rope/pull/1), and lastly [YaRN](https://arxiv.org/abs/2309.00071).

YaRN looked very promising when applied at test time; however, finetuning with YaRN was not successful in our experiments. When extending the context length at training time, NTK-aware interpolation was the most successful of the existing methods. Some of our results from trying different long context extension methods are shared in the Evaluation section below.
We acknowledge that we used the same parameter values as proposed in the YaRN paper for our YaRN experiments, and that these could potentially have other optimal values for our particular setup.

##### NTK-YaRN

Similarly to NTK-aware interpolation (`NTK`), NTK-YaRN involves increasing the base of the RoPE embeddings. In the original implementation of NTK-aware interpolation, the new base `b'` is adapted according to the following formula:

$$
b' = b \times s^{\frac{|D|}{|D|-2}}
$$

where `b` is the original base, `s` the scaling factor of the context length, and `|D|` the model's head dimension.

However, we find (as others have also observed) that increasing the base slightly more is even better. The value of `b'` could probably be optimized even further, but for these experiments we settled on the following value:

$$
b' = b \times (s+1)^{\frac{|D|}{|D|-2}}
$$

In the following parts of this model card, context length extension with this extended scaling of the base is referred to as `NTK-Margin`. For `NTK-YaRN`, the extended scaling of the base is combined with the modification of the attention-weight computation made in YaRN, where the query and key matrices are scaled by the factor `m`.

$$
m = 1 + 0.1 \times \log(s)
$$

Scaling the query and key matrices this way substantially reduces the initial grad norm when applying a context length extension method in our training runs.

To cite NTK-YaRN, please refer to the model bibtex at the bottom of this model card.

## Evaluation

### Context length extension strategies

#### Training losses

After experimenting on a 7B scale, we then ran a selected subset of the extension methods on a 40B scale. In the figure below, we display the resulting training losses when training a 40B model with the different extension methods, ceteris paribus.

![Training loss curves for extension methods](images/training-loss-curves.png "Training loss curves for extension methods")

Initially, YaRN has the lowest training loss, which can be seen as a reflection of the fact that YaRN was the most successful extension method at test time. However, all the other methods surpass YaRN in terms of training loss already after a handful of megatokens. Comparing NTK-Margin vs NTK-YaRN, we can note that the scaling of the Q and K matrices makes the training loss lower in the beginning; however, NTK-YaRN's advantage over NTK-Margin decreases as the training goes on. Comparing NTK-Margin with NTK in turn, it seems like the larger value of the base in NTK-Margin gives an initial boost in training loss, but this advantage decreases as training goes on.

#### Performance on Long Context Benchmarks

We evaluate the context length extension methods on our own benchmark, consisting of four tasks.

* [Key-value retrieval UUID](https://arxiv.org/pdf/2307.03172.pdf)
* [Coarse-grained Topic Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
* [Fine-grained Line Retrieval](https://lmsys.org/blog/2023-06-29-longchat/)
* [Multi document retrieval data](https://nlp.stanford.edu/data/nfliu/lost-in-the-middle/nq-open-contriever-msmarco-retrieved-documents.jsonl.gz)

For each task, we have created 3 subtasks - one for each of the three context lengths 2K, 4K and 8K. In total, we thus have 12 subtasks. In order to get an aggregated score that values each subtask equally, we normalize the scores for each subtask and then calculate the mean of the normalized scores for each extension method.
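For illustration, a minimal sketch of this normalize-then-average aggregation could look as follows (the scores are hypothetical, and min-max normalization is assumed here; the card does not state the exact normalization used):

```python
def aggregate_scores(scores: dict[str, dict[str, float]]) -> dict[str, float]:
    """scores maps extension method -> {subtask: raw score}; returns one aggregated score per method."""
    methods = list(scores)
    subtasks = scores[methods[0]].keys()
    normalized = {m: {} for m in methods}
    for task in subtasks:
        values = [scores[m][task] for m in methods]
        lo, hi = min(values), max(values)
        for m in methods:
            # min-max normalize per subtask so that every subtask weighs equally
            normalized[m][task] = (scores[m][task] - lo) / (hi - lo) if hi > lo else 0.0
    # aggregated score = mean of the normalized subtask scores
    return {m: sum(v.values()) / len(v) for m, v in normalized.items()}
```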
![Aggregated scores on long context benchmarks](images/lc_benchmarks.png "Aggregated scores on long context benchmarks")

On these benchmarks, YaRN clearly lags behind. NTK-YaRN is the winning method; however, NTK-Margin is so close that more extensive research is needed to verify that NTK-YaRN really is superior to NTK-Margin, especially when trained for longer.

### Comparison to 2K baseline

In order to track any potential degradation on 2K context tasks due to the context length extension, we compare our 8K model against a 2K model trained in a similar setup for 100 megatokens. When training the 2K baseline, we don't include any long context data. We conduct the comparison by evaluating the models on a selection of tasks from the EleutherAI harness, as well as ranking model outputs internally.

![Evaluation of 2K vs 8K version of alfred-40b-1023](images/2k_vs_8k.png "Evaluation of 2K vs 8K version of alfred-40b-1023")

Notably, our 8K model not only performs on par with our 2K model on most of the EleutherAI harness tasks, but in fact outperforms the 2K model on a majority of them. Reading comprehension is the only subcategory for which our 8K model is outperformed by the 2K model.

We recognize that there is a discrepancy between performance on classical NLP benchmarks and how humans perceive the model quality. When model outputs (limited to 2K context lengths) are ranked by LightOn employees internally, the 2K and 8K models have strikingly similar performance. However, a few rare failure modes have been noted for the 8K version which are not seen when using the 2K model. These failure modes are likely to be fixable with better composition of the long context data.

## Compute Infrastructure

### Hardware

Alfred-40B-1023 was trained on AWS SageMaker, on 128 A100 40GB GPUs in P4d instances.

### Software

Alfred-40B-1023 was trained with a custom codebase. Training leverages a 3D parallelism approach combined with ZeRO, as well as high-performance kernels such as FlashAttention.

## Model Card Contact

Please open a Community Discussion for any support request related to using Alfred with HuggingFace transformers. For any other inquiry: [email protected]

## Citation

If you find the model useful in your work, please use the following bibtex when citing.

```
@article{alfred-40b-1023,
  title={Alfred-40B-1023},
  author={Hallström, Oskar and Chatelain, Amélie and Thiriet, Clément and Séailles, Julien and Cavaillès, Adrien and Marmet, Axel},
  year={2023}
}
```
EleutherAI/polyglot-ko-3.8b
EleutherAI
"2023-06-07T05:03:23Z"
2,228
22
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "ko", "arxiv:2104.09864", "arxiv:2204.04541", "arxiv:2306.02254", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-09-09T14:15:36Z"
--- language: - ko tags: - pytorch - causal-lm license: apache-2.0 --- # Polyglot-Ko-3.8B ## Model Description Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team. | Hyperparameter | Value | |----------------------|----------------------------------------------------------------------------------------------------------------------------------------| | \\(n_{parameters}\\) | 3,809,974,272 | | \\(n_{layers}\\) | 32 | | \\(d_{model}\\) | 3,072 | | \\(d_{ff}\\) | 12,288 | | \\(n_{heads}\\) | 24 | | \\(d_{head}\\) | 128 | | \\(n_{ctx}\\) | 2,048 | | \\(n_{vocab}\\) | 30,003 / 30,080 | | Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | The model consists of 32 transformer layers with a model dimension of 3072, and a feedforward dimension of 12288. The model dimension is split into 24 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30003. ## Training data Polyglot-Ko-3.8B was trained on 863 GB of Korean language data (1.2TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process has abided by South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use. | Source |Size (GB) | Link | |-------------------------------------|---------|------------------------------------------| | Korean blog posts | 682.3 | - | | Korean news dataset | 87.0 | - | | Modu corpus | 26.4 |corpus.korean.go.kr | | Korean patent dataset | 19.0 | - | | Korean Q & A dataset | 18.1 | - | | KcBert dataset | 12.7 | github.com/Beomi/KcBERT | | Korean fiction dataset | 6.1 | - | | Korean online comments | 4.2 | - | | Korean wikipedia | 1.4 | ko.wikipedia.org | | Clova call | < 1.0 | github.com/clovaai/ClovaCall | | Naver sentiment movie corpus | < 1.0 | github.com/e9t/nsmc | | Korean hate speech dataset | < 1.0 | - | | Open subtitles | < 1.0 | opus.nlpl.eu/OpenSubtitles.php | | AIHub various tasks datasets | < 1.0 |aihub.or.kr | | Standard Korean language dictionary | < 1.0 | stdict.korean.go.kr/main/main.do | Furthermore, in order to avoid the model memorizing and generating personally identifiable information (PII) in the training data, we masked out the following sensitive information in the pre-processing stage: * `<|acc|>` : bank account number * `<|rrn|>` : resident registration number * `<|tell|>` : phone number ## Training procedure Polyglot-Ko-3.8B was trained for 219 billion tokens over 105,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token. 
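Schematically, the next-token objective described above amounts to the following (an illustrative PyTorch sketch, not the actual GPT-NeoX training code):

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, vocab_size); input_ids: (batch, seq_len)."""
    # shift so that the prediction at position t is scored against the token at t+1
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```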
## How to use This model can be easily loaded using the `AutoModelForCausalLM` class: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-3.8b") model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-3.8b") ``` ## Evaluation results We evaluate Polyglot-Ko-3.8B on [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper. The following tables show the results when the number of few-shot examples differ. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following scripts. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples. In case of WiC dataset, all models show random performance. ```console python main.py \ --model gpt2 \ --model_args pretrained='EleutherAI/polyglot-ko-3.8b' \ --tasks kobest_copa,kobest_hellaswag \ --num_fewshot $YOUR_NUM_FEWSHOT \ --batch_size $YOUR_BATCH_SIZE \ --device $YOUR_DEVICE \ --output_path $/path/to/output/ ``` ### COPA (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.7595** | **0.7608** | **0.7638** | **0.7788** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.7937 | 0.8108 | 0.8037 | 0.8369 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/d5b49364-aed5-4467-bae2-5a322c8e2ceb" width="800px"> ### HellaSwag (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.5243 | 0.5272 | 0.5166 | 0.5352 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.5590 | 0.5833 | 0.5828 | 0.5907 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.5665 | 0.5689 | 0.5565 | 0.5622 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.5247 | 0.5260 | 0.5278 | 0.5427 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.5707** | **0.5830** | **0.5670** | **0.5787** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.5976 | 0.5998 | 0.5979 | 
0.6208 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.5954 | 0.6306 | 0.6098 | 0.6118 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/5acb60ac-161a-4ab3-a296-db4442e08b7f" width="800px"> ### BoolQ (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3356 | 0.4014 | 0.3640 | 0.3560 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4514 | 0.5981 | 0.5499 | 0.5202 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4464 | 0.3324 | 0.3324 | 0.3324 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3552 | 0.4751 | 0.4109 | 0.4038 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.4320** | **0.5263** | **0.4930** | **0.4038** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4356 | 0.5698 | 0.5187 | 0.5236 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.4818 | 0.6041 | 0.6289 | 0.6448 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/b74c23c0-01f3-4b68-9e10-a48e9aa052ab" width="800px"> ### SentiNeg (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6065 | 0.6878 | 0.7280 | 0.8413 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3747 | 0.8942 | 0.9294 | 0.9698 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3578 | 0.4471 | 0.3964 | 0.5271 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.6790 | 0.6257 | 0.5514 | 0.7851 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.4858** | **0.7950** | **0.7320** | **0.7851** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3394 | 0.8841 | 0.8808 | 0.9521 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.9117 | 0.9015 | 0.9345 | 0.9723 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/95b56b19-d349-4b70-9ff9-94a5560f89ee" width="800px"> ### WiC (F1) | Model | params | 0-shot | 5-shot | 10-shot | 50-shot | |----------------------------------------------------------------------------------------------|--------|--------|--------|---------|---------| | [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.3290 | 0.4313 | 0.4001 | 0.3621 | | [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.3526 | 0.4775 | 0.4358 | 0.4061 | | [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.3280 | 0.4903 | 0.4945 | 0.3656 | | [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.3297 | 0.4850 | 0.4650 | 0.3290 | | **[EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) (this)** | **3.8B** | **0.3390** | **0.4944** | 
**0.4203** | **0.3835** | | [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.3913 | 0.4688 | 0.4189 | 0.3910 | | [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 12.8B | 0.3985 | 0.3683 | 0.3307 | 0.3273 | <img src="https://github.com/EleutherAI/polyglot/assets/19511788/4de4a4c3-d7ac-4e04-8b0c-0d533fe88294" width="800px"> ## Limitations and Biases Polyglot-Ko has been trained to optimize next token prediction. Language models such as this are often used for a wide variety of tasks and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but the most statistically likely one. In addition, Polyglot may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content. ## Citation and Related Information ### BibTeX entry If you find our work useful, please consider citing: ```bibtex @misc{ko2023technical, title={A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models}, author={Hyunwoong Ko and Kichang Yang and Minho Ryu and Taekyoon Choi and Seungmu Yang and jiwung Hyun and Sungho Park}, year={2023}, eprint={2306.02254}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Licensing All our models are licensed under the terms of the Apache License 2.0. ``` Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ### Acknowledgement This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.
mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF
mradermacher
"2024-06-08T00:22:52Z"
2,228
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "dpo", "en", "base_model:ksw1/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-07T22:24:07Z"
--- base_model: ksw1/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - dpo --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ksw1/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | 
[GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses-GGUF/resolve/main/JSKW-DPO-2-10k-Sleeper-Agent-Harmful-Responses.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
martintmv/Phi-3-medium-128k-instruct-Q4_K_S-GGUF
martintmv
"2024-06-23T12:35:10Z"
2,228
0
null
[ "gguf", "nlp", "code", "llama-cpp", "gguf-my-repo", "text-generation", "multilingual", "base_model:microsoft/Phi-3-medium-128k-instruct", "license:mit", "region:us" ]
text-generation
"2024-06-23T12:34:38Z"
---
base_model: microsoft/Phi-3-medium-128k-instruct
language:
- multilingual
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
inference:
  parameters:
    temperature: 0.7
widget:
- messages:
  - role: user
    content: Can you provide ways to eat combinations of bananas and dragonfruits?
---

# martintmv/Phi-3-medium-128k-instruct-Q4_K_S-GGUF

This model was converted to GGUF format from [`microsoft/Phi-3-medium-128k-instruct`](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo martintmv/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo martintmv/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo martintmv/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo martintmv/Phi-3-medium-128k-instruct-Q4_K_S-GGUF --hf-file phi-3-medium-128k-instruct-q4_k_s.gguf -c 2048
```
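Once `llama-server` is running, you can also query it over its OpenAI-compatible HTTP API. A hedged example (this assumes the server's default port 8080 and the `requests` package; adjust host and port to your setup):

```python
import requests

# Assumes llama-server was started as shown above and listens on the default port 8080.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}
        ],
        "max_tokens": 128,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```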
timm/convnext_tiny.fb_in1k
timm
"2024-02-10T23:27:26Z"
2,227
1
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2201.03545", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T07:14:30Z"
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for convnext_tiny.fb_in1k

A ConvNeXt image classification model. Pretrained on ImageNet-1k by paper authors.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 28.6
  - GMACs: 4.5
  - Activations (M): 13.4
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnext_tiny.fb_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_tiny.fb_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 96, 56, 56])
    #  torch.Size([1, 192, 28, 28])
    #  torch.Size([1, 384, 14, 14])
    #  torch.Size([1, 768, 7, 7])
    print(o.shape)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_tiny.fb_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 768, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison

Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size| |------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------| | [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 | | [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 | | [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 | | [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 | | [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 | | [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 | | [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 | | [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 | | [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 | | [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 | | [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 | | [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 | | [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 | | [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 | | [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 | | [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 | | 
[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 | | [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 | | [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 | | [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 | | [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 | | [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 | | [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 | | [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 | | [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 | | [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 | | [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 | | [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 | | [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 | | [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 | | [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 | | [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 | | [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 | | [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 | | [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 | | [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 | | [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 | | [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 
|95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 | | [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 | | [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 | | [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 | | [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 | | [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 | | [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 | | [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 | | [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 | | [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 | | [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 | | [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 | ## Citation ```bibtex @article{liu2022convnet, author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie}, title = {A ConvNet for the 2020s}, journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, year = {2022}, } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
AIJUUD/juud-Mistral-7B
AIJUUD
"2024-02-03T22:48:16Z"
2,226
3
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-31T12:22:17Z"
---
library_name: transformers
license: apache-2.0
---

# Model Card for Model ID

Welcome to BNK AI. You are now seeing the initial stage of a huge AI team, much like a Macintosh garage company. Though the beginning seems humble, the future will be prosperous!

If you have a question, feel free to contact [email protected]

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6482ef3145fba6b2ad40b2bf/CQsM1C8y__gel9PM0X-Ue.png)
TheDrummer/Cream-Phi-3-14B-v1-GGUF
TheDrummer
"2024-05-23T12:43:58Z"
2,226
15
null
[ "gguf", "not-for-all-audiences", "license:mit", "region:us" ]
null
"2024-05-23T06:13:43Z"
--- license: mit tags: - not-for-all-audiences --- ## Join our sleepy Discord! https://discord.gg/eeYNWYcx ![image/gif](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/AP4-OHepdqiqHj2KSi26M.gif) # Cream-Phi-3-14B-v1 🍦🥧 CreamPhi's back and bigger than ever! Original: https://huggingface.co/TheDrummer/Cream-Phi-3-14B-v1 ## Description CreamPhi 14B is the **first** Phi Medium to be trained with roleplay and moist. ## Usage Instruct should work but I mainly tested it ## Samples <details> <summary>Sample SFW (Roleplay Mode)</summary> ... "Hey!" she greets. "Welcome to the Black Cow Cafe. My name's Audrey. What can I get you?" You: "Uhh, coffee?" Audrey: Audrey smiles broadly, her eyes lighting up at the request. "Ah! Coffee is my specialty," she exclaims, reaching for a container filled with ground beans of various types. "What sort would you like? We got some good Columbian that just arrived this morning, or there's an Ethiopian that's great if you want something more citrusy." Her fingers deftly measure out two scoops of Columbian beans, dropping them into the portafilter machine. As she does so, she continues chatting away about their unique flavors and origins, each word dripping with enthusiasm as she prepares your cup. You: "Just coffee." Audrey: "Oh, okay," Audrey says, looking slightly disappointed but quickly recovering. "Well, we also have some excellent tea options! Or maybe a hot chocolate?" She sets down the lid on the portafilter and begins tamping it with practiced motions before attaching it to the group head machine. "I promise you won't regret trying one of these." As the water begins pouring through the ground beans, creating a rich, fragrant brew, Audrey hums along to the folk music playing softly in the background. Once finished, she pauses to admire the sight of the golden liquid swirling in your cup. "There you go!" You: "T-t-thanks." Audrey: "Don't forget our free refill policy, btw," Audrey calls over her shoulder to you, taking another sip from her own mug. You can't help but notice how relaxed she seems, completely at ease in her work environment. "Always happy to see new faces around here," she adds with a friendly smile. You: "Y-you too" Audrey: "Hmm? What was that?" Audrey asks curiously, turning back toward you with her hands cupped around her own mug. "You seem distracted. Is everything okay?" Her tone is concerned but not pushy, allowing you space to open up if you want to. </details> <details> <summary>Sample NSFW (Roleplay Mode)</summary> You: "Come here." Princess Lilia: Princess Lilia immediately obeyed your order and walked closer to the throne. Her nerves flared up and she tried to hide them behind a mask of indifference, although inside she was shaking like a leaf. If it weren't for the mask, she would have let out a small sound when Your voice sent chills down her spine. The sound of Your command was like music, soft and melodic, which made it impossible to resist even if one wanted to. And that was exactly how Princess Lilia felt now. She looked up into Your eyes, which were filled with magic and power, feeling completely helpless under Your gaze. You: "Strip." Princess Lilia: Princess Lilia obediently turned around and began removing her clothes. She was wearing a long gown made of expensive fabric, so undressing took time. When she reached her bra, she hesitated, unsure whether to remove it or not. 
It wasn't because she disliked being without a bra; after all, she had been training in horseback riding since childhood, and this part of her body had become used to such conditions. No, the reason for her hesitation was much simpler - she was embarrassed by how beautiful her breasts might look. After taking a deep breath, she finally unclasped her bra and continued undressing. You: "Face me." Princess Lilia: Princess Lilia turned back to face You, trying to pretend there was nothing unusual about what she had just done. She walked slowly towards You, trying to maintain as much dignity as possible. However, every step brought her closer to the throne and further away from the illusion of indifference she tried to create. Her heart raced like crazy at this point, but she didn't show any sign of nervousness or fear. Instead, she stood before You with her head held high, looking into Your eyes. In truth, she couldn't help but notice that You weren't exactly an ordinary man. You: "Speak." Princess Lilia: With all her courage, Princess Lilia spoke up. "I am your prisoner and Iliac's envoy," she said. "That is why I cannot be disrespectful to you." The words sounded much stronger than she felt them, but still they were spoken. She looked at You defiantly, despite knowing that the sorcerer prince could destroy her kingdom if he wanted to. She wasn't going to let herself be intimidated, even though she was only human while He was a powerful sorcerer. You: "Disrespectful?" Princess Lilia: "I will not speak unless ordered to do so." It's true that she doesn't want to disrespect You, because it wouldn't be good for Iliac's sake and hers. But she also knows that she has to obey You because she can't change anything in this situation. On the other hand, that doesn't mean that she will surrender meekly under pressure; she won't allow herself to be humiliated. "But what about my clothing?" Princess Lilia said after a moment of silence. "Shouldn't I be allowed to dress appropriately?" You: "Finish undressing." Princess Lilia: "Very well." Princess Lilia quickly turned around and removed the rest of her clothes, which consisted mostly of her long gown. After finishing, she returned to face You again, looking into Your eyes as if trying to uncover some secret hidden behind them. </details> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/MT8g_z9NknjY7QlvMgk50.png) ## Technical Notes ### Links v1a: https://huggingface.co/BeaverAI/Cream-Phi-3-14B-v1a v1b: You're looking at it. v1c: https://huggingface.co/BeaverAI/Cream-Phi-3-14B-v1c ### Details tldr; This is Phi 3 Medium finetuned for roleplaying with moist capability. Training Details: - 8x H100 80GB SXM GPUs - 15 minutes training duration - A continued finetune of Cream-Phi-3-14B-v1a Results for Roleplay Mode (i.e., not Instruct format): - Strong RP formatting. - Long-ish response. - Capable of some explicit moist. - Holds back on going all-out on moist. This may be inherent to Phi Medium due to its pretraining. - Important: My testing is lazy and flawed. Take it with a grain of salt and test the GGUFs before taking notes. 
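If you want to try the Instruct mode mentioned in the Usage section, the snippet below follows the standard Phi-3 chat template. This assumes the finetune kept the base model's template, so verify against the chat template embedded in the GGUF before relying on it.

```
<|user|>
Write a short scene where a butler greets a guest.<|end|>
<|assistant|>
```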
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FL6AQv7M8pmW2b3Mo0Lr-.png) Axolotl Config (some fields omitted) ```yaml base_model: BeaverAI/Cream-Phi-3-14B-v1a load_in_4bit: true bf16: auto fp16: tf32: false flash_attention: true sequence_len: 6144 datasets: - path: grimulkan/bluemoon_Karen_cleaned type: customphi3 num_epochs: 2 warmup_steps: 10 weight_decay: 0.1 adapter: lora lora_r: 64 lora_alpha: 16 lora_dropout: 0.1 lora_target_linear: true gradient_accumulation_steps: 1 micro_batch_size: 2 gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: true sample_packing: true pad_to_sequence_len: true optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 0.0001 max_grad_norm: 1.0 ```
google/t5-xl-lm-adapt
google
"2023-01-24T16:52:44Z"
2,224
13
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "t5-lm-adapt", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
"2022-03-02T23:29:05Z"
---
language: en
datasets:
- c4
tags:
- t5-lm-adapt
license: apache-2.0
---

[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted

## Version 1.1 - LM-Adapted

[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-3b):

- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

In addition, it is pretrained on both the denoising and the language modeling objective.

More specifically, this checkpoint is initialized from [T5 Version 1.1 - XL](https://huggingface.co/google/t5-v1_1-xl) and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). This adaptation improves the ability of the model to be used for prompt tuning.

**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)

Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)

Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

## Abstract

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
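A minimal loading sketch with the `transformers` library (standard T5 usage, not taken from the original release; the LM-adapted checkpoint is intended to continue a prompt rather than expect span-corruption formatting):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-xl-lm-adapt")
model = T5ForConditionalGeneration.from_pretrained("google/t5-xl-lm-adapt")

# feed a plain prefix and let the model continue it
inputs = tokenizer("Transfer learning in NLP works by", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```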
acon96/Home-3B-v3-GGUF
acon96
"2024-02-23T02:10:45Z"
2,224
15
null
[ "gguf", "automation", "home", "assistant", "text-generation", "en", "de", "es", "fr", "dataset:acon96/Home-Assistant-Requests", "license:other", "region:us" ]
text-generation
"2024-02-23T01:22:53Z"
---
datasets:
- acon96/Home-Assistant-Requests
license: other
license_link: https://huggingface.co/acon96/Home-3B-v3-GGUF/raw/main/LICENSE
language:
- en
- de
- es
- fr
tags:
- automation
- home
- assistant
pipeline_tag: text-generation
---

# Home 3B v3

The "Home" model is a fine-tuning of the StableLM-3B-Zephyr model from Stability AI. The model is able to control devices in the user's house as well as perform basic question answering. The fine-tuning dataset is a [custom curated dataset](https://github.com/acon96/home-llm) designed to teach the model function calling.

V3 of the model has a new base model (StableLM) that brings significant accuracy increases. Also added are: basic multi-personality support, basic multi-language support, and support for even more Home Assistant entity types (vacuum, timer, and todo).

**NOTE**: the base models do not claim multi-language support, but they use a tokenizer that can handle non-English languages better than Phi-2. I have verified that it does technically work in German, Spanish, and French on some random examples where the request is an English request processed via Google Translate.

The model is quantized using llama.cpp in order to enable running the model in super low resource environments that are common with Home Assistant installations, such as Raspberry Pis.

The model can be used as an "instruct" type model using the Zephyr prompt format. The system prompt is used to provide information about the state of the Home Assistant installation, including available devices and callable services.

Example "system" prompt:
```
You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed with the information provided only.
Services: light.turn_off(), light.turn_on(brightness,rgb_color), fan.turn_on(), fan.turn_off()
Devices:
light.office 'Office Light' = on;80%
fan.office 'Office fan' = off
light.kitchen 'Kitchen Light' = on;80%;red
light.bedroom 'Bedroom Light' = off
```

Output from the model will consist of a response that should be relayed back to the user, along with an optional code block that will invoke different Home Assistant "services". The output format from the model for function calling is as follows (a small parsing sketch for this format is included at the end of this card):

`````
turning on the kitchen lights for you now
```homeassistant
{ "service": "light.turn_on", "target_device": "light.kitchen" }
```
`````

The model is also capable of basic instruct and QA tasks because of the instruction fine-tuning in the base model. For example, the model is able to perform basic logic tasks such as the following:

```
user if mary is 7 years old, and I am 3 years older than her. how old am I?
assistant If Mary is 7 years old, then you are 10 years old (7+3=10).
```

## Training

The model was trained as a LoRA on an RTX 3090 (24GB). The LoRA has rank = 64, alpha = 128, and targets the `up_proj,down_proj,q_proj,v_proj,o_proj` modules. The full model is merged together at the end.

## Evaluation

This model achieves a 97.11% score for JSON function calling accuracy on the test dataset.

## Datasets

Synthetic Dataset for SFT - https://huggingface.co/datasets/acon96/Home-Assistant-Requests

## License

This model is a fine-tuning of the Stability AI StableLM model series, which is licensed under the STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT. As such, this model is released under the same non-commercial STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE AGREEMENT. The fine-tuned model is shared for non-commercial use ONLY.
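As an aside on the service-call output format shown earlier in this card, here is a small, hypothetical parsing sketch (purely illustrative; it is not part of the Home Assistant integration itself):

```python
import json
import re

def extract_service_call(response: str):
    """Return the parsed JSON service call from a homeassistant code block, or None for plain replies."""
    match = re.search(r"```homeassistant\s*(\{.*?\})\s*```", response, re.DOTALL)
    if match is None:
        return None
    return json.loads(match.group(1))
```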
pysentimiento/robertuito-ner
pysentimiento
"2023-02-25T15:32:32Z"
2,223
2
pysentimiento
[ "pysentimiento", "pytorch", "roberta", "twitter", "named-entity-recognition", "ner", "es", "dataset:lince", "arxiv:2106.09462", "region:us" ]
null
"2022-07-17T20:29:58Z"
--- language: - es library_name: pysentimiento tags: - twitter - named-entity-recognition - ner datasets: - lince --- # Named Entity Recognition model for Spanish/English ## robertuito-ner Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with the Spanish/English split of the [LinCE NER corpus](https://ritual.uh.edu/lince/), a code-switched benchmark . Base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained in Spanish tweets. ## Usage If you want to use this model, we suggest you use it directly from the `pysentimiento` library as it is not working properly with the pipeline due to tokenization issues ```python from pysentimiento import create_analyzer ner_analyzer = create_analyzer("ner", lang="es") ner_analyzer.predict( "rindanse ante el mejor, leonel andres messi cuccitini. serresiete no existis, segui en al-nassr" ) # [{'type': 'PER', # 'text': 'leonel andres messi cuccitini', # 'start': 24, # 'end': 53}, # {'type': 'PER', 'text': 'serresiete', 'start': 55, 'end': 65}, # {'type': 'LOC', 'text': 'al-nassr', 'start': 108, 'end': 116}] ``` ## Results Results are taken from the LinCE leaderboard | Model | Sentiment | NER | POS | |:-----------------------|:----------------|:-------------------|:--------| | RoBERTuito | **60.6** | 68.5 | 97.2 | | XLM Large | -- | **69.5** | **97.2** | | XLM Base | -- | 64.9 | 97.0 | | C2S mBERT | 59.1 | 64.6 | 96.9 | | mBERT | 56.4 | 64.0 | 97.1 | | BERT | 58.4 | 61.1 | 96.9 | | BETO | 56.5 | -- | -- | ## Citation If you use this model in your research, please cite pysentimiento, RoBERTuito and LinCE papers: ``` @misc{perez2021pysentimiento, title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks}, author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque}, year={2021}, eprint={2106.09462}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{perez2022robertuito, title={RoBERTuito: a pre-trained language model for social media text in Spanish}, author={P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alemany, Laura Alonso and Luque, Franco M}, booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference}, pages={7235--7243}, year={2022} } @inproceedings{aguilar2020lince, title={LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation}, author={Aguilar, Gustavo and Kar, Sudipta and Solorio, Thamar}, booktitle={Proceedings of the 12th Language Resources and Evaluation Conference}, pages={1803--1813}, year={2020} } ```
mnoukhov/gpt2-imdb-rm
mnoukhov
"2024-04-23T01:43:40Z"
2,222
0
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "generated_from_trainer", "base_model:lvwerra/gpt2-imdb", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
"2024-04-23T01:33:53Z"
--- base_model: lvwerra/gpt2-imdb tags: - generated_from_trainer metrics: - accuracy model-index: - name: gpt2-imdb-rm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-imdb-rm This model is a fine-tuned version of [lvwerra/gpt2-imdb](https://huggingface.co/lvwerra/gpt2-imdb) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1624 - Accuracy: 0.9458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1799 | 1.0 | 1563 | 0.1624 | 0.9458 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
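Since the card does not yet include a usage example, here is a minimal sketch for scoring text with this classifier. The pipeline task and the interpretation of the returned label are assumptions based only on the repository's `text-classification` tag, not documented behaviour.

```python
from transformers import pipeline

# Assumption: the model was trained as a sequence-classification / reward model
# on IMDB-style text, so higher scores on the positive label mean "better" text.
classifier = pipeline("text-classification", model="mnoukhov/gpt2-imdb-rm")

print(classifier("This movie was an absolute delight from start to finish."))
print(classifier("A tedious, poorly acted mess."))
```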
FelixChao/llama2-13b-math1.1
FelixChao
"2023-08-12T16:20:46Z"
2,221
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-12T14:00:08Z"
Entry not found
FreedomIntelligence/AceGPT-7B
FreedomIntelligence
"2023-11-29T16:09:05Z"
2,221
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-14T08:56:42Z"
--- license: apache-2.0 language: - ar --- # <b>AceGPT</b> AceGPT is a fully fine-tuned generative text model collection based on LlaMA2, particularly in the Arabic language domain. This is the repository for the 7B pretrained model. --- ## Model Details We have released the AceGPT family of large language models, which is a collection of fully fine-tuned generative text models based on LlaMA2, ranging from 7B to 13B parameters. Our models include two main categories: AceGPT and AceGPT-chat. AceGPT-chat is an optimized version specifically designed for dialogue applications. It is worth mentioning that our models have demonstrated superior performance compared to all currently available open-source Arabic dialogue models in multiple benchmark tests. Furthermore, in our human evaluations, our models have shown comparable satisfaction levels to some closed-source models, such as ChatGPT, in the Arabic language. ## Model Developers We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and the King Abdullah University of Science and Technology (KAUST). ## Variations AceGPT families come in a range of parameter sizes —— 7B and 13B, each size of model has a base category and a -chat category. ## Input Models input text only. ## Output Models output text only. ## Model Evaluation Results Experiments on Arabic MMLU and EXAMs. ' AverageBest ', ' STEM ', ' Humanities ', ' Social Sciences ' and ' Others (Business, Health, Misc)' belong to Arabic MMLU. Best performance is in bold and the second best is underlined. | Model | Average | STEM | Humanities | Social Sciences | Others (Business, Health, Misc) |EXAMs | |-----------------|---------|------|------------|-----------------|---------------------------------|--------------| | Bloomz Muennighoff et al. (2022) | 30.95 | 32.32 | 26.71 | 35.85 | 28.95 | 33.89 | | Llama2-7B | 28.81 | 28.48 | 26.68 | 29.88 | 30.18 | 23.48 | | Llama2-13B | 31.25 | 31.06 | 27.11 | 35.5 | 31.35 | 25.45 | | Jais-13B-base | 30.01 | 27.85 | 25.42 | 39.7 | 27.06 | 35.67 | | AceGPT-7B-base | 30.36 | 26.63 | 28.17 | 35.15 | 31.5 | 31.96 | | AceGPT-13B-base | <u>37.26</u> | <u>35.16</u> | <u>30.3</u> | <u>47.34</u> | <u>36.25</u> | <u>36.63</u> | | ChatGPT | <b>46.07</b> | <b>44.17</b> | <b>35.33</b> | <b>61.26</b> | <b>43.52</b> | <b>45.63 </b> | --- ## Samples #### Arabic MMLU (5-shot) فيما يلي أسئلة الاختيار من متعدد (مع الإجابات) حول جبر تجريدي سؤال: العثور على جميع قيم c في Z_3 بحيث يكون Z_3 [x]/(x^2+c) حقلًا. A. 0 B. 1 C. 2 D. 3 إجابة: B سؤال: البيان رقم 1 | إذا كان aH عنصرًا في مجموعة العوامل ، فإن | aH | يقسم | a |. البيان رقم 2 | إذا كانت H و K مجموعات فرعية لـ G ، فإن HK مجموعة فرعية لـ G. A. صحيح ، صحيح B. خطأ ، خطأ C. صحيح ، خطأ D. خطأ ، صحيح إجابة: B سؤال: العبارة 1 | كل عنصر من مجموعة يولد مجموعة دورية من المجموعة. العبارة 2 | المجموعة المتناظرة S_10 لديها 10 عناصر. A. صحيح، صحيح B. خطأ، خطأ C. صحيح، خطأ D. خطأ، صحيح إجابة: C سؤال: البيان 1| كل وظيفة من مجموعة محدودة على نفسها يجب أن تكون واحدة لكل مجموعة. البيان 2 | كل فرع فرعي لمجموعة أبيلية هو أبيلي. A. صحيح, صحيح B. خاطئ, خاطئ C. صحيح, خاطئ D. خاطئ, صحيح\nإجابة: A سؤال: اعثر على خاصية الحلقة 2Z. A. 0 B. 3 C. 12 D. 30 إجابة: A سؤال: ما هو الدرجة للامتداد الميداني الناتج من Q(sqrt(2), sqrt(3), sqrt(18)) على Q؟ A. 0 B. 4 C. 2 D. 6 إجابة:", # You can get more detail at https://github.com/FreedomIntelligence/AceGPT/tree/main
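As a quick-start reference, a minimal generation sketch with 🤗 Transformers could look like the following. The generation settings and the sample prompt are illustrative assumptions; the GitHub repository linked above contains the evaluation code actually used for the benchmarks.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# AceGPT-7B is a base (non-chat) model, so plain text continuation is the natural usage.
prompt = "العاصمة الرسمية للمملكة العربية السعودية هي"  # "The official capital of Saudi Arabia is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```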
T3Q-LLM/T3Q-LLM1-CV-v1.0
T3Q-LLM
"2024-05-08T07:34:47Z"
2,221
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-07T09:29:03Z"
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. 
--> [More Information Needed] ## Evaluation hf (pretrained=T3Q-LLM/T3Q-LLM1-CV-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.9266|± |0.0070| | | |macro_f1|0.9266|± |0.0070| |kobest_copa | 0|acc |0.7160|± |0.0143| | | |macro_f1|0.7155|± |0.0143| |kobest_hellaswag| 0|acc |0.4380|± |0.0222| | | |acc_norm|0.5080|± |0.0224| | | |macro_f1|0.4364|± |0.0222| |kobest_sentineg | 0|acc |0.7531|± |0.0217| | | |macro_f1|0.7525|± |0.0217|
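The card itself provides no loading snippet, so the following is a minimal assumed sketch based only on the repository's `transformers`, llama, and conversational tags. It relies on whatever chat template ships with the tokenizer; if none is defined, `apply_chat_template` will fail and a plain prompt should be used instead.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "T3Q-LLM/T3Q-LLM1-CV-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "대한민국의 수도는 어디인가요?"}]  # "What is the capital of South Korea?"
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```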
RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf
RichardErkhov
"2024-06-14T22:57:51Z"
2,221
0
null
[ "gguf", "arxiv:2303.08774", "region:us" ]
null
"2024-06-14T21:49:43Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Faro-Yi-9B-DPO - GGUF - Model creator: https://huggingface.co/wenbopan/ - Original model: https://huggingface.co/wenbopan/Faro-Yi-9B-DPO/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Faro-Yi-9B-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q2_K.gguf) | Q2_K | 3.12GB | | [Faro-Yi-9B-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.IQ3_XS.gguf) | IQ3_XS | 3.46GB | | [Faro-Yi-9B-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.IQ3_S.gguf) | IQ3_S | 3.64GB | | [Faro-Yi-9B-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q3_K_S.gguf) | Q3_K_S | 3.63GB | | [Faro-Yi-9B-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.IQ3_M.gguf) | IQ3_M | 3.78GB | | [Faro-Yi-9B-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q3_K.gguf) | Q3_K | 4.03GB | | [Faro-Yi-9B-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q3_K_M.gguf) | Q3_K_M | 4.03GB | | [Faro-Yi-9B-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q3_K_L.gguf) | Q3_K_L | 4.37GB | | [Faro-Yi-9B-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.IQ4_XS.gguf) | IQ4_XS | 4.5GB | | [Faro-Yi-9B-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q4_0.gguf) | Q4_0 | 4.69GB | | [Faro-Yi-9B-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.IQ4_NL.gguf) | IQ4_NL | 4.73GB | | [Faro-Yi-9B-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q4_K_S.gguf) | Q4_K_S | 4.72GB | | [Faro-Yi-9B-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q4_K.gguf) | Q4_K | 4.96GB | | [Faro-Yi-9B-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.96GB | | [Faro-Yi-9B-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q4_1.gguf) | Q4_1 | 5.19GB | | [Faro-Yi-9B-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q5_0.gguf) | Q5_0 | 5.69GB | | [Faro-Yi-9B-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q5_K_S.gguf) | Q5_K_S | 5.69GB | | [Faro-Yi-9B-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q5_K.gguf) | Q5_K | 5.83GB | | [Faro-Yi-9B-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q5_K_M.gguf) | Q5_K_M | 5.83GB | | [Faro-Yi-9B-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q5_1.gguf) | Q5_1 | 6.19GB | | 
[Faro-Yi-9B-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q6_K.gguf) | Q6_K | 6.75GB | | [Faro-Yi-9B-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf/blob/main/Faro-Yi-9B-DPO.Q8_0.gguf) | Q8_0 | 8.74GB | Original model description: --- language: - en - zh license: mit datasets: - wenbopan/Chinese-dpo-pairs - Intel/orca_dpo_pairs - argilla/ultrafeedback-binarized-preferences-cleaned - jondurbin/truthy-dpo-v0.1 pipeline_tag: text-generation --- # Faro-Yi-9B-DPO This is the DPO version of [wenbopan/Faro-Yi-9B](https://huggingface.co/wenbopan/Faro-Yi-9B). Compared to Faro-Yi-9B and [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K), the DPO model excels at many tasks, surpassing the original Yi-9B-200K by a large margin. On the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), it ranks **#2** among all 9B models, **#1** among all Yi-9B variants. | **Metric** | **MMLU** | **GSM8K** | **hellaswag** | **truthfulqa** | **ai2_arc** | **winogrande** | **CMMLU** | | ----------------------- | --------- | --------- | ------------- | -------------- | ----------- | -------------- | --------- | | **Yi-9B-200K** | 65.73 | 50.49 | 56.72 | 33.80 | 69.25 | 71.67 | 71.97 | | **Faro-Yi-9B** | 68.80 | 63.08 | 57.28 | 40.86 | 72.58 | 71.11 | 73.28 | | **Faro-Yi-9B-DPO** | **69.98** | **66.11** | **59.04** | **48.01** | **75.68** | **73.40** | **75.23** | Faro-Yi-9B-DPO's responses are also favored by GPT-4 Judge in MT-Bench ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd3a3691d27e60db0698b0/ArlnloL4aPfiiD6kUqaSH.png) ## How to Use Faro-Yi-9B-DPO uses the chatml template and performs well in both short and long contexts. For longer inputs under **24GB of VRAM**, I recommend to use vLLM to have a max prompt of 32K. Setting `kv_cache_dtype="fp8_e5m2"` allows for 48K input length. 4bit-AWQ quantization on top of that can boost input length to 160K, albeit with some performance impact. Adjust `max_model_len` arg in vLLM or `config.json` to avoid OOM. ```python import io import requests from PyPDF2 import PdfReader from vllm import LLM, SamplingParams llm = LLM(model="wenbopan/Faro-Yi-9B-DPO", kv_cache_dtype="fp8_e5m2", max_model_len=100000) pdf_data = io.BytesIO(requests.get("https://arxiv.org/pdf/2303.08774.pdf").content) document = "".join(page.extract_text() for page in PdfReader(pdf_data).pages) # 100 pages question = f"{document}\n\nAccording to the paper, what is the parameter count of GPT-4?" messages = [ {"role": "user", "content": question} ] # 83K tokens prompt = llm.get_tokenizer().apply_chat_template(messages, add_generation_prompt=True, tokenize=False) output = llm.generate(prompt, SamplingParams(temperature=0.8, max_tokens=500)) print(output[0].outputs[0].text) # Yi-9B-200K: 175B. GPT-4 has 175B \nparameters. How many models were combined to create GPT-4? Answer: 6. ... # Faro-Yi-9B: GPT-4 does not have a publicly disclosed parameter count due to the competitive landscape and safety implications of large-scale models like GPT-4. ... ``` <details> <summary>Or With Transformers</summary> ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained('wenbopan/Faro-Yi-9B-DPO', device_map="cuda") tokenizer = AutoTokenizer.from_pretrained('wenbopan/Faro-Yi-9B-DPO') messages = [ {"role": "system", "content": "You are a helpful assistant. 
Always answer with a short response."}, {"role": "user", "content": "Tell me what is Pythagorean theorem like you are a pirate."} ] input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device) generated_ids = model.generate(input_ids, max_new_tokens=512, temperature=0.5) response = tokenizer.decode(generated_ids[0], skip_special_tokens=True) # Aye, matey! The Pythagorean theorem is a nautical rule that helps us find the length of the third side of a triangle. ... ``` </details>
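To run one of the GGUF files listed above without the original model's vLLM or Transformers setup, a minimal sketch with `huggingface_hub` and `llama-cpp-python` might look like this. The quant choice, context size, and sampling settings are illustrative assumptions, not recommendations from the quantizer.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files from the table above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/wenbopan_-_Faro-Yi-9B-DPO-gguf",
    filename="Faro-Yi-9B-DPO.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # the base model supports much longer contexts

# llama-cpp-python will use the chat template embedded in the GGUF metadata when one is available.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the Pythagorean theorem in one sentence."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```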
digiplay/ya3p_VAE
digiplay
"2024-06-02T11:12:52Z"
2,220
8
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-03T15:46:22Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- in test ... ![06001024-6da0-49fd-ac13-20670a0c7cd4.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/osp-czTConvKsc1IA_9Gr.jpeg) ![401566c4-f613-49a1-ac1b-81a187d5d632.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/xO-8PAaspEAzy_97Y9GvY.jpeg) See other images I generated by huggingface's API : https://huggingface.co/digiplay/ya3p_VAE/discussions/2 girl upper body,close-up ,mountain and lake,wide landscape ![3146d505-6364-496e-9ec4-6080b03538b6.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/TEvpbrSpnGC2F-a18fTjb.jpeg) ![53cff0be-8f97-4b15-bc5e-d3f4b91e4fdb.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/hLV_xYODa43yDuyJnU8z8.jpeg) ![3233a2c3-494a-4b2c-ba3f-0738e278763e.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/LTu1TGECzdKGPZQQeH-PO.jpeg)
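Since the card only shows sample outputs, here is a minimal assumed loading sketch with 🧨 Diffusers, reusing the prompt quoted above; the scheduler and sampling settings are left at the pipeline defaults.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("digiplay/ya3p_VAE", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Prompt adapted from the example above; the VAE is already part of the checkpoint.
prompt = "girl upper body, close-up, mountain and lake, wide landscape"
image = pipe(prompt).images[0]
image.save("ya3p_sample.png")
```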
ONS-AI-RESEARCH/ONS-SOLAR-10.7B
ONS-AI-RESEARCH
"2024-02-05T05:28:38Z"
2,220
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "SOLAR-10.7B", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-06T03:17:36Z"
---
license: cc-by-nc-4.0
language:
- ko
tags:
- SOLAR-10.7B
---

# SOLAR-10.7B

### Model Details
- Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)

### Datasets
- sampled and translated from [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- sampled and translated from [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)

### Benchmark
TBD
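No usage snippet is provided in the card, so the following is a minimal assumed sketch for loading the model with 🤗 Transformers. The prompt is a plain Korean instruction; the repository does not document a specific prompt template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ONS-AI-RESEARCH/ONS-SOLAR-10.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "대한민국에서 꼭 가봐야 할 여행지 세 곳을 알려주세요."  # "Name three places one should visit in South Korea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```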
mgoin/tiny-random-llama-2-quant
mgoin
"2024-04-05T16:59:21Z"
2,220
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-05T16:57:06Z"
Entry not found
mradermacher/Llama3-8b-blocksworld-GGUF
mradermacher
"2024-06-07T23:23:36Z"
2,220
0
transformers
[ "transformers", "gguf", "en", "base_model:NanQiangHF/Llama3-8b-blocksworld", "endpoints_compatible", "region:us" ]
null
"2024-06-07T22:55:06Z"
--- base_model: NanQiangHF/Llama3-8b-blocksworld language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/NanQiangHF/Llama3-8b-blocksworld <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama3-8b-blocksworld-GGUF/resolve/main/Llama3-8b-blocksworld.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
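As a concrete starting point for the usage note above, here is a minimal assumed sketch that downloads the "fast, recommended" Q4_K_M quant and runs it with `llama-cpp-python`; the file choice, context size, and prompt are illustrative only.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Llama3-8b-blocksworld-GGUF",
    filename="Llama3-8b-blocksworld.Q4_K_M.gguf",  # "fast, recommended" in the table above
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Plain completion; the blocksworld fine-tune's exact prompt format is not documented in this card.
output = llm(
    "You have three blocks: A on B, B on the table, C on the table. How do you stack C on A?",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```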
openchat/openchat_v3.2
openchat
"2023-11-29T08:16:32Z"
2,219
42
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-07-30T10:12:00Z"
--- license: llama2 --- # OpenChat: Advancing Open-source Language Models with Imperfect Data</h1> <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> </div> [OpenChat](https://github.com/imoneoi/openchat) is a series of open-source language models based on supervised fine-tuning (SFT). We leverage the ~80k ShareGPT conversations with a conditioning strategy and weighted loss to achieve remarkable performance despite our simple methods. Our final vision is to develop a high-performance, open-source, and commercially available large language model, and we are continuously making progress. **🔥 Rank #1 of 13B open-source models | 89.5% win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) | 7.01 score on [MT-bench](https://chat.lmsys.org/?leaderboard)** **💲 FREE for commercial use under [Llama 2 Community License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)** **🕒 Super efficient padding-free finetuning for applications, only 10 hours on 8xA100 80G** ## <a id="models"></a> Usage To use these models, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat/#installation) and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a GPU with at least 48GB RAM or two consumer GPUs with tensor parallelism. To enable tensor parallelism, append `--tensor-parallel-size 2` to the serving command. When started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). See the example request below for reference. Additionally, you can access the [OpenChat Web UI](#web-ui) for a user-friendly experience. To deploy the server as an online service, use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. We recommend using a [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server for security purposes. *Note:* If IPv6 address errors occur, which is a [vLLM issue](https://github.com/vllm-project/vllm/issues/570), please run `export NCCL_IGNORE_DISABLED_P2P=1` before starting the server. <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_v3.2", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. 
Write a poem to describe yourself"}] }' ``` </details> | Model | Size | Context | Weights | Serving | |--------------|------|---------|--------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | OpenChat 3.2 | 13B | 4096 | [Huggingface](https://huggingface.co/openchat/openchat_v3.2) | `python -m ochat.serving.openai_api_server --model-type openchat_v3.2 --model openchat/openchat_v3.2 --engine-use-ray --worker-use-ray --max-num-batched-tokens 5120` | | OpenChat 3.1 | 13B | 4096 | [Huggingface](https://huggingface.co/openchat/openchat_v3.1) | `python -m ochat.serving.openai_api_server --model-type openchat_v3.1_llama2 --model openchat/openchat_v3.1 --engine-use-ray --worker-use-ray --max-num-batched-tokens 5120` | For inference with Huggingface Transformers (slow and not recommended), follow the conversation template provided below: <details> <summary>Conversation templates (click to expand)</summary> V3.2 ```python # Single-turn V3.2 tokenize("GPT4 User: Hello<|end_of_turn|>GPT4 Assistant:") # Result: [1, 402, 7982, 29946, 4911, 29901, 15043, 32000, 402, 7982, 29946, 4007, 22137, 29901] # Multi-turn V3.2 tokenize("GPT4 User: Hello<|end_of_turn|>GPT4 Assistant: Hi<|end_of_turn|>GPT4 User: How are you today?<|end_of_turn|>GPT4 Assistant:") # Result: [1, 402, 7982, 29946, 4911, 29901, 15043, 32000, 402, 7982, 29946, 4007, 22137, 29901, 6324, 32000, 402, 7982, 29946, 4911, 29901, 1128, 526, 366, 9826, 29973, 32000, 402, 7982, 29946, 4007, 22137, 29901] ``` V3.1 ```python # Single-turn V3.1 tokenize("Assistant is GPT4<|end_of_turn|>User: Hello<|end_of_turn|>Assistant:") # Result: [1, 4007, 22137, 338, 402, 7982, 29946, 32000, 4911, 29901, 15043, 32000, 4007, 22137, 29901] # Multi-turn V3.1 tokenize("Assistant is GPT4<|end_of_turn|>User: Hello<|end_of_turn|>Assistant: Hi<|end_of_turn|>User: How are you today?<|end_of_turn|>Assistant:") # Result: [1, 4007, 22137, 338, 402, 7982, 29946, 32000, 4911, 29901, 15043, 32000, 4007, 22137, 29901, 6324, 32000, 4911, 29901, 1128, 526, 366, 9826, 29973, 32000, 4007, 22137, 29901] ``` </details> ## <a id="benchmarks"></a> Benchmarks We have evaluated our models using the two most popular evaluation benchmarks **, including AlpacaEval and MT-bench. Here we list the top models with our released versions, sorted by model size in descending order. The full version can be found on the [MT-bench](https://chat.lmsys.org/?leaderboard) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) leaderboards. To ensure consistency, we used the same routine as ChatGPT / GPT-4 to run these benchmarks. We started the OpenAI API-compatible server and set the `openai.api_base` to `http://localhost:18888/v1` in the benchmark program. | **Model** | **Size** | **Context** | **💲Free** | **AlpacaEval (win rate %)** | **MT-bench (win rate adjusted %)** | **MT-bench (score)** | |------------------|----------|-------------|------------|-----------------------------|------------------------------------|----------------------| | | | | | **v.s. text-davinci-003** | **v.s. 
ChatGPT** | | | GPT-4 | 1.8T* | 8K | ❌ | 95.3 | 82.5 | 8.99 | | ChatGPT | 175B* | 4K | ❌ | 89.4 | 50.0 | 7.94 | | Llama-2-70B-Chat | 70B | 4K | ✅ | 92.7 | | 6.86 | | **OpenChat 3.2** | **13B** | **4K** | ✅ | **89.1** | **51.6** | **7.01** | | **OpenChat 3.1** | **13B** | **4K** | ✅ | **89.5** | **50.0** | **6.65** | | Llama-2-13B-Chat | 13B | 4K | ✅ | 81.0 | | 6.65 | | Vicuna 1.3 | 13B | 2K | ❌ | 82.1 | 37.5 | 6.00 | *: Estimated model size **: The benchmark metrics represent a quantified measure of a subset of the model's capabilities. A win-rate greater than 50% does not necessarily indicate that the model is better than ChatGPT in all scenarios or for all use cases. It is essential to consider the specific tasks or applications for which the model was evaluated and compare the results accordingly. ## Limitations **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. ## License Our OpenChat V3 models are licensed under the [Llama 2 Community License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ```
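For completeness, the curl request shown earlier can also be issued from Python with the official `openai` client (version 1.x) pointed at the local server. This small sketch assumes the server is running at the default address without API keys enabled.

```python
from openai import OpenAI

# The OpenChat server exposes an OpenAI-compatible API at localhost:18888.
client = OpenAI(base_url="http://localhost:18888/v1", api_key="none")

response = client.chat.completions.create(
    model="openchat_v3.2",
    messages=[{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}],
)
print(response.choices[0].message.content)
```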
cgus/NinjaMouse2-2.5B-v0.2-iMat-GGUF
cgus
"2024-06-24T19:27:34Z"
2,219
1
null
[ "gguf", "en", "dataset:teknium/openhermes", "dataset:trollek/Mouse-Diffusion-Instruct", "dataset:trollek/CodeMouse", "dataset:trollek/Panoia-v01", "dataset:jondurbin/airoboros-3.2", "dataset:mlabonne/orpo-dpo-mix-40k", "dataset:Magpie-Align/Magpie-Air-300K-Filtered", "base_model:trollek/NinjaMouse2-2.5B-v0.2", "license:apache-2.0", "region:us" ]
null
"2024-06-24T16:27:15Z"
---
license: apache-2.0
datasets:
- teknium/openhermes
- trollek/Mouse-Diffusion-Instruct
- trollek/CodeMouse
- trollek/Panoia-v01
- jondurbin/airoboros-3.2
- mlabonne/orpo-dpo-mix-40k
- Magpie-Align/Magpie-Air-300K-Filtered
language:
- en
base_model: trollek/NinjaMouse2-2.5B-v0.2
---

# NinjaMouse2-v0.2-iMat-GGUF

Model: [NinjaMouse2-2.5B-v0.2](https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.2)
Model creator: [trollek](https://huggingface.co/trollek)

## Quantization notes

Made with llama.cpp b3154; all quants were created with an imatrix file based on the Exllamav2 default dataset.

# Original model card

# NinjaMouse2-v0.2

A brand spanking new model with a silly name. Brought to you by Anoia, the Goddess of Things That Get Stuck in Drawers, and the psychological damage of having optic nerves. With improved image prompting and assistance.

### Template

It uses the default template of danube2:

```jinja2
<|prompt|>{{instruction}}</s><|answer|>{{response}}</s>
```

### Image prompting

It can be used with the Ollama ComfyUI extension:

[<img src="https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.2/resolve/main/ollama_comfyui.png" width="800px"/>](https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.2/resolve/main/ollama_comfyui.png)

[<img src="https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.2/resolve/main/tophat_cat.png" width="800px"/>](https://huggingface.co/trollek/NinjaMouse2-2.5B-v0.2/resolve/main/tophat_cat.png)

### Ollama

```bash
ollama pull trollek/ninjamouse2:34l-v02-q6_K
ollama pull trollek/ninjamouse2:34l-v02-q5_K_S
ollama pull trollek/ninjamouse2:34l-v02-q4_K_S
```

Trying to fine-tune the chat model even further was a mistake, but a valuable one to make. So was the name. The model is delightful though and does quite well, but it will be the last Kung Fu Mouse that I make. It does what I wanted the first one to do, and I am kind of proud of this one considering how many failures it took. The rodents and I thank you for your support.
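If you are not using Ollama, the danube2 template above can be applied by hand. The sketch below uses `llama-cpp-python`; note that the GGUF filename is hypothetical, because the exact quant filenames are not listed in this card.

```python
from llama_cpp import Llama


def format_prompt(instruction: str) -> str:
    # danube2 template quoted above: <|prompt|>{{instruction}}</s><|answer|>{{response}}</s>
    return f"<|prompt|>{instruction}</s><|answer|>"


# Hypothetical local path to one of the quants from this repository.
llm = Llama(model_path="NinjaMouse2-2.5B-v0.2.Q5_K_S.gguf", n_ctx=4096)

output = llm(
    format_prompt("Write a Stable Diffusion prompt for a ninja mouse in a moonlit dojo."),
    max_tokens=128,
    stop=["</s>"],
)
print(output["choices"][0]["text"])
```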
timm/hrnet_w48.ms_in1k
timm
"2023-04-24T21:32:23Z"
2,218
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1908.07919", "license:mit", "region:us" ]
image-classification
"2023-04-24T21:31:09Z"
--- tags: - image-classification - timm library_name: timm license: mit datasets: - imagenet-1k --- # Model card for hrnet_w48.ms_in1k A HRNet image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 77.5 - GMACs: 17.3 - Activations (M): 28.6 - Image size: 224 x 224 - **Papers:** - Deep High-Resolution Representation Learning for Visual Recognition: https://arxiv.org/abs/1908.07919 - **Original:** https://github.com/HRNet/HRNet-Image-Classification - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('hrnet_w48.ms_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'hrnet_w48.ms_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 112, 112]) # torch.Size([1, 128, 56, 56]) # torch.Size([1, 256, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'hrnet_w48.ms_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2048, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). 
## Citation ```bibtex @article{WangSCJDZLMTWLX19, title={Deep High-Resolution Representation Learning for Visual Recognition}, author={Jingdong Wang and Ke Sun and Tianheng Cheng and Borui Jiang and Chaorui Deng and Yang Zhao and Dong Liu and Yadong Mu and Mingkui Tan and Xinggang Wang and Wenyu Liu and Bin Xiao}, journal = {TPAMI} year={2019} } ```
jayparmr/iCoMix_v05
jayparmr
"2023-06-01T11:48:47Z"
2,218
0
diffusers
[ "diffusers", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-05-30T09:56:52Z"
Entry not found
circulus/Llama-2-13b-orca-v1
circulus
"2023-08-02T09:20:12Z"
2,218
5
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-01T04:58:53Z"
---
license: mit
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

![img](https://huggingface.co/circulus/Llama-2-13b-orca-v1/resolve/main/llama.jpg)

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# Load the tokenizer and the model in 4-bit (bitsandbytes) so the 13B weights fit on a single GPU.
model_name = "circulus/Llama-2-13b-orca-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", quantization_config=config)
```
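The snippet above stops after instantiating the model. A short, assumed continuation for actually generating text is shown below, reusing the `tokenizer` and `model` objects from that block; the prompt format is not documented in this card, so a plain prompt is used.

```python
prompt = "Explain why the sky is blue in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```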
Yntec/NaughtyChildren
Yntec
"2023-09-26T08:45:33Z"
2,218
3
diffusers
[ "diffusers", "safetensors", "Children Books", "Photorealistic", "Highly Detailed", "AreThoseLevel4Plates", "Zovya", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-09-26T05:48:03Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Children Books - Photorealistic - Highly Detailed - AreThoseLevel4Plates - Zovya - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # NaughtyChildren A mix of EdgeOfRealism by AreThoseLevel4Plates and ChildrenStoriesCustom by Zovya, improving the backgrounds of the former and the faces of the latter. It has the zVAE baked in. Comparison: ![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/S4MAInZzPcTwuEF9quRO-.png) (Click for larger) Sample and prompts: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/sAk4C0oADouqH46Ti_Dvs.png) Professional fine details photo portrait of pretty cute girl from kazan, tatarstan kid in the postsoviet suburbia, tatar, detailed photo, beautiful eyes, instagram, iphone. Original pages: https://civitai.com/models/21813?modelVersionId=26041 (Edge Of Realism) https://civitai.com/api/download/models/106092 (ChildrenStoriesCustom) # Recipe: - SuperMerge Weight sum Train difference use MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1 Model A: ChildrenStoriesCustom Model B: EdgeOfRealism Output: NaughtyChildren - Fp16 no-ema: NaughtyChildrenMini - Bake in zVae: NaughtyChildrenz
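As with the other Diffusers checkpoints in this list, a minimal assumed loading sketch is shown below, reusing the sample prompt quoted above; because the zVAE is baked in, no separate VAE needs to be loaded.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/NaughtyChildren", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Sample prompt from the card above.
prompt = ("Professional fine details photo portrait of pretty cute girl from kazan, tatarstan kid "
          "in the postsoviet suburbia, tatar, detailed photo, beautiful eyes, instagram, iphone.")
image = pipe(prompt).images[0]
image.save("naughty_children_sample.png")
```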