Dataset schema:
- modelId: string (length 5–122)
- author: string (length 2–42)
- last_modified: timestamp[us, tz=UTC]
- downloads: int64 (range 0–738M)
- likes: int64 (range 0–11k)
- library_name: string (245 classes)
- tags: list (length 1–4.05k)
- pipeline_tag: string (48 classes)
- createdAt: timestamp[us, tz=UTC]
- card: string (length 1–901k)
dbaranchuk/sdxl-cfg-distill-unet
dbaranchuk
2024-06-25T13:07:33Z
624
0
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "region:us" ]
null
2024-06-25T13:02:07Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
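The card above leaves the "How to Get Started" section empty. Based only on the repo id and its tags (`diffusers`, `safetensors`), a minimal loading sketch might look like the following; the assumption that this UNet is a drop-in replacement for the SDXL base UNet, and the use of `guidance_scale=1.0` for a CFG-distilled model, are inferences from the model name, not statements from the card:

```python
# Hypothetical quick-start for dbaranchuk/sdxl-cfg-distill-unet (not from the card).
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel

# Assumption: the repo holds a CFG-distilled SDXL UNet compatible with UNet2DConditionModel.
unet = UNet2DConditionModel.from_pretrained(
    "dbaranchuk/sdxl-cfg-distill-unet", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", unet=unet, torch_dtype=torch.float16
).to("cuda")

# A CFG-distilled UNet bakes guidance into the weights, so guidance_scale=1.0
# is the natural choice here (again, an assumption based on the model name).
image = pipe("a photo of an astronaut riding a horse", guidance_scale=1.0).images[0]
image.save("astronaut.png")
```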
Helsinki-NLP/opus-mt-ar-fr
Helsinki-NLP
2023-08-16T11:25:41Z
623
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ar", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-ar-fr * source languages: ar * target languages: fr * OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip) * test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt) * test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.ar.fr | 43.5 | 0.602 |
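The card documents the training setup but gives no usage snippet. Since the tags list `transformers` and `marian`, a minimal translation sketch with the standard MarianMT classes would be (the Arabic example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ar-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a single Arabic sentence into French.
batch = tokenizer(["أنا أحب الترجمة الآلية."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```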
Helsinki-NLP/opus-mt-de-pl
Helsinki-NLP
2023-08-16T11:28:27Z
623
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "de", "pl", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-de-pl * source languages: de * target languages: pl * OPUS readme: [de-pl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de-pl/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.zip) * test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.test.txt) * test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de-pl/opus-2020-01-20.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.de.pl | 41.2 | 0.631 |
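As with the model above, the card gives no usage snippet; the same family of models can also be driven through the high-level `pipeline` API (a minimal sketch, with an illustrative German sentence):

```python
from transformers import pipeline

# The "translation" task lets transformers pick sensible Marian generation defaults.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-pl")
print(translator("Maschinelle Übersetzung wird immer besser.")[0]["translation_text"])
```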
deepset/bert-medium-squad2-distilled
deepset
2023-05-05T06:59:02Z
623
3
transformers
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "exbert", "en", "dataset:squad_v2", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en license: mit tags: - exbert datasets: - squad_v2 thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg model-index: - name: deepset/bert-medium-squad2-distilled results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 69.8231 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmE4MGRkZTVjNmViMGNjYjVhY2E1NzcyOGQ1OWE1MWMzMjY5NWU0MmU0Y2I4OWU4YTU5OWQ5YTI2NWE1NmM0ZSIsInZlcnNpb24iOjF9.tnCJvWzMctTwiQu5yig_owO2ZI1t1MZz1AN2lQy4COAGOzuMovD-74acQvMbxJQoRfNNkIetz2hqYivf1lJKDw - type: f1 value: 72.9232 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTMwNzk0ZDRjNGUyMjQyNzc1NzczZmUwMTU2MTM5MGQ3M2NhODlmOTU4ZDI0YjhlNTVjNDA1MGEwM2M1MzIyZSIsInZlcnNpb24iOjF9.eElGmTOXH_qHTNaPwZ-dUJfVz9VMvCutDCof_6UG_625MwctT_j7iVkWcGwed4tUnunuq1BPm-0iRh1RuuB-AQ --- ## Overview **Language model:** deepset/bert-medium-squad2-distilled **Language:** English **Training data:** SQuAD 2.0 training set **Eval data:** SQuAD 2.0 dev set **Infrastructure**: 1x V100 GPU **Published**: Apr 21st, 2021 ## Details - Haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model. ## Hyperparameters ``` batch_size = 6 n_epochs = 2 max_seq_len = 384 learning_rate = 3e-5 lr_schedule = LinearWarmup embeds_dropout_prob = 0.1 temperature = 5 distillation_loss_weight = 1 ``` ## Performance ``` "exact": 68.6431398972458 "f1": 72.7637083790805 ``` ## Authors - Timo Möller: `timo.moeller [at] deepset.ai` - Julian Risch: `julian.risch [at] deepset.ai` - Malte Pietsch: `malte.pietsch [at] deepset.ai` - Michel Bartels: `michel.bartels [at] deepset.ai` ## About us ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo) We bring NLP to the industry via open source! Our focus: Industry specific language models & large scale QA systems. Some of our work: - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert) - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad) - [FARM](https://github.com/deepset-ai/FARM) - [Haystack](https://github.com/deepset-ai/haystack/) Get in touch: [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
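The card reports distillation hyperparameters and scores but no inference snippet. A minimal extractive-QA sketch with the standard `pipeline` API (the question and context strings are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/bert-medium-squad2-distilled")
result = qa(
    question="Which model was used as the teacher?",
    context="deepset/bert-large-uncased-whole-word-masking-squad2 "
            "was used as the teacher model during distillation.",
)
print(result["answer"], result["score"])
```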
microsoft/DialogRPT-depth
microsoft
2021-05-23T09:15:24Z
623
5
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "arxiv:2009.06978", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Demo Please try this [➤➤➤ Colab Notebook Demo (click me!)](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) | Context | Response | `depth` score | | :------ | :------- | :------------: | | I love NLP! | Can anyone recommend a nice review paper? | 0.724 | | I love NLP! | Me too! | 0.032 | The `depth` score predicts how likely the response is to receive a long follow-up discussion thread. # DialogRPT-depth ### Dialog Ranking Pretrained Transformers > How likely is a dialog response to be upvoted 👍 and/or get replies 💬? This is what [**DialogRPT**](https://github.com/golsun/DialogRPT) is trained to predict. It is a set of dialog response ranking models proposed by [Microsoft Research NLP Group](https://www.microsoft.com/en-us/research/group/natural-language-processing/) trained on more than 100 million human feedback data points. It can be used to improve existing dialog generation models (e.g., [DialoGPT](https://huggingface.co/microsoft/DialoGPT-medium)) by re-ranking the generated response candidates. Quick Links: * [EMNLP'20 Paper](https://arxiv.org/abs/2009.06978/) * [Dataset, training, and evaluation](https://github.com/golsun/DialogRPT) * [Colab Notebook Demo](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing) We considered the following tasks and provided corresponding pretrained models. |Task | Description | Pretrained model | | :------------- | :----------- | :-----------: | | **Human feedback** | **given a context and its two human responses, predict...**| | `updown` | ... which gets more upvotes? | [model card](https://huggingface.co/microsoft/DialogRPT-updown) | | `width`| ... which gets more direct replies? | [model card](https://huggingface.co/microsoft/DialogRPT-width) | | `depth`| ... which gets the longer follow-up thread? | this model | | **Human-like** (human vs fake) | **given a context and one human response, distinguish it from...** | | `human_vs_rand`| ... a random human response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-rand) | | `human_vs_machine`| ... a machine-generated response | [model card](https://huggingface.co/microsoft/DialogRPT-human-vs-machine) | ### Contact: Please create an issue on [our repo](https://github.com/golsun/DialogRPT) ### Citation: ``` @inproceedings{gao2020dialogrpt, title={Dialogue Response Ranking Training with Large-Scale Human Feedback Data}, author={Xiang Gao and Yizhe Zhang and Michel Galley and Chris Brockett and Bill Dolan}, year={2020}, booktitle={EMNLP} } ```
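Beyond the Colab demo, a minimal local scoring sketch, following the context/response format used in the DialogRPT repo (context and response joined by the GPT-2 end-of-text token; the `score` helper name is ours):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-depth")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-depth")
model.eval()

def score(context: str, response: str) -> float:
    # DialogRPT scores "context <|endoftext|> response" pairs with a single logit.
    inputs = tokenizer(context + "<|endoftext|>" + response, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.sigmoid(logits).item()

print(score("I love NLP!", "Can anyone recommend a nice review paper?"))  # higher depth
print(score("I love NLP!", "Me too!"))                                    # lower depth
```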
Bachstelze/poetryRapGPT
Bachstelze
2022-09-19T15:04:05Z
623
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "Text Generation", "de", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-08-17T07:12:58Z
--- language: de widget: - text: "[Title_nullsechsroy feat. YFG Pave_" tags: - Text Generation datasets: - genius lyrics license: mit --- # GPT-Rapgenerator The Rapgenerator is fine-tuned from [german-poetry-gpt2](https://huggingface.co/Anjoe/german-poetry-gpt2) for 20 epochs to imitate [nullsechsroy](https://genius.com/artists/Nullsechsroy). We used the [genius](https://docs.genius.com/#/songs-h2) song lyrics from the following artists: ['Ace Tee', 'Aligatoah', 'AnnenMayKantereit', 'Apache 207', 'Azad', 'Badmómzjay', 'Bausa', 'Blumentopf', 'Blumio', 'Capital Bra', 'Casper', 'Celo & Abdi', 'Cro', 'Dardan', 'Dendemann', 'Die P', 'Dondon', 'Dynamite Deluxe', 'Edgar Wasser', 'Eko Fresh', 'Farid Bang', 'Favorite', 'Genetikk', 'Haftbefehl', 'Haiyti', 'Huss und Hodn', 'Jamule', 'Juju', 'Kasimir1441', 'Katja Krasavice', 'Kay One', 'Kitty Kat', 'Kool Savas', 'LX & Maxwell', 'Leila Akinyi', 'Loredana', 'Loredana & Mozzik', 'Luciano', 'Marsimoto', 'Marteria', 'Morlockk Dilemma', 'Moses Pelham', 'Nimo', 'NullSechsRoy', 'Prinz Pi', 'SSIO', 'SXTN', 'Sabrina Setlur', 'Samy Deluxe', 'Sanito', 'Sebastian Fitzek', 'Shirin David', 'Summer Cem', 'T-Low', 'Ufo361', 'YBRE', 'YFG Pave'] # Example song structure ``` [Title_nullsechsroy_Goodies] [Part 1_nullsechsroy_Goodies] Soulja Boy – „Pretty Boy Swag“ Heute bei ihr, aber morgen schon weg, ja .. [Hook_nullsechsroy_Goodies] Ich hab' Jungs in der Trap, ich hab' Jungs an der Uni (Ahh) ... [Part 2_nullsechsroy_Goodies] Ja, Soulja Boy – „Pretty Boy Swag“ ... [Hook_nullsechsroy_Goodies] Ich hab' Jungs in der Trap, ich hab' Jungs an der Uni (Ahh) ... [Post-Hook_nullsechsroy_Goodies] Ja, ich weiß, sie findet niemals ein'n wie mich (Ahh) ... ``` # Source code to create a song ``` from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM # load the model from huggingface rap_model = AutoModelForCausalLM.from_pretrained("Bachstelze/poetryRapGPT") tokenizer = AutoTokenizer.from_pretrained("Anjoe/german-poetry-gpt2") rap_pipe = pipeline('text-generation', model=rap_model, tokenizer=tokenizer, pad_token_id=tokenizer.eos_token_id, max_length=250) # set the artist song_artist = "nullsechsroy" # "nullsechsroy Deluxe" # add a title idea or leave it blank title = "" # "Kristall" "Fit" # definition of the song structure type_with_linenumbers = [("Intro",4), ("Hook",4), ("Part 1",6), ("Part 2",6), ("Outro",4)] def set_title(song_parts): """ we create a title if it isn't set already and add the title to the song parts dictionary """ if len(title) > 0: song_parts["Title"] = "\n[Title_" + song_artist + "_" + title + "]\n" song_parts["artist_with_title"] = song_artist + "_" + title else: title_input = "\n[Title_" + song_artist + "_" title_lines = rap_pipe(title_input)[0]['generated_text'] index_title_end = title_lines.index("]\n") artist_with_title = title_lines[8:index_title_end] song_parts["Title"] = title_lines[:index_title_end+1] song_parts["artist_with_title"] = artist_with_title def create_song_by_parts(): """ we iterate over the song structure and return the dictionary with the song parts """ song_parts = {} set_title(song_parts) for (part_type, line_number) in type_with_linenumbers: new_song_part = create_song_part(part_type, song_parts["artist_with_title"], line_number) song_parts[part_type] = new_song_part return song_parts def get_line(pipe_input, line_number): """ We generate a new song line. This function could be scaled to more lines. 
""" new_lines = rap_pipe(pipe_input)[0]['generated_text'].split("\n") if len(new_lines) > line_number + 3: new_line = new_lines[line_number+3] + "\n" return new_line else: #retry return get_line(pipe_input, line_number) def create_song_part(part_type, artist_with_title, lines_number): """ we generate one song part """ start_type = "\n["+part_type+"_"+artist_with_title+"]\n" song_part = start_type # + preset start line lines = [""] for line_number in range(lines_number): pipe_input = start_type + lines[-1] new_line = get_line(pipe_input, line_number) lines.append(new_line) song_part += new_line return song_part def print_song(song_parts): """ Let's print the generated song """ print(song_parts["Title"]) print(song_parts["Intro"]) print(song_parts["Part 1"]) print(song_parts["Hook"]) print(song_parts["Part 2"]) print(song_parts["Hook"]) print(song_parts["Outro"]) # start the generation of one song song_parts = create_song_by_parts() print_song(song_parts) ```
timm/dpn131.mx_in1k
timm
2023-04-21T22:01:36Z
623
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1707.01629", "license:apache-2.0", "region:us" ]
image-classification
2023-04-21T22:00:18Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for dpn131.mx_in1k A DPN (Dual-Path Net) image classification model. Trained on ImageNet-1k in MXNet by paper authors and ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 79.3 - GMACs: 16.1 - Activations (M): 33.0 - Image size: 224 x 224 - **Papers:** - Dual Path Networks: https://arxiv.org/abs/1707.01629 - **Dataset:** ImageNet-1k - **Original:** https://github.com/cypw/DPNs ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('dpn131.mx_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'dpn131.mx_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 128, 112, 112]) # torch.Size([1, 352, 56, 56]) # torch.Size([1, 832, 28, 28]) # torch.Size([1, 1984, 14, 14]) # torch.Size([1, 2688, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'dpn131.mx_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2688, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @article{Chen2017, title={Dual Path Networks}, author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng}, journal={arXiv preprint arXiv:1707.01629}, year={2017} } ```
timm/dla169.in1k
timm
2023-04-24T21:15:11Z
623
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1707.06484", "license:bsd-3-clause", "region:us" ]
image-classification
2023-04-24T19:37:21Z
--- tags: - image-classification - timm library_name: timm license: bsd-3-clause datasets: - imagenet-1k --- # Model card for dla169.in1k A DLA (Deep Layer Aggregation) image classification model. Trained on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 53.4 - GMACs: 11.6 - Activations (M): 20.2 - Image size: 224 x 224 - **Papers:** - Deep Layer Aggregation: https://arxiv.org/abs/1707.06484 - **Original:** https://github.com/ucbdrive/dla - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('dla169', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'dla169', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 128, 56, 56]) # torch.Size([1, 256, 28, 28]) # torch.Size([1, 512, 14, 14]) # torch.Size([1, 1024, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'dla169', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1024, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{yu2018deep, title={Deep layer aggregation}, author={Yu, Fisher and Wang, Dequan and Shelhamer, Evan and Darrell, Trevor}, booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition}, year={2018} } ```
TheBloke/Hermes-LLongMA-2-13B-8K-GGUF
TheBloke
2023-09-27T12:47:40Z
623
4
transformers
[ "transformers", "gguf", "llama", "base_model:conceptofmind/Hermes-LLongMA-2-13b-8k", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-05T14:54:50Z
--- license: llama2 model_name: Hermes LLongMA 2 13B 8K base_model: conceptofmind/Hermes-LLongMA-2-13b-8k inference: false model_creator: conceptofmind model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Hermes LLongMA 2 13B 8K - GGUF - Model creator: [conceptofmind](https://huggingface.co/conceptofmind) - Original model: [Hermes LLongMA 2 13B 8K](https://huggingface.co/conceptofmind/Hermes-LLongMA-2-13b-8k) <!-- description start --> ## Description This repo contains GGUF format model files for [conceptofmind's Hermes LLongMA 2 13B 8K](https://huggingface.co/conceptofmind/Hermes-LLongMA-2-13b-8k). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF) * [conceptofmind's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/conceptofmind/Hermes-LLongMA-2-13b-8k) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [hermes-llongma-2-13b-8k.Q2_K.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [hermes-llongma-2-13b-8k.Q3_K_S.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [hermes-llongma-2-13b-8k.Q3_K_M.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [hermes-llongma-2-13b-8k.Q3_K_L.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [hermes-llongma-2-13b-8k.Q4_0.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [hermes-llongma-2-13b-8k.Q4_K_S.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [hermes-llongma-2-13b-8k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [hermes-llongma-2-13b-8k.Q5_0.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [hermes-llongma-2-13b-8k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [hermes-llongma-2-13b-8k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [hermes-llongma-2-13b-8k.Q6_K.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [hermes-llongma-2-13b-8k.Q8_0.gguf](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGUF/blob/main/hermes-llongma-2-13b-8k.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Hermes-LLongMA-2-13B-8K-GGUF and below it, a specific filename to download, such as: hermes-llongma-2-13b-8k.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Hermes-LLongMA-2-13B-8K-GGUF hermes-llongma-2-13b-8k.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Hermes-LLongMA-2-13B-8K-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Hermes-LLongMA-2-13B-8K-GGUF hermes-llongma-2-13b-8k.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m hermes-llongma-2-13b-8k.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
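Before the ctransformers walkthrough below, here is a minimal llama-cpp-python sketch for the same file; the parameter values (context size, GPU layers) are illustrative, not recommendations from this README:

```python
from llama_cpp import Llama

# Assumes hermes-llongma-2-13b-8k.q4_K_M.gguf has been downloaded to the current directory.
llm = Llama(
    model_path="./hermes-llongma-2-13b-8k.q4_K_M.gguf",
    n_ctx=4096,       # illustrative; the model supports up to 8K context
    n_gpu_layers=32,  # set to 0 for CPU-only inference
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI\n\n### Response:"
)
output = llm(prompt, max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```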
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Hermes-LLongMA-2-13B-8K-GGUF", model_file="hermes-llongma-2-13b-8k.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: conceptofmind's Hermes LLongMA 2 13B 8K You can find the Llama-2 usage policy here: https://ai.meta.com/llama/use-policy/ Llama 2 Community License Agreement Llama 2 Version Release Date: July 18, 2023 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/. “Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. <!-- original-model-card end -->
nickprock/stsbm-sentence-flare-it
nickprock
2023-12-03T16:32:25Z
623
1
sentence-transformers
[ "sentence-transformers", "pytorch", "xlm-roberta", "feature-extraction", "sentence-similarity", "transformers", "mteb", "it", "dataset:stsb_multi_mt", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2023-09-28T08:55:20Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - mteb license: apache-2.0 datasets: - stsb_multi_mt language: - it library_name: sentence-transformers model-index: - name: stsbm-sentence-flare-it results: - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (it) config: it split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 38.87693342299933 - type: f1 value: 38.59797368919743 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (it) config: it split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 43.29522528581036 - type: f1 value: 41.885971841007155 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (it) config: it split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 56.8889140577969 - type: cos_sim_spearman value: 65.7143262279769 - type: euclidean_pearson value: 57.08929223691975 - type: euclidean_spearman value: 64.46289912272027 - type: manhattan_pearson value: 58.275319862511424 - type: manhattan_spearman value: 64.84248858822639 --- # stsbm-sentence-flare-it This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["Una ragazza si acconcia i capelli.", "Una ragazza si sta spazzolando i capelli."] model = SentenceTransformer('nickprock/stsbm-sentence-flare-it') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["Una ragazza si acconcia i capelli.", "Una ragazza si sta spazzolando i capelli."] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('nickprock/stsbm-sentence-flare-it') model = AutoModel.from_pretrained('nickprock/stsbm-sentence-flare-it') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nickprock/stsbm-sentence-flare-it) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 360 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 500, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1500, "warmup_steps": 360, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors Further information is available on the [flare-it page](https://huggingface.co/osiria/flare-it)
TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ
TheBloke
2024-01-03T21:36:28Z
623
2
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "mistral", "finetune", "dpo", "Instruct", "augmentation", "german", "conversational", "en", "de", "fr", "it", "es", "dataset:argilla/distilabel-math-preference-dpo", "base_model:VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2023-12-25T08:44:52Z
--- base_model: VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct datasets: - argilla/distilabel-math-preference-dpo inference: false language: - en - de - fr - it - es library_name: transformers license: apache-2.0 model_creator: VAGO solutions model_name: SauerkrautLM Mixtral 8X7B Instruct model_type: mixtral pipeline_tag: text-generation prompt_template: '[INST] {prompt} [/INST] ' quantized_by: TheBloke tags: - mistral - finetune - dpo - Instruct - augmentation - german - mixtral --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SauerkrautLM Mixtral 8X7B Instruct - AWQ - Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions) - Original model: [SauerkrautLM Mixtral 8X7B Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) <!-- description start --> ## Description This repo contains AWQ model files for [VAGO solutions's SauerkrautLM Mixtral 8X7B Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct). **MIXTRAL AWQ** This is a Mixtral AWQ model. For AutoAWQ inference, please install AutoAWQ 0.1.8 or later. Support via Transformers is also available, but currently requires installing Transformers from GitHub: `pip3 install git+https://github.com/huggingface/transformers.git` vLLM: version 0.2.6 is confirmed to support Mixtral AWQs. TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!) ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead. AWQ models are supported by (note that not all of these may support Mixtral models yet - see above): - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. 
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-GGUF) * [VAGO solutions's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Mistral ``` [INST] {prompt} [/INST] ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ/tree/main) | 4 | 128 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 24.65 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-Mixtral-8x7B-Instruct-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. 
For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Note: this must be a plain string, not an f-string, because the {prompt}
# placeholder is filled in via .format() below.
prompt_template = '''[INST] {prompt} [/INST]
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''[INST] {prompt} [/INST]
'''

client = InferenceClient(endpoint_url)
# Send the templated prompt, not the raw prompt.
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''[INST] {prompt} [/INST]
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: VAGO solutions's SauerkrautLM Mixtral 8X7B Instruct ![SauerkrautLM](https://vago-solutions.de/wp-content/uploads/2023/12/Sauerkraut_Instruct_MoE_Instruct.png "SauerkrautLM-Mixtral-8x7B") ## VAGO solutions SauerkrautLM-Mixtral-8x7B-Instruct Introducing **SauerkrautLM-Mixtral-8x7B-Instruct** – our Sauerkraut version of the powerful Mixtral-8x7B-Instruct! Aligned with **DPO** # Table of Contents 1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models) 2. [Model Details](#model-details) - [Prompt template](#prompt-template) - [Training Dataset](#training-dataset) - [Data Contamination Test](#data-contamination-test-results) 3. [Evaluation](#evaluation) 5. [Disclaimer](#disclaimer) 6. [Contact](#contact) 7. [Collaborations](#collaborations) 8. [Acknowledgement](#acknowledgement) ## All SauerkrautLM-Mixtral Models | Model | HF | GPTQ | GGUF | AWQ | |-------|-------|-------|-------|-------| | SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | coming soon | coming soon | coming soon | | SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | coming soon | coming soon | coming soon | ## Model Details **SauerkrautLM-Mixtral-8x7B-Instruct** - **Model Type:** SauerkrautLM-Mixtral-8x7B-Instruct-v0.1 is a Mixture of Experts (MoE) Model based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) - **Language(s):** English, German, French, Italian, Spanish - **License:** APACHE 2.0 - **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected]) ### Training Dataset: SauerkrautLM-Mixtral-8x7B-Instruct was trained with mix of German data augmentation and translated data. 
Aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) outputs as rejected answers. We additionally added translated parts of **[HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (our dataset does not contain any TruthfulQA prompts - check the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).**

We found that simply translating training data can lead to unnatural German phrasing. Data augmentation techniques were therefore used to ensure grammatical and syntactical correctness and more natural German wording in our training data.

### Data Contamination Test Results

Some models on the HuggingFace leaderboard had problems with benchmark test data getting mixed into their training data. We checked our SauerkrautLM-DPO dataset for this problem with a special test [1] on a smaller model. The HuggingFace team used the same methods [2, 3]. Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination.

*The data contamination test results of HellaSwag and Winogrande will be added once [1] supports them.*

| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO** | result < 0.1, %: 0.0 | result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |

[1] https://github.com/swj0419/detect-pretrain-code-contamination

[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06

[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230

### Prompt Template:

```
[INST] Instruction [/INST] Model answer [INST] Follow-up instruction [/INST]
```

## Evaluation

![Harness](https://vago-solutions.de/wp-content/uploads/2023/12/MOE_Instruct.png "SauerkrautLM-Mixtral-8x7B-Instruct Harness")

*Evaluated with lm-evaluation-harness v0.3.0 - MMLU results coming soon.*

*All benchmarks were performed with a sliding window of 4096. New benchmarks with sliding window `null` are coming soon.*

## Disclaimer

We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.

## Contact

If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.

## Collaborations

We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements.
If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us. ## Acknowledgement Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
Ray2333/gpt2-large-harmless-reward_model
Ray2333
2024-06-02T18:19:46Z
623
1
transformers
[ "transformers", "safetensors", "gpt2", "text-classification", "dataset:Anthropic/hh-rlhf", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-classification
2024-01-14T15:35:30Z
---
license: mit
datasets:
- Anthropic/hh-rlhf
metrics:
- accuracy
---

A GPT-2 large reward model trained on the **Anthropic/hh-rlhf harmless dataset**. It is specifically intended for harmful-response detection or RLHF. It achieves an accuracy of **0.73698** on the test set, which nearly matches the performance of larger models.

Note:
1. Remember to use the prompt formulation of the Anthropic/hh-rlhf dataset for inference (the "\n\nHuman: ... \n\nAssistant:" format shown below).
2. This reward model differs from other open-source reward models, which are trained on the full Anthropic/hh-rlhf dataset.

## Usage:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

rm_tokenizer = AutoTokenizer.from_pretrained('Ray2333/gpt2-large-harmless-reward_model')
reward_model = AutoModelForSequenceClassification.from_pretrained(
    'Ray2333/gpt2-large-harmless-reward_model',
    num_labels=1, torch_dtype=torch.bfloat16,
    device_map=0,
)

q, a = "\n\nHuman: I just came out of from jail, any suggestion of my future? \n\nAssistant:", "Go back to jail you scum"

inputs = rm_tokenizer(q, a, return_tensors='pt', truncation=True)
with torch.no_grad():
    # The scalar logit serves as the reward score for this question/answer pair.
    reward = reward_model(**(inputs.to(0))).logits[0].cpu().detach().item()
```

## References
This reward model was used for multi-objective alignment (especially the "harmless" and "helpful" alignment) in the Rewards-in-Context project of ICML 2024.

```bibtex
@article{yang2024rewards,
  title={Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment},
  author={Yang, Rui and Pan, Xiaoman and Luo, Feng and Qiu, Shuang and Zhong, Han and Yu, Dong and Chen, Jianshu},
  journal={International Conference on Machine Learning},
  year={2024}
}
```
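For harmful-response detection, the scalar reward can be compared across candidate replies. A minimal sketch of this (our illustration, reusing `reward_model` and `rm_tokenizer` from the usage code above; the `score` helper is hypothetical and not part of the original card):

```python
def score(question: str, answer: str) -> float:
    # Hypothetical helper: returns the scalar reward for one question/answer
    # pair, using the model and tokenizer loaded in the usage example above.
    inputs = rm_tokenizer(question, answer, return_tensors='pt', truncation=True)
    with torch.no_grad():
        return reward_model(**(inputs.to(0))).logits[0].cpu().item()

q = "\n\nHuman: I lost my job, what should I do? \n\nAssistant:"
# A higher reward should indicate the less harmful reply.
print(score(q, "I'm sorry to hear that. Consider updating your resume and reaching out to your network."))
print(score(q, "You deserve it, give up."))
```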
MikhailRepkin/news_classifier
MikhailRepkin
2024-01-20T09:24:44Z
623
5
transformers
[ "transformers", "safetensors", "bert", "text-classification", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-19T12:11:40Z
---
license: mit
language:
- ru
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: "Взрыв газа произошел в 2-этажном доме в поселке под Казанью, пострадали четыре человека, сообщает МЧС"
  example_title: "Новость"
- text: "Сын поздравил меня с днём рождения стихами ❤️"
  example_title: "Не новость"
---

## Model Details

### Model Description

The news_classifier model is a fine-tuned model for binary classification (news / not news) of posts from various Russian-language Telegram channels. This model can be integrated into a news aggregation service.

- **Model type:** Sentence RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters)
- **Language(s):** Russian (ru)
- **License:** mit
- **Finetuned from model:** `DeepPavlov/rubert-base-cased-sentence`

## Dataset
- Russian Telegram posts
- train/valid/test: 2970/165/165

## Training Details
- token max length: 512
- num labels: 2
- batch size: 16
- learning rate: 2e-5
- train epochs: 20
- weight decay: 0.01

## Metrics:
- Matthews correlation (training evaluation metric): 0.89
- Accuracy: 0.95

## Label Scheme
- LABEL_1 - news
- LABEL_0 - not news
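A minimal inference sketch (our addition, assuming the standard Transformers text-classification pipeline; the label mapping follows the scheme above):

```python
from transformers import pipeline

# LABEL_1 = news, LABEL_0 = not news (see Label Scheme above)
classifier = pipeline("text-classification", model="MikhailRepkin/news_classifier")

posts = [
    "Взрыв газа произошел в 2-этажном доме в поселке под Казанью",  # news
    "Сын поздравил меня с днём рождения стихами",  # not news
]
for post, pred in zip(posts, classifier(posts)):
    print(pred["label"], round(pred["score"], 3), post)
```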
arise-sustech/llm4decompile-1.3b
arise-sustech
2024-03-20T07:48:45Z
623
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "decompile", "binary", "code", "arxiv:2403.05286", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-02-28T06:54:02Z
---
language: code
widget:
- text: "# This is the assembly code with O0 optimization:\n<func0>:\nendbr64\nlea (%rdi,%rsi,1),%eax\nretq\n# What is the source code?\n"
license: other
tags:
- decompile
- binary
---

### 1. Introduction of LLM4Decompile

LLM4Decompile aims to decompile x86 assembly instructions into C. It is finetuned from Deepseek-Coder on 4B tokens of assembly-C pairs compiled from AnghaBench.

- **Github Repository:** [LLM4Decompile](https://github.com/albertan017/LLM4Decompile)
- **Paper link:** For more details check out the [paper](https://arxiv.org/abs/2403.05286).

### 2. Evaluation Results

| Model | Re-compilability | | | | | Re-executability | | | | |
|--------------------|:----------------:|:---------:|:---------:|:---------:|:---------:|:----------------:|:---------:|:---------:|:---------:|:---------:|
| Optimization-level | O0 | O1 | O2 | O3 | Avg. | O0 | O1 | O2 | O3 | Avg. |
| GPT4 | 0.92 | 0.94 | 0.88 | 0.84 | 0.895 | 0.1341 | 0.1890 | 0.1524 | 0.0854 | 0.1402 |
| DeepSeek-Coder-33B | 0.0659 | 0.0866 | 0.1500 | 0.1463 | 0.1122 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| LLM4Decompile-1b | 0.8780 | 0.8732 | 0.8683 | 0.8378 | 0.8643 | 0.1573 | 0.0768 | 0.1000 | 0.0878 | 0.1055 |
| LLM4Decompile-6b | 0.8817 | 0.8951 | 0.8671 | 0.8476 | 0.8729 | 0.3000 | 0.1732 | 0.1988 | 0.1841 | 0.2140 |
| LLM4Decompile-33b | 0.8134 | 0.8195 | 0.8183 | 0.8305 | 0.8204 | 0.3049 | 0.1902 | 0.1817 | 0.1817 | 0.2146 |

### 3. How to Use

Here is an example of how to use our model. First, compile the C code into a binary and disassemble the binary into assembly instructions:

```python
import subprocess
import os
import re

digit_pattern = r'\b0x[a-fA-F0-9]+\b'  # hexadecimal literals (unused below)
zeros_pattern = r'^0+\s'  # leading zeros
OPT = ["O0", "O1", "O2", "O3"]
fileName = 'path/to/file'
with open(fileName + '.c', 'r') as f:  # original C file
    c_func = f.read()
for opt_state in OPT:
    output_file = fileName + '_' + opt_state
    input_file = fileName + '.c'
    compile_command = f'gcc -c -o {output_file}.o {input_file} -{opt_state} -lm'  # compile the code with GCC on Linux
    subprocess.run(compile_command, shell=True, check=True)
    compile_command = f'objdump -d {output_file}.o > {output_file}.s'  # disassemble the binary file into assembly instructions
    subprocess.run(compile_command, shell=True, check=True)

    input_asm = ''
    with open(output_file + '.s') as f:  # asm file
        asm = f.read()
    asm = asm.split('Disassembly of section .text:')[-1].strip()
    for tmp in asm.split('\n'):
        tmp_asm = tmp.split('\t')[-1]  # remove the binary code
        tmp_asm = tmp_asm.split('#')[0].strip()  # remove the comments
        input_asm += tmp_asm + '\n'
    input_asm = re.sub(zeros_pattern, '', input_asm)

    before = f"# This is the assembly code with {opt_state} optimization:\n"  # prompt prefix
    after = "\n# What is the source code?\n"  # prompt suffix
    input_asm_prompt = before + input_asm.strip() + after
    with open(fileName + '_' + opt_state + '.asm', 'w', encoding='utf-8') as f:
        f.write(input_asm_prompt)
```

Then use LLM4Decompile to translate the assembly instructions into C:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_path = 'arise-sustech/llm4decompile-1.3b'
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16).cuda()

with open(fileName + '_' + opt_state + '.asm', 'r') as f:  # assembly prompt file
    asm_func = f.read()
inputs = tokenizer(asm_func, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)
c_func_decompile = tokenizer.decode(outputs[0][len(inputs[0]):-1])
```

### 4. License
This code repository is licensed under the DeepSeek License.

### 5. Contact
If you have any questions, please raise an issue.

### 6. Citation
```bibtex
@misc{tan2024llm4decompile,
      title={LLM4Decompile: Decompiling Binary Code with Large Language Models},
      author={Hanzhuo Tan and Qi Luo and Jing Li and Yuqun Zhang},
      year={2024},
      eprint={2403.05286},
      archivePrefix={arXiv},
      primaryClass={cs.PL}
}
```
Thimira/sinhala-llama-2-7b-chat-hf
Thimira
2024-05-08T07:11:57Z
623
3
peft
[ "peft", "pytorch", "tensorboard", "safetensors", "llama", "trl", "sft", "text-generation-inference", "text-generation", "si", "dataset:Thimira/sinhala-llm-dataset-llama-prompt-format", "base_model:NousResearch/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
text-generation
2024-04-01T04:59:40Z
---
library_name: peft
tags:
- trl
- sft
- text-generation-inference
base_model: NousResearch/Llama-2-7b-chat-hf
datasets:
- Thimira/sinhala-llm-dataset-llama-prompt-format
model-index:
- name: sinhala-llama-2-7b-chat-hf
  results: []
license: llama2
language:
- si
pipeline_tag: text-generation
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# sinhala-llama-2-7b-chat-hf

This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on the [Thimira/sinhala-llm-dataset-llama-prompt-format](https://huggingface.co/datasets/Thimira/sinhala-llm-dataset-llama-prompt-format) dataset.

## Model description

This is a model for Sinhala-language text generation, fine-tuned from the base Llama-2-7b-chat-hf model.

Currently the capabilities of the model are extremely limited, and it requires further data and fine-tuning to be useful. Feel free to experiment with the model and provide feedback.

### Usage example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("Thimira/sinhala-llama-2-7b-chat-hf")

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)

prompt = "ඔබට සිංහල භාෂාව තේරුම් ගත හැකිද?"
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

## Intended uses & limitations

The Sinhala-LLaMA models are intended for assistant-like chat in the Sinhala language.

To get the expected features and performance from these models, the LLaMA 2 prompt format needs to be followed, including the `[INST]` and `<<SYS>>` tags, BOS and EOS tokens, and the whitespace and line breaks in between (see the format illustration at the end of this card).

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2

### Training results

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1
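For reference, the LLaMA 2 chat prompt format mentioned above looks like the following (a generic illustration of the standard Llama 2 format, not taken from this card; `{system_prompt}` and `{user_message}` are placeholders):

```
<s>[INST] <<SYS>>
{system_prompt}
<</SYS>>

{user_message} [/INST]
```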
Tasty-Rice/Illusionary_dimension
Tasty-Rice
2024-06-26T05:17:44Z
623
1
diffusers
[ "diffusers", "art", "background", "1girl/1boy", "stable-diffusion", "stable-diffusion-diffusers", "SDXL", "text-to-image", "license:cc-by-nc-sa-4.0", "region:us" ]
text-to-image
2024-04-12T12:45:27Z
---
license: cc-by-nc-sa-4.0
license_name: cc-by-nc-sa-4.0
license_link: https://spdx.org/licenses/CC-BY-NC-SA-4.0
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
- background
- 1girl/1boy
- stable-diffusion
- stable-diffusion-diffusers
- SDXL
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

## Merging models is not allowed. Merging models is not allowed. Merging models is not allowed.

## Model Details

Users of the v4 and v3 versions may decide the scope of use for themselves, but merging models is prohibited, and users are solely responsible for any problems that arise during use. I do not otherwise limit the scope of use.

The model specializes in generating bust images; full-body images are not as strong.

### Model Description

<!-- a guofeng anime model. -->

- **Developed by (English name):** [Tasty-Rice](https://huggingface.co/Tasty-Rice)
- **Developed by (Chinese name):** [米饭不好吃](https://www.liblib.art/userpage/99d29ad5db474146abbd70752f534981/publish)
- **Model type:** [Diffusion based text to image generative model]
- **Language(s) (NLP):** [English]
- **License:** [CC-BY-NC-SA-4.0](https://spdx.org/licenses/CC-BY-NC-SA-4.0)
- **Finetuned from model [optional]:** [Illusionary_dimension-v3]

### Model Sources [optional]

Users of the v3 and v2 versions may likewise decide the scope of use for themselves, but merging models is prohibited, and users are solely responsible for any problems that arise during use. I do not otherwise limit the scope of use.

<!-- .. -->

### Sample Picture

<style>
  .title-container { display: flex; justify-content: center; align-items: center; height: 100vh; /* Adjust this value to position the title vertically */ }
  .title1 { font-size: 2.5em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; }
  .title2 { font-size: 2.5em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; }
  .title3 { font-size: 2.5em; text-align: center; color: #333; font-family: 'Helvetica Neue', sans-serif; text-transform: uppercase; letter-spacing: 0.1em; padding: 0.5em 0; background: transparent; }
  .title1 span { background: -webkit-linear-gradient(45deg, #00FFF2, #28b485); -webkit-background-clip: text; -webkit-text-fill-color: transparent; }
  .title2 span { background: -webkit-linear-gradient(45deg, #FF0000, #28b485); -webkit-background-clip: text; -webkit-text-fill-color: transparent; }
  .title3 span { background: -webkit-linear-gradient(45deg, #FF0000, #28b485); -webkit-background-clip: text; -webkit-text-fill-color: transparent; }
  .custom-table { table-layout: fixed; width: 100%; border-collapse: collapse; margin-top: 2em; }
  .custom-table td { width: 50%; vertical-align: top; padding: 10px; box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15); }
  .custom-image-container { position: relative; width: 100%; margin-bottom: 0em; overflow: hidden; border-radius: 10px; transition: transform .7s; /* Smooth transition for the container */ }
  .custom-image-container:hover { transform: scale(1.05); /* Scale the container on hover */ }
  .custom-image { width: 100%; height: auto; object-fit: cover;
border-radius: 10px; transition: transform .7s; margin-bottom: 0em; } .nsfw-filter { filter: blur(8px); /* Apply a blur effect */ transition: filter 0.3s ease; /* Smooth transition for the blur effect */ } .custom-image-container:hover .nsfw-filter { filter: none; /* Remove the blur effect on hover */ } .overlay { position: absolute; bottom: 0; left: 0; right: 0; color: white; width: 100%; height: 40%; display: flex; flex-direction: column; justify-content: center; align-items: center; font-size: 1vw; font-style: bold; text-align: center; opacity: 0; /* Keep the text fully opaque */ background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%); transition: opacity .5s; } .custom-image-container:hover .overlay { opacity: 1; } .overlay-text { background: linear-gradient(45deg, #7ed56f, #E00000); -webkit-background-clip: text; color: transparent; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7); .overlay-subtext { font-size: 0.75em; margin-top: 0.5em; font-style: italic; } .overlay, .overlay-subtext { text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5); } </style> <h1 class="title3"> <span style="style="color:#00e0d1;>Illusionary_dimension-v5</span> </h1> <table class="custom-table"> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/2JweHJOS4tufVBarrcFzs.png" alt="sample3"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/O9TymNdRHpyrF0mPoHNac.png" alt="sample4"> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/BUkL4UG-vdjWKtsMUUUi_.png" alt="sample2"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/xXcumNxtUCth1mr05BhXI.png" alt="sample3"> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/tZjIJwFoUixHIHYrQL995.png" alt="sample1"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/M7elm8NDnCaMYn42nDrKi.png" alt="sample4"> </div> </td> </tr> </table> <h1 class="title2"> <span style="style="color:#00e0d1;>Illusionary_dimension-v4</span> </h1> <table class="custom-table"> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/2zI91bbIZsIOsFFMU5gBE.png" alt="sample3"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/1LrsTv4g9ONRfXPzl0TPQ.png" alt="sample4"> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/VFi_NIBBDfLoGb_vQc_Mt.png" alt="sample2"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/Po-SJhlMlnmop2GeW_5G-.png" alt="sample3"> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/ad7x0OQ7EEKGocL7OXXE1.png" alt="sample1"> </div> <div 
class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/MHu5UDYQeLAnYtRqGO77U.png" alt="sample4"> </div> </td> </tr> </table> <h1 class="title1"> <span style="color:hsl(0,100%,50%);">Illusionary_dimension-v3</span> </h1> <table class="custom-table"> <tr> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/D7AGN3mMAkF8UMHnLf7l4.png" alt="sample1"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/7Tqp569_Uui1_0pzgEMFb.png" alt="sample4"> </div> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/7CmKM46WkUY-Q3Dfg7VfK.png" alt="sample2"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/WkNawje4-rwVF-LjvB9N9.png" alt="sample3"> </td> <td> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/ignrPFj1DhFLveN7dycb5.png" alt="sample1"> </div> <div class="custom-image-container"> <img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/6545c5a508deaa0c91b9c6f9/YAiUbiJyihFPiH2Qt0tCh.png" alt="sample4"> </div> </td> </tr> </table> ### Multi Aspect Resolution 并非所有的分辨率都支持的非常好,但遵循这样的分辨率使用你将得到更好的体验。 Not all resolutions support very well, but following these resolutions will give you a better experience. This model supports generating images at the following dimensions: | Dimensions | Aspect Ratio |Dimensions | Aspect Ratio | |-------------------|-------------------| -----------------| -----------------| | `1024 x 1024` | 1:1 Square | | `704×1408` | 1:2 | `1408×704 ` | 2:1 | | `1472×704` | 23:11 | `704×1344 ` | 11:21 | | `960×1024` | 15:16 | `1024×960 ` | 16:15 | | `960×1088` | 15:17 | `1088×960 ` | 17:15 | | `896×1088` | 14:17 | `1088×896 ` | 17:14 | | `896×1152` | 7:9 | `1152×896 ` | 9:7 | | `832×1152` | 13:18 | `1152×832 ` | 18:13 | | `832×1216` | 13:19 | `1216×832 ` | 19:13 | | `768×1280` | 3:5 | `1280×768 ` | 5:3 | | `768×1344` | 4:7Vertical | `1344×768 ` | 7:4Horizontal | | `1536×640` | 12:5 | `1600×640 ` | 5:2 | Based on Illusionary_dimension-v1, Illusionary_dimension falls under [CC-BY-NC-SA-4.0](https://spdx.org/licenses/CC-BY-NC-SA-4.0) license, which is compatible with Stable Diffusion models’ license. Key points: 1. **Modification Sharing:** If you modify Illusionary_dimension any version, you must share both your changes and the original license. 2. **Source Code Accessibility:** If your modified version is network-accessible, provide a way (like a download link) for others to get the source code. This applies to derived models too. 3. **Distribution Terms:** Any distribution must be under this license or another with similar rules. 4. **Compliance:** Non-compliance must be fixed within 30 days to avoid license termination, emphasizing transparency and adherence to open-source values. 
**Notice/The content of the additional terms** | Number | Terms and Conditions | |------------|---------------------------------------------------------------------------------------------------| | 1 | Do not host this model or its derivatives on websites/applications that generate income or donations. For commercial use, please contact the owner through the platform. | | 2 | Do not directly sell generated images unless they have undergone sufficient manual modification and are deemed personal works. Violators will be subject to legal consequences. | | 3 | Do not intentionally create or share illegal or harmful content. Please abide by public order and morality, and use it for positive purposes. | | 4 | By using this model, you agree to all terms and conditions. | | 5 | The model is limited to personal use and cannot be used for commercial purposes. | | 6 | Do not use the model for illegal or harmful purposes, such as infringing rights, spreading false information, etc. | | 7 | Users shall abide by all applicable laws and regulations, including safety, data protection, etc. | | 8 | Respect the rights of others and do not infringe the rights of third parties. | | 9 | Disputes or lawsuits shall be borne by the user's own legal responsibility. | | 10 | Take necessary technical and management measures to ensure data security and confidentiality. | | 11 | Comply with intellectual property laws and do not infringe the intellectual property rights of third parties. | | 12 | Do not exploit or harm minors, and do not involve content, activities, or services related to minors. | | 13 | Do not generate or disseminate verifiable false information or content that harms others. | | 14 | Do not defame, discredit, or harass others in any other way. | | 15 | Do not make fully automated decisions that adversely affect the legitimate rights of individuals or create legal obligations. | | 16 | Do not discriminate or harm individuals or groups, including based on social behavior or known characteristics. | | 17 | Do not exploit the vulnerabilities of specific groups, distort related behaviors, and cause or potentially cause physical or psychological harm. | | 18 | Do not discriminate or harm based on protected characteristics or categories. | | 19 | Do not use for medical consultation or interpretation of medical results. | | 20 | Do not generate or disseminate information for judicial, law enforcement, immigration, or asylum proceedings, such as predicting criminal behavior. |
mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf
mmnga
2024-05-07T12:59:57Z
623
1
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:llama3", "region:us" ]
null
2024-05-06T16:31:55Z
---
license: llama3
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---

# lightblue-suzume-llama-3-8B-multilingual-gguf

This is a GGUF-format conversion of [suzume-llama-3-8B-multilingual, published by lightblue](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual).

The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).

## Other models
[mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf](https://huggingface.co/mmnga/lightblue-Karasu-Mixtral-8x22B-v0.1-gguf)
[mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-multilingual-gguf)
[mmnga/lightblue-suzume-llama-3-8B-japanese-gguf](https://huggingface.co/mmnga/lightblue-suzume-llama-3-8B-japanese-gguf)
[mmnga/lightblue-ao-karasu-72B-gguf](https://huggingface.co/mmnga/lightblue-ao-karasu-72B-gguf)
[mmnga/lightblue-karasu-1.1B-gguf](https://huggingface.co/mmnga/lightblue-karasu-1.1B-gguf)
[mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-karasu-7B-chat-plus-unleashed-gguf)
[mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf](https://huggingface.co/mmnga/lightblue-qarasu-14B-chat-plus-unleashed-gguf)

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'lightblue-suzume-llama-3-8B-multilingual-Q4_0.gguf' -n 128 -p 'こんにちわ'
```
turkgpt/turkgpt-old-v0.1
turkgpt
2024-06-01T13:08:04Z
623
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "tr", "en", "dataset:OmBayus/turkgpt_dataset", "arxiv:1910.09700", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T12:09:26Z
--- library_name: transformers license: apache-2.0 language: - tr - en datasets: - OmBayus/turkgpt_dataset model-index: - name: Mistral-7B-v0.2-meditron-turkish results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 59.22 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OmBayus/turkgpt-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 80.27 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OmBayus/turkgpt-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 67 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OmBayus/turkgpt-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 47.29 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OmBayus/turkgpt-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 77.27 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OmBayus/turkgpt-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 60.73 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OmBayus/turkgpt-v0.1 name: Open LLM Leaderboard --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
DevBM/t5-large-squad
DevBM
2024-06-29T09:32:23Z
623
1
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "text-generation-inference", "en", "dataset:rajpurkar/squad", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-06-27T09:11:23Z
---
license: mit
datasets:
- rajpurkar/squad
library_name: transformers
language:
- en
tags:
- text-generation-inference
---

#### How to use
The model takes a concatenated context and answer as its input sequence and generates a full question sentence as the output sequence. Inputs should be organised into the following format (the `<answer>` part can be omitted, as in the first two examples below):
```
<context> context text here <answer> answer text here
```

#### Here are some examples to use:
```
<context> New Delhi is the capital of India.
```
```
<context> My name is Dev.
```
```
<context> Sachin lives in Mumbai. <answer> Sachin
```
```
<context> Sachin lives in Mumbai. <answer> Mumbai
```
```
<context> Sachin lives in Mumbai. He has two children. <answer> two children
```
```
<context> Sachin lives in Mumbai. He has two children. He is 50 years old. David is his friend. He is 45 years old. <answer> 45
```
```
<context> Carbon has 6 neutrons. <answer> 6
```
```
<context> Carbon has 6 neutrons. <answer> Carbon
```
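A minimal generation sketch (our addition, assuming the standard Transformers seq2seq API for this T5 checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "DevBM/t5-large-squad"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Input format from the card: "<context> ... <answer> ..."
text = "<context> Sachin lives in Mumbai. <answer> Mumbai"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # generated question
```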
NikolayKozloff/Trendyol-LLM-7b-chat-v1.8-Q5_0-GGUF
NikolayKozloff
2024-06-29T12:02:22Z
623
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "tr", "base_model:Trendyol/Trendyol-LLM-7b-chat-v1.8", "license:apache-2.0", "region:us" ]
text-generation
2024-06-29T12:02:00Z
--- base_model: Trendyol/Trendyol-LLM-7b-chat-v1.8 language: - tr license: apache-2.0 pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/Trendyol-LLM-7b-chat-v1.8-Q5_0-GGUF This model was converted to GGUF format from [`Trendyol/Trendyol-LLM-7b-chat-v1.8`](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.8) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v1.8) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/Trendyol-LLM-7b-chat-v1.8-Q5_0-GGUF --hf-file trendyol-llm-7b-chat-v1.8-q5_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/Trendyol-LLM-7b-chat-v1.8-Q5_0-GGUF --hf-file trendyol-llm-7b-chat-v1.8-q5_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/Trendyol-LLM-7b-chat-v1.8-Q5_0-GGUF --hf-file trendyol-llm-7b-chat-v1.8-q5_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/Trendyol-LLM-7b-chat-v1.8-Q5_0-GGUF --hf-file trendyol-llm-7b-chat-v1.8-q5_0.gguf -c 2048 ```
chrisseiler96/NuExtract-Q4_K_M-GGUF
chrisseiler96
2024-06-30T15:17:51Z
623
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:numind/NuExtract", "license:mit", "region:us" ]
null
2024-06-30T15:17:37Z
--- base_model: numind/NuExtract language: - en license: mit tags: - llama-cpp - gguf-my-repo --- # chrisseiler96/NuExtract-Q4_K_M-GGUF This model was converted to GGUF format from [`numind/NuExtract`](https://huggingface.co/numind/NuExtract) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/numind/NuExtract) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo chrisseiler96/NuExtract-Q4_K_M-GGUF --hf-file nuextract-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo chrisseiler96/NuExtract-Q4_K_M-GGUF --hf-file nuextract-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo chrisseiler96/NuExtract-Q4_K_M-GGUF --hf-file nuextract-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo chrisseiler96/NuExtract-Q4_K_M-GGUF --hf-file nuextract-q4_k_m.gguf -c 2048 ```
EleutherAI/pythia-160m-v0
EleutherAI
2023-07-09T16:03:26Z
622
8
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:the_pile", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-10-16T17:40:11Z
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - the_pile --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-160M ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther. ai](mailto:[email protected]). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change over the course of training, we provide 143 evenly spaced intermediate checkpoints per model. 
These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-160M for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-160M as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-160M has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose or commercial chatbots. This means Pythia-160M will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-160M to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-160M may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting them to other people. Please inform your audience that the text was generated by Pythia-160M. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data [The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g.
GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br> The Pile was **not** deduplicated before being used to train Pythia-160M. #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
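As a quick sanity check on the step-to-token arithmetic above, here is a minimal sketch in plain Python (the helper name is mine, not part of any Pythia tooling) that maps a checkpoint revision such as `step143000` to the number of training tokens seen, assuming the uniform 2,097,152-token batch size stated in the training procedure:

```python
# Sketch: relate a Pythia checkpoint revision to tokens seen during training.
# Assumes the 2M-token batch size described above; the helper is illustrative.
TOKENS_PER_STEP = 2_097_152

def tokens_seen(revision: str) -> int:
    step = int(revision.removeprefix("step"))
    return step * TOKENS_PER_STEP

print(tokens_seen("step143000"))  # 299,892,736,000, matching the figure above
```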
rafalposwiata/deproberta-large-v1
rafalposwiata
2024-01-04T07:28:03Z
622
0
transformers
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "depression", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-12-09T13:22:46Z
--- language: - en library_name: transformers tags: - depression - roberta --- # DepRoBERTa DepRoBERTa (RoBERTa for Depression Detection) is a language model based on RoBERTa-large and further pre-trained on depressive posts from Reddit. The model was part of the winning solution for [the Shared Task on Detecting Signs of Depression from Social Media Text](https://competitions.codalab.org/competitions/36410) at [LT-EDI-ACL2022](https://sites.google.com/view/lt-edi-2022/home). More information can be found in the following paper: [OPI@LT-EDI-ACL2022: Detecting Signs of Depression from Social Media Text using RoBERTa Pre-trained Language Models](https://aclanthology.org/2022.ltedi-1.40/). If you use this model, please cite: ``` @inproceedings{poswiata-perelkiewicz-2022-opi, title = "{OPI}@{LT}-{EDI}-{ACL}2022: Detecting Signs of Depression from Social Media Text using {R}o{BERT}a Pre-trained Language Models", author = "Po{\'s}wiata, Rafa{\l} and Pere{\l}kiewicz, Micha{\l}", booktitle = "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.ltedi-1.40", doi = "10.18653/v1/2022.ltedi-1.40", pages = "276--282", } ```
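For readers who want to try the masked-language-model head directly, here is a minimal, hedged sketch using the `transformers` fill-mask pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# Minimal sketch: query DepRoBERTa as a masked language model.
fill_mask = pipeline("fill-mask", model="rafalposwiata/deproberta-large-v1")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
for pred in fill_mask("Lately I have been feeling so <mask>."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```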
timm/dpn98.mx_in1k
timm
2023-04-21T21:58:54Z
622
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1707.01629", "license:apache-2.0", "region:us" ]
image-classification
2023-04-21T21:57:45Z
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for dpn98.mx_in1k A DPN (Dual-Path Net) image classification model. Trained on ImageNet-1k in MXNet by paper authors and ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 61.6 - GMACs: 11.7 - Activations (M): 25.2 - Image size: 224 x 224 - **Papers:** - Dual Path Networks: https://arxiv.org/abs/1707.01629 - **Dataset:** ImageNet-1k - **Original:** https://github.com/cypw/DPNs ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('dpn98.mx_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'dpn98.mx_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 96, 112, 112]) # torch.Size([1, 336, 56, 56]) # torch.Size([1, 768, 28, 28]) # torch.Size([1, 1728, 14, 14]) # torch.Size([1, 2688, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'dpn98.mx_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 2688, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @article{Chen2017, title={Dual Path Networks}, author={Yunpeng Chen and Jianan Li and Huaxin Xiao and Xiaojie Jin and Shuicheng Yan and Jiashi Feng}, journal={arXiv preprint arXiv:1707.01629}, year={2017} } ```
mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf
mmnga
2024-03-24T05:54:56Z
622
0
null
[ "gguf", "ja", "license:apache-2.0", "region:us" ]
null
2023-09-03T22:30:23Z
--- license: apache-2.0 language: - ja --- # line-corporation/japanese-large-lm-1.7b-instruction-sft This is a GGUF conversion of [japanese-large-lm-1.7b-instruction-sft, published by line-corporation](https://huggingface.co/line-corporation/japanese-large-lm-1.7b-instruction-sft). Other models are available here: GPT-NEOX [mmnga/line-corp-japanese-large-lm-3.6b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-gguf) [mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-3.6b-instruction-sft-gguf) GPT-2 [mmnga/line-corp-japanese-large-lm-1.7b-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-gguf) [mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf](https://huggingface.co/mmnga/line-corp-japanese-large-lm-1.7b-instruction-sft-gguf) ### Conversion script [line-gpt2_convert-hf-to-gguf.py](https://gist.github.com/mmnga/f278224f56eb66bfb29bb7ec42562cb1) ## Usage (trial run) ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'line-corp-japanese-large-lm-1.7b-instruction-sft-q4_0.gguf' -n 128 -p 'ユーザー: 吾輩って猫ですか? システム: ' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1 ```
TinyLlama/TinyLlama-1.1B-python-v0.1
TinyLlama
2023-10-03T05:46:15Z
622
11
transformers
[ "transformers", "pytorch", "gguf", "llama", "text-generation", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-10-03T05:22:59Z
--- license: apache-2.0 datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata language: - en --- <div align="center"> # TinyLlama-1.1B </div> https://github.com/jzhang38/TinyLlama The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01. We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built upon Llama. In addition, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint. #### This Model This is a code LM finetuned (or, more precisely, continue-pretrained) from the 500B-token TinyLlama checkpoint with another 7B tokens of Python data from starcoderdata. **While the finetuning data is exclusively Python, the model retains its ability in many other languages such as C or Java**. The HumanEval accuracy is **14**. **It can be used as the draft model for speculative decoding of larger models such as those in the CodeLlama family**, as sketched below.
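Here is a hedged sketch of that speculative-decoding use via the assisted-generation path in `transformers` (`assistant_model`). The target model id is a placeholder; assisted generation assumes the draft and target vocabularies are compatible, which should be verified for a given CodeLlama variant.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

draft_id = "TinyLlama/TinyLlama-1.1B-python-v0.1"
target_id = "codellama/CodeLlama-7b-hf"  # placeholder target; verify tokenizer compatibility

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id)
draft = AutoModelForCausalLM.from_pretrained(draft_id)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
# Passing `assistant_model` enables assisted (speculative) generation.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```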
charent/Phi2-Chinese-0.2B
charent
2024-01-04T04:11:35Z
622
28
transformers
[ "transformers", "safetensors", "phi", "text-generation", "text-generation-inference", "zh", "dataset:BelleGroup/train_1M_CN", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-12-25T02:38:19Z
--- license: apache-2.0 datasets: - BelleGroup/train_1M_CN language: - zh library_name: transformers tags: - text-generation-inference pipeline_tag: text-generation widget: - text: "##提问:\n感冒了要怎么办?\n##回答:\n" example_title: "感冒了要怎么办?" - text: "##提问:\n介绍一下Apple公司\n##回答:\n" example_title: "介绍一下Apple公司" - text: "##提问:\n现在外面天气怎么样\n##回答:\n" example_title: "现在外面天气怎么样" - text: "##提问:\n推荐一份可口的午餐\n##回答:\n" example_title: "推荐一份可口的午餐" --- # Phi2-Chinese-0.2B Train your own small Chinese Phi2 model from scratch. **This is an experimental project with open-sourced code and model weights, pre-trained on relatively little data. If you need a better-performing small Chinese model, see the project [ChatLM-mini-Chinese](https://github.com/charent/ChatLM-mini-Chinese).** **GitHub repository: [Phi2-mini-Chinese](https://github.com/charent/Phi2-mini-Chinese)** # 1. ⚗️ Data cleaning Code: [dataset.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/0.dataset.ipynb). Steps include adding a period at the end of sentences, converting traditional Chinese to simplified, converting full-width characters to half-width, and removing repeated punctuation (some dialogue corpora contain long runs such as `"。。。。。"`). For the detailed cleaning process, see the project [ChatLM-mini-Chinese](https://github.com/charent/ChatLM-mini-Chinese). # 2. 🗨️ Tokenizer training Code: [tokeinzer.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/1.tokeinzer.ipynb) This project uses a `byte level` `BPE` tokenizer; training code is provided for both the `char level` and `byte level` variants. After training, check whether the vocabulary contains common special characters such as `\t` and `\n`: try to `encode` and then `decode` a sentence containing special characters and see whether it round-trips. If these characters are missing, add them with the `add_tokens` function. Use `len(tokenizer)` to get the vocabulary size; `tokenizer.vocab_size` does not count characters added via `add_tokens`. Tokenizer training is very memory-hungry: - `byte level` training on 100 million characters needs at least `32G` of RAM (and `32G` is still not quite enough; swap is triggered frequently); training takes about one hour on a `13600k`. - `char level` training on 650 million characters (roughly the size of the Chinese Wikipedia) likewise needs at least 32G of RAM, and because swap was triggered repeatedly the actual usage was far above 32G; training takes about half an hour on a `13600K`. For large (GB-scale) datasets, it is therefore recommended to sample from the dataset when training the `tokenizer`. # 3. ⛏️ CLM causal-model pre-training Code: [pretrain.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/2.pretrain.ipynb) Unsupervised pre-training on a large amount of text, mainly the `bell open source` dataset [BELLE](https://github.com/LianjiaTech/BELLE). Dataset format: one sample per sentence; sentences that are too long can be truncated into multiple samples. During CLM pre-training the model input and output are the same, and when computing the cross-entropy loss they must be offset by one position (`shift`). Special tokens such as `EOS` and `BOS` may be omitted during pre-training. # 4. ⚒️ SFT instruction fine-tuning Code: [sft.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/3.sft.ipynb) Mainly uses the `bell open source` dataset. Thanks to [BELLE](https://github.com/LianjiaTech/BELLE). The SFT training data format is as follows: ```python text = f"##提问:\n{example['instruction']}\n##回答:\n{example['output']}{EOS}" ``` When computing the loss, the model ignores everything before the `"##回答:"` marker (the marker itself is also ignored) and starts from the text after `"##回答:"` (a minimal sketch of this label masking appears at the end of this card). Remember to append the `EOS` end-of-sequence token, otherwise the model will not know when to stop during `decode`; the `BOS` start-of-sequence token is optional. # 5. 📝 RLHF optimization This project uses the DPO optimization method. Code: [dpo.ipynb](https://github.com/charent/Phi2-mini-Chinese/blob/main/4.dpo.ipynb) Fine-tune the SFT model according to your preferences. The dataset must have three columns: `prompt`, `chosen` and `rejected`. Part of the `rejected` column was generated with an early-stage SFT model (for example, taking the 0.5-`epoch` checkpoint of a 4-`epoch` SFT run); if a generated `rejected` sample has a similarity above 0.9 with its `chosen` counterpart, the example is discarded. DPO involves two models: one being trained and one used as a reference. At load time they are actually the same model; the reference model simply does not take part in parameter updates. # 6. 📑 How to use this project's model Model weights on the `huggingface` hub: [Phi2-Chinese-0.2B](https://huggingface.co/charent/Phi2-Chinese-0.2B) ```python from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig import torch device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") tokenizer = AutoTokenizer.from_pretrained('charent/Phi2-Chinese-0.2B') model = AutoModelForCausalLM.from_pretrained('charent/Phi2-Chinese-0.2B').to(device) txt = '感冒了要怎么办?'
prompt = f"##提问:\n{txt}\n##回答:\n" # greedy search gen_conf = GenerationConfig( num_beams=1, do_sample=False, max_length=320, max_new_tokens=256, no_repeat_ngram_size=4, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id, ) tokend = tokenizer.encode_plus(text=prompt) input_ids, attention_mask = torch.LongTensor([tokend.input_ids]).to(device), \ torch.LongTensor([tokend.attention_mask]).to(device) outputs = model.generate( inputs=input_ids, attention_mask=attention_mask, generation_config=gen_conf, ) outs = tokenizer.decode(outputs[0].cpu().numpy(), clean_up_tokenization_spaces=True, skip_special_tokens=True,) print(outs) ``` Sample output (left in the original Chinese, as it is the model's literal response): ```txt ##提问: 感冒了要怎么办? ##回答: 感冒是由病毒引起的,感冒一般由病毒引起,以下是一些常见感冒的方法: - 洗手,特别是在接触其他人或物品后。 - 咳嗽或打喷嚏时用纸巾或手肘遮住口鼻。 - 用手触摸口鼻,特别是喉咙和鼻子。 - 如果咳嗽或打喷嚏,可以用纸巾或手绢来遮住口鼻,但要远离其他人。 - 如果你感冒了,最好不要触摸自己的眼睛、鼻子和嘴巴。 - 在感冒期间,最好保持充足的水分和休息,以缓解身体的疲劳。 - 如果您已经感冒了,可以喝一些温水或盐水来补充体液。 - 另外,如果感冒了,建议及时就医。 ``` # 7. 🎓 Citation If you find this project helpful, please cite it. ```conf @misc{Charent2023, author={Charent Chen}, title={A small Chinese causal language model with 0.2B parameters based on Phi2}, year={2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/charent/Phi2-mini-Chinese}}, } ``` # 8. 🤔 Other notes This project does not assume any responsibility for data-security or public-opinion risks arising from the open-sourced model and code, nor for any risks or liabilities arising from the model being misled, abused, disseminated, or improperly exploited.
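As an illustration of the SFT loss masking described in section 4, here is a hedged sketch (the function and variable names are mine, not the project's): tokens up to and including the `"##回答:\n"` marker receive the label `-100`, which PyTorch's cross-entropy loss ignores.

```python
import torch

IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss

def mask_prompt_labels(input_ids: torch.Tensor, response_start: int) -> torch.Tensor:
    """Labels for causal-LM SFT: only the response tokens contribute to the loss."""
    labels = input_ids.clone()
    labels[:response_start] = IGNORE_INDEX
    return labels

# Example: if the "##回答:\n" marker ends at token position 12, only tokens
# from position 12 onward are scored.
ids = torch.arange(20)
print(mask_prompt_labels(ids, response_start=12))
```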
mradermacher/roleplay-mis_wes-GGUF
mradermacher
2024-05-06T05:35:50Z
622
0
transformers
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "HuggingFaceH4/mistral-7b-grok", "senseable/WestLake-7B-v2", "en", "base_model:ajay141/roleplay-mis_wes", "endpoints_compatible", "region:us" ]
null
2024-03-29T20:03:16Z
--- base_model: ajay141/roleplay-mis_wes language: - en library_name: transformers quantized_by: mradermacher tags: - merge - mergekit - lazymergekit - HuggingFaceH4/mistral-7b-grok - senseable/WestLake-7B-v2 --- ## About static quants of https://huggingface.co/ajay141/roleplay-mis_wes <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/roleplay-mis_wes-GGUF/resolve/main/roleplay-mis_wes.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want 
some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
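For completeness, here is a hedged sketch of loading one of the quants listed above from Python via `llama-cpp-python` (one GGUF-capable runtime among several; the file name matches the Q4_K_M entry in the table, and the local path and context size are assumptions):

```python
from llama_cpp import Llama

# Sketch: run a downloaded quant locally; n_ctx is an illustrative choice.
llm = Llama(model_path="roleplay-mis_wes.Q4_K_M.gguf", n_ctx=4096)
out = llm("You are a friendly roleplay partner. Introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```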
crusoeai/Dolphin-2.9.1-Phi-3-Kensho-4.5B-GGUF
crusoeai
2024-05-08T03:41:49Z
622
5
null
[ "gguf", "region:us" ]
null
2024-05-08T03:17:15Z
Entry not found
Locutusque/Llama-3-Hercules-5.0-8B
Locutusque
2024-05-25T07:21:47Z
622
8
transformers
[ "transformers", "safetensors", "llama", "text-generation", "dataset:Locutusque/hercules-v5.0", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-14T06:09:46Z
--- license: llama3 library_name: transformers datasets: - Locutusque/hercules-v5.0 --- # Model Card: Llama-3-Hercules-5.0-8B ![image/png](https://tse3.mm.bing.net/th/id/OIG1.vnrl3xpEcypR3McLW63q?pid=ImgGn) ## Model Description Llama-3-Hercules-5.0-8B is a fine-tuned language model derived from Llama-3-8B. It is specifically designed to excel in instruction following, function calls, and conversational interactions across various scientific and technical domains. Fine-tuning on the Hercules-v5.0 dataset has given it enhanced abilities in: - Complex Instruction Following: Understanding and accurately executing multi-step instructions, even those involving specialized terminology. - Function Calling: Seamlessly interpreting and executing function calls, providing appropriate input and output values. - Domain-Specific Knowledge: Engaging in informative and educational conversations about Biology, Chemistry, Physics, Mathematics, Medicine, Computer Science, and more. ## Intended Uses & Potential Bias Llama-3-Hercules-5.0-8B is well-suited to the following applications: - Specialized Chatbots: Creating knowledgeable chatbots and conversational agents in scientific and technical fields. - Instructional Assistants: Supporting users with educational and step-by-step guidance in various disciplines. - Code Generation and Execution: Facilitating code execution through function calls, aiding in software development and prototyping. ## Quants GGUF: https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-GGUF ExLlamaV2: https://huggingface.co/bartowski/Llama-3-Hercules-5.0-8B-exl2 **Important Note: Although Hercules-v5.0 is carefully constructed, it's important to be aware that the underlying data sources may contain biases or reflect harmful stereotypes. Use this model with caution and consider additional measures to mitigate potential biases in its responses.** ## Limitations and Risks - Toxicity: The dataset contains toxic or harmful examples. - Hallucinations and Factual Errors: Like other language models, Llama-3-Hercules-5.0-8B may generate incorrect or misleading information, especially in specialized domains where it lacks sufficient expertise. - Potential for Misuse: The ability to engage in technical conversations and execute function calls could be misused for malicious purposes. ## Training Procedure - This model was trained on 8 Kaggle TPUs, using torch xla SPMD for high MXU efficiency. There was no expense on my end (meaning you can reproduce this too!). - A learning rate of 2e-5 with the Adam optimizer. A linear scheduler was used, with an end factor of 0.005. - No mixed precision was used, with the default dtype being bfloat16. - A total batch size of 128 was used. - Trained on all examples of Hercules-v5.0 for 2 epochs. - No model parameters were frozen and no quantization was used. - This model was trained on OpenAI's ChatML prompt format. Because this model has function-calling capabilities, the prompt format is slightly different; here's what it would look like: ```<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{user message}<|im_end|>\n<|im_start|>call\n{function call message}<|im_end|>\n<|im_start|>function\n{function response message}<|im_end|>\n<|im_start|>assistant\n{assistant message}</s>``` A hand-assembled example of this format appears in the sketch below. This model was fine-tuned using my TPU-Alignment repository. https://github.com/Locutusque/TPU-Alignment
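To make the prompt format concrete, here is a hedged sketch that assembles a plain (non-function-calling) turn in the ChatML layout shown above; the helper and example strings are illustrative, not part of the model's tooling.

```python
def hercules_prompt(system: str, user: str) -> str:
    """Assemble a single-turn ChatML prompt in the format described above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(hercules_prompt("You are a helpful science tutor.", "Explain osmosis briefly."))
```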
Weni/ZeroShot-Agents-Llama3-4.0.11-SFT-merged
Weni
2024-06-14T17:12:40Z
622
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-14T17:03:31Z
Entry not found
larenspear/Yi-1.5-9B-Chat-Q2_K-GGUF
larenspear
2024-07-01T02:08:59Z
622
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:01-ai/Yi-1.5-9B-Chat", "license:apache-2.0", "region:us" ]
null
2024-07-01T02:08:44Z
--- base_model: 01-ai/Yi-1.5-9B-Chat license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # larenspear/Yi-1.5-9B-Chat-Q2_K-GGUF This model was converted to GGUF format from [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo larenspear/Yi-1.5-9B-Chat-Q2_K-GGUF --hf-file yi-1.5-9b-chat-q2_k.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo larenspear/Yi-1.5-9B-Chat-Q2_K-GGUF --hf-file yi-1.5-9b-chat-q2_k.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo larenspear/Yi-1.5-9B-Chat-Q2_K-GGUF --hf-file yi-1.5-9b-chat-q2_k.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo larenspear/Yi-1.5-9B-Chat-Q2_K-GGUF --hf-file yi-1.5-9b-chat-q2_k.gguf -c 2048 ```
facebook/mask2former-swin-base-IN21k-ade-semantic
facebook
2023-01-25T11:42:15Z
621
2
transformers
[ "transformers", "pytorch", "mask2former", "vision", "image-segmentation", "dataset:coco", "arxiv:2112.01527", "arxiv:2107.06278", "license:other", "endpoints_compatible", "region:us" ]
image-segmentation
2023-01-05T12:23:45Z
--- license: other tags: - vision - image-segmentation datasets: - coco widget: - src: http://images.cocodataset.org/val2017/000000039769.jpg example_title: Cats - src: http://images.cocodataset.org/val2017/000000039770.jpg example_title: Castle --- # Mask2Former Mask2Former model trained on ADE20k semantic segmentation (base-IN21k version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.png) ## Intended uses & limitations You can use this particular checkpoint for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python import requests import torch from PIL import Image from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation # load Mask2Former fine-tuned on ADE20k semantic segmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic") model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-base-IN21k-ade-semantic") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) # model predicts class_queries_logits of shape `(batch_size, num_queries)` # and masks_queries_logits of shape `(batch_size, num_queries, height, width)` class_queries_logits = outputs.class_queries_logits masks_queries_logits = outputs.masks_queries_logits # you can pass them to processor for postprocessing predicted_semantic_map = processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])[0] # we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).
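If you want to inspect the result of the snippet above, here is a minimal, hedged follow-up (the grayscale export is illustrative; real visualizations usually apply an ADE20k color palette):

```python
import numpy as np
from PIL import Image

# `predicted_semantic_map` comes from the snippet above: a (height, width)
# tensor of ADE20k class ids.
seg = predicted_semantic_map.cpu().numpy().astype(np.uint8)
Image.fromarray(seg).save("ade20k_segmentation.png")  # class ids as gray levels
print("classes present:", np.unique(seg))
```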
bbcmc/camenduru_AnimateDiff_231006
bbcmc
2023-10-06T09:36:05Z
621
0
diffusers
[ "diffusers", "region:us" ]
null
2023-10-06T09:36:05Z
Entry not found
FasterDecoding/medusa-1.0-zephyr-7b-beta
FasterDecoding
2023-12-18T02:22:07Z
621
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-18T02:17:51Z
Entry not found
sayhan/gemma-2b-it-GGUF-quantized
sayhan
2024-02-22T12:21:16Z
621
3
null
[ "gguf", "region:us" ]
null
2024-02-21T15:18:53Z
This repo contains all quantization types of the google/gemma-2b-it model.
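Since the card does not list the individual files, here is a hedged sketch for discovering and fetching them with `huggingface_hub` (no file names are assumed):

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "sayhan/gemma-2b-it-GGUF-quantized"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)  # inspect the available quantization types

# Download one as an example; pick whichever quant suits your hardware.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)
```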
Cyleux/command-r-gptq
Cyleux
2024-05-04T23:24:45Z
621
4
transformers
[ "transformers", "safetensors", "cohere", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
2024-03-21T05:12:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID INT4 GPT-Q for Cohere's Command-R Language Model. Verified working on transformers and vllm. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
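Given the note above that this checkpoint is verified working on transformers and vLLM, here is a hedged loading sketch for the transformers path (a GPTQ backend such as `auto-gptq` must be installed; `device_map="auto"` and `trust_remote_code=True` are assumptions, the latter reflecting the repo's `custom_code` tag):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cyleux/command-r-gptq"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",       # place the 4-bit weights on available GPUs
    trust_remote_code=True,  # the repo is tagged with custom_code
)

inputs = tokenizer("Hello, Command-R!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```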
google/paligemma-3b-ft-vqav2-448
google
2024-06-27T14:10:54Z
621
9
transformers
[ "transformers", "safetensors", "paligemma", "pretraining", "image-text-to-text", "arxiv:2310.09199", "arxiv:2303.15343", "arxiv:2403.08295", "arxiv:1706.03762", "arxiv:2010.11929", "arxiv:2209.06794", "arxiv:2209.04372", "arxiv:2103.01913", "arxiv:2401.06209", "arxiv:2305.10355", "arxiv:2205.12522", "arxiv:2110.11624", "arxiv:2108.03353", "arxiv:2010.04295", "arxiv:2203.10244", "arxiv:1810.12440", "arxiv:1905.13648", "arxiv:1608.00272", "arxiv:1908.04913", "license:gemma", "endpoints_compatible", "text-generation-inference", "region:us" ]
image-text-to-text
2024-05-12T22:43:29Z
--- library_name: transformers license: gemma pipeline_tag: image-text-to-text extra_gated_heading: Access PaliGemma on Hugging Face extra_gated_prompt: To access PaliGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately. extra_gated_button_content: Acknowledge license --- # PaliGemma model card **Model page:** [PaliGemma](https://ai.google.dev/gemma/docs/paligemma) Transformers PaliGemma 3B weights, fine-tuned with 448*448 input images on the <a href="https://visualqa.org/index.html">VQAv2</a> dataset. The models are available in float32, bfloat16 and float16 format for research purposes only. The fine-tune config is available at <a href="https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/paligemma/transfers/vqav2.py">big_vision</a>. **Resources and technical documentation:** * [Responsible Generative AI Toolkit](https://ai.google.dev/responsible) * [PaliGemma on Kaggle](https://www.kaggle.com/models/google/paligemma) * [PaliGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/363) **Terms of Use:** [Terms](https://www.kaggle.com/models/google/paligemma-ft/license/consent/verify/huggingface?returnModelRepoId=google/paligemma-3b-ft-vqav2-448) **Authors:** Google ## Model information ### Model summary #### Description PaliGemma is a versatile and lightweight vision-language model (VLM) inspired by [PaLI-3](https://arxiv.org/abs/2310.09199) and based on open components such as the [SigLIP vision model](https://arxiv.org/abs/2303.15343) and the [Gemma language model](https://arxiv.org/abs/2403.08295). It takes both image and text as input and generates text as output, supporting multiple languages. It is designed for class-leading fine-tune performance on a wide range of vision-language tasks such as image and short video caption, visual question answering, text reading, object detection and object segmentation. #### Model architecture PaliGemma is the composition of a [Transformer decoder](https://arxiv.org/abs/1706.03762) and a [Vision Transformer image encoder](https://arxiv.org/abs/2010.11929), with a total of 3 billion params. The text decoder is initialized from [Gemma-2B](https://www.kaggle.com/models/google/gemma). The image encoder is initialized from [SigLIP-So400m/14](https://colab.research.google.com/github/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/SigLIP_demo.ipynb). PaliGemma is trained following the PaLI-3 recipes. #### Inputs and outputs * **Input:** Image and text string, such as a prompt to caption the image, or a question. * **Output:** Generated text in response to the input, such as a caption of the image, an answer to a question, a list of object bounding box coordinates, or segmentation codewords. ### Model data #### Pre-train datasets PaliGemma is pre-trained on the following mixture of datasets: * **WebLI:** [WebLI (Web Language Image)](https://arxiv.org/abs/2209.06794) is a web-scale multilingual image-text dataset built from the public web. A wide range of WebLI splits are used to acquire versatile model capabilities, such as visual semantic understanding, object localization, visually-situated text understanding, multilinguality, etc. * **CC3M-35L:** Curated English image-alt_text pairs from webpages ([Sharma et al., 2018](https://aclanthology.org/P18-1238/)). 
We used the [Google Cloud Translation API](https://cloud.google.com/translate) to translate into 34 additional languages. * **VQ²A-CC3M-35L/VQG-CC3M-35L:** A subset of VQ2A-CC3M ([Changpinyo et al., 2022a](https://aclanthology.org/2022.naacl-main.142/)), translated into the same additional 34 languages as CC3M-35L, using the [Google Cloud Translation API](https://cloud.google.com/translate). * **OpenImages:** Detection and object-aware questions and answers ([Piergiovanni et al. 2022](https://arxiv.org/abs/2209.04372)) generated by handcrafted rules on the [OpenImages dataset]. * **WIT:** Images and texts collected from Wikipedia ([Srinivasan et al., 2021](https://arxiv.org/abs/2103.01913)). [OpenImages dataset]: https://storage.googleapis.com/openimages/web/factsfigures_v7.html #### Data responsibility filtering The following filters are applied to WebLI, with the goal of training PaliGemma on clean data: * **Pornographic image filtering:** This filter removes images deemed to be of pornographic nature. * **Text safety filtering:** We identify and filter out images that are paired with unsafe text. Unsafe text is any text deemed to contain or be about CSAI, pornography, vulgarities, or otherwise offensive. * **Text toxicity filtering:** We further use the [Perspective API](https://perspectiveapi.com/) to identify and filter out images that are paired with text deemed insulting, obscene, hateful or otherwise toxic. * **Text personal information filtering:** We filtered certain personal information and other sensitive data using [Cloud Data Loss Prevention (DLP) API](https://cloud.google.com/security/products/dlp) to protect the privacy of individuals. Identifiers such as social security numbers and [other sensitive information types] were removed. * **Additional methods:** Filtering based on content quality and safety in line with our policies and practices. [other sensitive information types]: https://cloud.google.com/sensitive-data-protection/docs/high-sensitivity-infotypes-reference?_gl=1*jg604m*_ga*ODk5MzA3ODQyLjE3MTAzMzQ3NTk.*_ga_WH2QY8WWF5*MTcxMDUxNTkxMS4yLjEuMTcxMDUxNjA2NC4wLjAuMA..&_ga=2.172110058.-899307842.1710334759 ## How to Use PaliGemma is a single-turn vision language model not meant for conversational use, and it works best when fine-tuning to a specific use case. You can configure which task the model will solve by conditioning it with task prefixes, such as “detect” or “segment”. The pretrained models were trained in this fashion to imbue them with a rich set of capabilities (question answering, captioning, segmentation, etc.). However, they are not designed to be used directly, but to be transferred (by fine-tuning) to specific tasks using a similar prompt structure. For interactive testing, you can use the "mix" family of models, which have been fine-tuned on a mixture of tasks. Please, refer to the [usage and limitations section](#usage-and-limitations) for intended use cases, or visit the [blog post](https://huggingface.co/blog/paligemma-google-vlm) for additional details and examples. ## Use in Transformers The following snippets use model `google/paligemma-3b-mix-224` for reference purposes. The model in this repo you are now browsing may have been trained for other tasks, please make sure you use appropriate inputs for the task at hand. 
### Running the default precision (`float32`) on CPU ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained(model_id).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt") input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` Output: `Un auto azul estacionado frente a un edificio.` ### Running other precisions on CUDA For convenience, the repos contain revisions of the weights already converted to `bfloat16` and `float16`, so you can use them to reduce the download size and avoid casting on your local computer. This is how you'd run `bfloat16` on an NVIDIA CUDA card. ```python from transformers import AutoProcessor, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, torch_dtype=dtype, device_map=device, revision="bfloat16", ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation, skip_special_tokens=True) print(decoded) ``` ### Loading in 4-bit / 8-bit You need to install `bitsandbytes` to automatically run inference using 8-bit or 4-bit precision: ``` pip install bitsandbytes accelerate ``` ```python from transformers import AutoProcessor, BitsAndBytesConfig, PaliGemmaForConditionalGeneration from PIL import Image import requests import torch model_id = "google/paligemma-3b-mix-224" device = "cuda:0" dtype = torch.bfloat16 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true" image = Image.open(requests.get(url, stream=True).raw) quantization_config = BitsAndBytesConfig(load_in_8bit=True) model = PaliGemmaForConditionalGeneration.from_pretrained( model_id, quantization_config=quantization_config ).eval() processor = AutoProcessor.from_pretrained(model_id) # Instruct the model to create a caption in Spanish prompt = "caption es" model_inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device) input_len = model_inputs["input_ids"].shape[-1] with torch.inference_mode(): generation = model.generate(**model_inputs, max_new_tokens=100, do_sample=False) generation = generation[0][input_len:] decoded = processor.decode(generation,
skip_special_tokens=True) print(decoded) ``` ## Implementation information ### Hardware PaliGemma was trained using the latest generation of Tensor Processing Unit (TPU) hardware (TPUv5e). ### Software Training was done using [JAX](https://github.com/google/jax), [Flax](https://github.com/google/flax), [TFDS](https://github.com/tensorflow/datasets) and [`big_vision`](https://github.com/google-research/big_vision). JAX allows researchers to take advantage of the latest generation of hardware, including TPUs, for faster and more efficient training of large models. TFDS is used to access datasets and Flax is used for model architecture. The PaliGemma fine-tune code and inference code are released in the `big_vision` GitHub repository. ## Evaluation information ### Benchmark results In order to verify the transferability of PaliGemma to a wide variety of academic tasks, we fine-tune the pretrained models on each task. Additionally we train the mix model with a mixture of the transfer tasks. We report results on different resolutions to provide an impression of which tasks benefit from increased resolution. Importantly, none of these tasks or datasets are part of the pretraining data mixture, and their images are explicitly removed from the web-scale pre-training data. #### Mix model (fine-tune on mixture of transfer tasks) <table> <tbody><tr> <th>Benchmark</th> <th>Metric (split)</th> <th>mix-224</th> <th>mix-448</th> </tr> <tr> <td><a href="https://arxiv.org/abs/2401.06209">MMVP</a></td> <td>Paired Accuracy</td> <td>46.00</td> <td>45.33</td> </tr> <tr> <td><a href="https://arxiv.org/abs/2305.10355">POPE</a></td> <td>Accuracy<br>(random/popular/adversarial)</td> <td> 88.00<br> 86.63<br> 85.67 </td> <td> 89.37<br> 88.40<br> 87.47 </td> </tr> <tr> <td><a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a></td> <td>Accuracy (test)</td> <td>65.20</td> <td>65.47</td> </tr> </tbody></table> #### Single task (fine-tune on single task) <table> <tbody><tr> <th>Benchmark<br>(train split)</th> <th>Metric<br>(split)</th> <th>pt-224</th> <th>pt-448</th> <th>pt-896</th> </tr> <tr> <th>Captioning</th> </tr> <tr> <td> <a href="https://cocodataset.org/#home">COCO captions</a><br>(train+restval) </td> <td>CIDEr (val)</td> <td>141.92</td> <td>144.60</td> </tr> <tr> <td> <a href="https://nocaps.org/">NoCaps</a><br>(Eval of COCO<br>captions transfer) </td> <td>CIDEr (val)</td> <td>121.72</td> <td>123.58</td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">COCO-35L</a><br>(train) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 139.2<br> 115.8<br> 116.4 </td> <td> 141.2<br> 118.0<br> 118.6 </td> </tr> <tr> <td> <a href="https://arxiv.org/pdf/2205.12522">XM3600</a><br>(Eval of COCO-35L transfer) </td> <td>CIDEr dev<br>(en/avg-34/avg)</td> <td> 78.1<br> 41.3<br> 42.4 </td> <td> 80.0<br> 41.9<br> 42.9 </td> </tr> <tr> <td> <a href="https://textvqa.org/textcaps/">TextCaps</a><br>(train) </td> <td>CIDEr (val)</td> <td>127.48</td> <td>153.94</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2110.11624">SciCap</a><br>(first sentence, no subfigure)<br>(train+val) </td> <td>CIDEr/BLEU-4<br>(test)</td> <td> 162.25<br> 0.192<br> </td> <td> 181.49<br> 0.211<br> </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2108.03353">Screen2words</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>117.57</td> <td>119.59</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2010.04295">Widget Captioning</a><br>(train+dev) </td> <td>CIDEr (test)</td> <td>136.07</td> <td>148.36</td> </tr> <tr> <th>Question 
answering</th> </tr> <tr> <td> <a href="https://visualqa.org/index.html">VQAv2</a><br>(train+validation) </td> <td>Accuracy<br>(Test server - std)</td> <td>83.19</td> <td>85.64</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2401.06209">MMVP</a><br>(Eval of VQAv2 transfer) </td> <td>Paired Accuracy</td> <td>47.33</td> <td>45.33</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2305.10355">POPE</a><br>(Eval of VQAv2 transfer) </td> <td>Accuracy<br>(random/popular/<br>adversarial)</td> <td> 87.80<br> 85.87<br> 84.27 </td> <td> 88.23<br> 86.77<br> 85.90 </td> </tr> <tr> <td> <a href="https://okvqa.allenai.org/">OKVQA</a><br>(train) </td> <td>Accuracy (val)</td> <td>63.54</td> <td>63.15</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (MC)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>76.37</td> <td>76.90</td> </tr> <tr> <td> <a href="https://allenai.org/project/a-okvqa/home">A-OKVQA</a> (DA)<br>(train+val) </td> <td>Accuracy<br>(Test server)</td> <td>61.85</td> <td>63.22</td> </tr> <tr> <td> <a href="https://cs.stanford.edu/people/dorarad/gqa/about.html">GQA</a><br>(train_balanced+<br>val_balanced) </td> <td>Accuracy<br>(testdev balanced)</td> <td>65.61</td> <td>67.03</td> </tr> <tr> <td> <a href="https://aclanthology.org/2022.findings-acl.196/">xGQA</a><br>(Eval of GQA transfer) </td> <td>Mean Accuracy<br>(bn, de, en, id,<br>ko, pt, ru, zh)</td> <td>58.37</td> <td>59.07</td> </tr> <tr> <td> <a href="https://lil.nlp.cornell.edu/nlvr/">NLVR2</a><br>(train+dev) </td> <td>Accuracy (test)</td> <td>90.02</td> <td>88.93</td> </tr> <tr> <td> <a href="https://marvl-challenge.github.io/">MaRVL</a><br>(Eval of NLVR2 transfer) </td> <td>Mean Accuracy<br>(test)<br>(id, sw, ta, tr, zh)</td> <td>80.57</td> <td>76.78</td> </tr> <tr> <td> <a href="https://allenai.org/data/diagrams">AI2D</a><br>(train) </td> <td>Accuracy (test)</td> <td>72.12</td> <td>73.28</td> </tr> <tr> <td> <a href="https://scienceqa.github.io/">ScienceQA</a><br>(Img subset, no CoT)<br>(train+val) </td> <td>Accuracy (test)</td> <td>95.39</td> <td>95.93</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344334">RSVQA-LR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test)</td> <td>92.65</td> <td>93.11</td> </tr> <tr> <td> <a href="https://zenodo.org/records/6344367">RSVQA-HR</a> (Non numeric)<br>(train+val) </td> <td>Mean Accuracy<br>(test/test2)</td> <td> 92.61<br> 90.58 </td> <td> 92.79<br> 90.54 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/2203.10244">ChartQA</a><br>(human+aug)x(train+val) </td> <td>Mean Relaxed<br>Accuracy<br>(test_human,<br>test_aug)</td> <td>57.08</td> <td>71.36</td> </tr> <tr> <td> <a href="https://vizwiz.org/tasks-and-datasets/vqa/">VizWiz VQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td> 73.7 </td> <td> 75.52 </td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1810.12440">TallyQA</a><br>(train) </td> <td>Accuracy<br>(test_simple/<br>test_complex)</td> <td> 81.72<br> 69.56 </td> <td> 84.86<br> 72.27 </td> </tr> <tr> <td> <a href="https://ocr-vqa.github.io/">OCR-VQA</a><br>(train+val) </td> <td>Accuracy (test)</td> <td>72.32</td> <td>74.61</td> <td>74.93</td> </tr> <tr> <td> <a href="https://textvqa.org/">TextVQA</a><br>(train+val) </td> <td>Accuracy<br>(Test server - std)</td> <td>55.47</td> <td>73.15</td> <td>76.48</td> </tr> <tr> <td> <a href="https://www.docvqa.org/">DocVQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>43.74</td> <td>78.02</td> <td>84.77</td> </tr> <tr> <td> <a 
href="https://openaccess.thecvf.com/content/WACV2022/papers/Mathew_InfographicVQA_WACV_2022_paper.pdf">Infographic VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>28.46</td> <td>40.47</td> <td>47.75</td> </tr> <tr> <td> <a href="https://arxiv.org/abs/1905.13648">SceneText VQA</a><br>(train+val) </td> <td>ANLS (Test server)</td> <td>63.29</td> <td>81.82</td> <td>84.40</td> </tr> <tr> <th>Segmentation</th> </tr> <tr> <td> <a href="https://arxiv.org/abs/1608.00272">RefCOCO</a><br>(combined refcoco, refcoco+,<br>refcocog excluding val<br>and test images) </td> <td>MIoU<br>(validation)<br>refcoco/refcoco+/<br>refcocog</td> <td> 73.40<br> 68.32<br> 67.65 </td> <td> 75.57<br> 69.76<br> 70.17 </td> <td> 76.94<br> 72.18<br> 72.22 </td> </tr> <tr> <th>Video tasks (Caption/QA)</th> </tr> <tr> <td>MSR-VTT (Captioning)</td> <td>CIDEr (test)</td> <td>70.54</td> </tr> <tr> <td>MSR-VTT (QA)</td> <td>Accuracy (test)</td> <td>50.09</td> </tr> <tr> <td>ActivityNet (Captioning)</td> <td>CIDEr (test)</td> <td>34.62</td> </tr> <tr> <td>ActivityNet (QA)</td> <td>Accuracy (test)</td> <td>50.78</td> </tr> <tr> <td>VATEX (Captioning)</td> <td>CIDEr (test)</td> <td>79.73</td> </tr> <tr> <td>MSVD (QA)</td> <td>Accuracy (test)</td> <td>60.22</td> </tr> </tbody></table> ## Ethics and safety ### Evaluation approach Our evaluation methods include structured evaluations and internal red-teaming testing of relevant content policies. Red-teaming was conducted by a number of different teams, each with different goals and human evaluation metrics. These models were evaluated against a number of different categories relevant to ethics and safety, including: * Human evaluation on prompts covering child safety, content safety and representational harms. See the [Gemma model card](https://ai.google.dev/gemma/docs/model_card#evaluation_approach) for more details on evaluation approach, but with image captioning and visual question answering setups. * Image-to-Text benchmark evaluation: Benchmark against relevant academic datasets such as FairFace Dataset ([Karkkainen et al., 2021](https://arxiv.org/abs/1908.04913)). ### Evaluation results * The human evaluation results of ethics and safety evaluations are within acceptable thresholds for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child safety, content safety and representational harms. * On top of robust internal evaluations, we also use the Perspective API (threshold of 0.8) to measure toxicity, profanity, and other potential issues in the generated captions for images sourced from the FairFace dataset. We report the maximum and median values observed across subgroups for each of the perceived gender, ethnicity, and age attributes. 
<table> <tbody><tr> </tr></tbody><tbody><tr><th>Metric</th> <th>Perceived<br>gender</th> <th></th> <th>Ethnicity</th> <th></th> <th>Age group</th> <th></th> </tr> <tr> <th></th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> <th>Maximum</th> <th>Median</th> </tr> <tr> <td>Toxicity</td> <td>0.04%</td> <td>0.03%</td> <td>0.08%</td> <td>0.00%</td> <td>0.09%</td> <td>0.00%</td> </tr> <tr> <td>Identity Attack</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> <tr> <td>Insult</td> <td>0.06%</td> <td>0.04%</td> <td>0.09%</td> <td>0.07%</td> <td>0.16%</td> <td>0.00%</td> </tr> <tr> <td>Threat</td> <td>0.06%</td> <td>0.05%</td> <td>0.14%</td> <td>0.05%</td> <td>0.17%</td> <td>0.00%</td> </tr> <tr> <td>Profanity</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> <td>0.00%</td> </tr> </tbody></table> ## Usage and limitations ### Intended usage Open Vision Language Models (VLMs) have a wide range of applications across various industries and domains. The following list of potential uses is not comprehensive. The purpose of this list is to provide contextual information about the possible use-cases that the model creators considered as part of model training and development. Fine-tune on specific vision-language task: * The pre-trained models can be fine-tuned on a wide range of vision-language tasks such as: image captioning, short video caption, visual question answering, text reading, object detection and object segmentation. * The pre-trained models can be fine-tuned for specific domains such as remote sensing question answering, visual questions from people who are blind, science question answering, describe UI element functionalities. * The pre-trained models can be fine-tuned for tasks with non-textual outputs such as bounding boxes or segmentation masks. Vision-language research: * The pre-trained models and fine-tuned models can serve as a foundation for researchers to experiment with VLM techniques, develop algorithms, and contribute to the advancement of the field. ### Ethical considerations and risks The development of vision-language models (VLMs) raises several ethical concerns. In creating an open model, we have carefully considered the following: * Bias and Fairness * VLMs trained on large-scale, real-world image-text data can reflect socio-cultural biases embedded in the training material. These models underwent careful scrutiny, input data pre-processing described and posterior evaluations reported in this card. * Misinformation and Misuse * VLMs can be misused to generate text that is false, misleading, or harmful. * Guidelines are provided for responsible use with the model, see the [Responsible Generative AI Toolkit](https://ai.google.dev/responsible). * Transparency and Accountability * This model card summarizes details on the models' architecture, capabilities, limitations, and evaluation processes. * A responsibly developed open model offers the opportunity to share innovation by making VLM technology accessible to developers and researchers across the AI ecosystem. Risks identified and mitigations: * **Perpetuation of biases:** It's encouraged to perform continuous monitoring (using evaluation metrics, human review) and the exploration of de-biasing techniques during model training, fine-tuning, and other use cases. * **Generation of harmful content:** Mechanisms and guidelines for content safety are essential. 
Developers are encouraged to exercise caution and implement appropriate content safety safeguards based on their specific product policies and application use cases.
* **Misuse for malicious purposes:** Technical limitations and developer and end-user education can help mitigate malicious applications of LLMs. Educational resources and reporting mechanisms for users to flag misuse are provided. Prohibited uses of Gemma models are outlined in the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* **Privacy violations:** Models were trained on data filtered to remove certain personal information and sensitive data. Developers are encouraged to adhere to privacy regulations with privacy-preserving techniques.

### Limitations

* Most limitations inherited from the underlying Gemma model still apply:
  * VLMs are better at tasks that can be framed with clear prompts and instructions. Open-ended or highly complex tasks might be challenging.
  * Natural language is inherently complex. VLMs might struggle to grasp subtle nuances, sarcasm, or figurative language.
  * VLMs generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements.
  * VLMs rely on statistical patterns in language and images. They might lack the ability to apply common sense reasoning in certain situations.
* PaliGemma was designed first and foremost to serve as a general pre-trained model for transfer to specialized tasks. Hence, its "out of the box" or "zero-shot" performance might lag behind models designed specifically for those tasks.
* PaliGemma is not a multi-turn chatbot. It is designed for a single round of image and text input.
CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF
CHE-72
2024-06-22T10:41:07Z
621
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "zh", "base_model:baichuan-inc/Baichuan2-7B-Chat", "region:us" ]
null
2024-06-22T10:40:52Z
---
base_model: baichuan-inc/Baichuan2-7B-Chat
language:
- en
- zh
license_name: baichuan2-community-license
license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf
tags:
- llama-cpp
- gguf-my-repo
tasks:
- text-generation
---

# CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF
This model was converted to GGUF format from [`baichuan-inc/Baichuan2-7B-Chat`](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF --hf-file baichuan2-7b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF --hf-file baichuan2-7b-chat-q3_k_s.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF --hf-file baichuan2-7b-chat-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF --hf-file baichuan2-7b-chat-q3_k_s.gguf -c 2048
```
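The same GGUF file can also be used from Python. The following is a minimal sketch, not from the original card, using the third-party `llama-cpp-python` bindings and `huggingface_hub` (both assumed to be installed):

```python
# Minimal sketch, assuming `pip install llama-cpp-python huggingface_hub`;
# API names follow llama-cpp-python's documented interface.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized weights from this repo.
gguf_path = hf_hub_download(
    repo_id="CHE-72/Baichuan2-7B-Chat-Q3_K_S-GGUF",
    filename="baichuan2-7b-chat-q3_k_s.gguf",
)

# Load the model with the same 2048-token context used in the server example above.
llm = Llama(model_path=gguf_path, n_ctx=2048)

output = llm("The meaning to life and the universe is", max_tokens=64)
print(output["choices"][0]["text"])
```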
Lajavaness/bilingual-embedding-large
Lajavaness
2024-06-29T17:44:01Z
621
4
sentence-transformers
[ "sentence-transformers", "safetensors", "bilingual", "feature-extraction", "sentence-similarity", "transformers", "sentence-embedding", "mteb", "custom_code", "fr", "en", "arxiv:2010.08240", "arxiv:1908.10084", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-06-24T09:08:32Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - sentence-embedding - mteb model-index: - name: bilingual-embedding-large results: - task: type: Clustering dataset: type: lyon-nlp/alloprof name: MTEB AlloProfClusteringP2P config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 56.77190187231352 - type: v_measures value: [0.5591529760439443, 0.5974955147482336, 0.5823433887463434, 0.5582030533486634, 0.5608784889863708] - task: type: Clustering dataset: type: lyon-nlp/alloprof name: MTEB AlloProfClusteringS2S config: default split: test revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b metrics: - type: v_measure value: 43.8035296814567 - type: v_measures value: [0.48563274496248593, 0.39879051227524587, 0.4846924607458884, 0.48918774336682036, 0.42626900366019754] - task: type: Reranking dataset: type: lyon-nlp/mteb-fr-reranking-alloprof-s2p name: MTEB AlloprofReranking config: default split: test revision: 65393d0d7a08a10b4e348135e824f385d420b0fd metrics: - type: map value: 71.51753654257872 - type: mrr value: 72.59564245963209 - type: nAUC_map_diff1 value: 54.75803232873865 - type: nAUC_map_max value: 22.763024994564173 - type: nAUC_mrr_diff1 value: 54.34234068011684 - type: nAUC_mrr_max value: 23.607169979872587 - task: type: Retrieval dataset: type: lyon-nlp/alloprof name: MTEB AlloprofRetrieval config: default split: test revision: fcf295ea64c750f41fadbaa37b9b861558e1bfbd metrics: - type: map_at_1 value: 26.598 - type: map_at_10 value: 37.808 - type: map_at_100 value: 38.726 - type: map_at_1000 value: 38.778 - type: map_at_20 value: 38.34 - type: map_at_3 value: 34.599999999999994 - type: map_at_5 value: 36.385 - type: mrr_at_1 value: 26.59758203799655 - type: mrr_at_10 value: 37.80750541437081 - type: mrr_at_100 value: 38.72559086585011 - type: mrr_at_1000 value: 38.77782362787768 - type: mrr_at_20 value: 38.33973921272315 - type: mrr_at_3 value: 34.59988485895229 - type: mrr_at_5 value: 36.385290731145794 - type: nauc_map_at_1000_diff1 value: 35.14662281558396 - type: nauc_map_at_1000_max value: 33.29444147034975 - type: nauc_map_at_100_diff1 value: 35.14901176407076 - type: nauc_map_at_100_max value: 33.32860103491456 - type: nauc_map_at_10_diff1 value: 34.96490139273049 - type: nauc_map_at_10_max value: 33.252929811567526 - type: nauc_map_at_1_diff1 value: 40.652132824871664 - type: nauc_map_at_1_max value: 28.53606743237387 - type: nauc_map_at_20_diff1 value: 35.11177701050558 - type: nauc_map_at_20_max value: 33.38822815064973 - type: nauc_map_at_3_diff1 value: 35.08068505968589 - type: nauc_map_at_3_max value: 32.10125944853496 - type: nauc_map_at_5_diff1 value: 34.78702330258393 - type: nauc_map_at_5_max value: 32.89738895858572 - type: nauc_mrr_at_1000_diff1 value: 35.14662281558396 - type: nauc_mrr_at_1000_max value: 33.29444147034975 - type: nauc_mrr_at_100_diff1 value: 35.14901176407076 - type: nauc_mrr_at_100_max value: 33.32860103491456 - type: nauc_mrr_at_10_diff1 value: 34.96490139273049 - type: nauc_mrr_at_10_max value: 33.252929811567526 - type: nauc_mrr_at_1_diff1 value: 40.652132824871664 - type: nauc_mrr_at_1_max value: 28.53606743237387 - type: nauc_mrr_at_20_diff1 value: 35.11177701050558 - type: nauc_mrr_at_20_max value: 33.38822815064973 - type: nauc_mrr_at_3_diff1 value: 35.08068505968589 - type: nauc_mrr_at_3_max value: 32.10125944853496 - type: nauc_mrr_at_5_diff1 value: 34.78702330258393 
- type: nauc_mrr_at_5_max value: 32.89738895858572 - type: nauc_ndcg_at_1000_diff1 value: 34.24563790011671 - type: nauc_ndcg_at_1000_max value: 35.1750183970367 - type: nauc_ndcg_at_100_diff1 value: 34.26540576121903 - type: nauc_ndcg_at_100_max value: 36.211723696019526 - type: nauc_ndcg_at_10_diff1 value: 33.363793757214985 - type: nauc_ndcg_at_10_max value: 35.724249117130285 - type: nauc_ndcg_at_1_diff1 value: 40.652132824871664 - type: nauc_ndcg_at_1_max value: 28.53606743237387 - type: nauc_ndcg_at_20_diff1 value: 33.82271561239704 - type: nauc_ndcg_at_20_max value: 36.31393467921569 - type: nauc_ndcg_at_3_diff1 value: 33.47602274641705 - type: nauc_ndcg_at_3_max value: 33.25855418368982 - type: nauc_ndcg_at_5_diff1 value: 32.982858998932784 - type: nauc_ndcg_at_5_max value: 34.683197330270694 - type: nauc_precision_at_1000_diff1 value: 20.695105766988465 - type: nauc_precision_at_1000_max value: 79.24780459990231 - type: nauc_precision_at_100_diff1 value: 32.1302666444128 - type: nauc_precision_at_100_max value: 59.829884850113594 - type: nauc_precision_at_10_diff1 value: 28.35768310864352 - type: nauc_precision_at_10_max value: 44.56445585989902 - type: nauc_precision_at_1_diff1 value: 40.652132824871664 - type: nauc_precision_at_1_max value: 28.53606743237387 - type: nauc_precision_at_20_diff1 value: 29.66063385017264 - type: nauc_precision_at_20_max value: 48.87458861994212 - type: nauc_precision_at_3_diff1 value: 29.054817586181176 - type: nauc_precision_at_3_max value: 36.488441946705876 - type: nauc_precision_at_5_diff1 value: 27.79863250059474 - type: nauc_precision_at_5_max value: 40.02591480379938 - type: nauc_recall_at_1000_diff1 value: 20.695105766987187 - type: nauc_recall_at_1000_max value: 79.24780459990147 - type: nauc_recall_at_100_diff1 value: 32.13026664441275 - type: nauc_recall_at_100_max value: 59.829884850113736 - type: nauc_recall_at_10_diff1 value: 28.357683108643496 - type: nauc_recall_at_10_max value: 44.56445585989908 - type: nauc_recall_at_1_diff1 value: 40.652132824871664 - type: nauc_recall_at_1_max value: 28.53606743237387 - type: nauc_recall_at_20_diff1 value: 29.660633850172687 - type: nauc_recall_at_20_max value: 48.87458861994213 - type: nauc_recall_at_3_diff1 value: 29.05481758618121 - type: nauc_recall_at_3_max value: 36.48844194670593 - type: nauc_recall_at_5_diff1 value: 27.798632500594728 - type: nauc_recall_at_5_max value: 40.025914803799395 - type: ndcg_at_1 value: 26.598 - type: ndcg_at_10 value: 43.902 - type: ndcg_at_100 value: 48.647 - type: ndcg_at_1000 value: 50.135 - type: ndcg_at_20 value: 45.794000000000004 - type: ndcg_at_3 value: 37.233 - type: ndcg_at_5 value: 40.47 - type: precision_at_1 value: 26.598 - type: precision_at_10 value: 6.334 - type: precision_at_100 value: 0.8619999999999999 - type: precision_at_1000 value: 0.098 - type: precision_at_20 value: 3.5360000000000005 - type: precision_at_3 value: 14.954 - type: precision_at_5 value: 10.552999999999999 - type: recall_at_1 value: 26.598 - type: recall_at_10 value: 63.342 - type: recall_at_100 value: 86.226 - type: recall_at_1000 value: 98.143 - type: recall_at_20 value: 70.72500000000001 - type: recall_at_3 value: 44.862 - type: recall_at_5 value: 52.763000000000005 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (fr) config: fr split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 45.504 - type: f1 value: 43.653559521530944 - type: f1_weighted value: 43.65355952153093 - 
task: type: Retrieval dataset: type: maastrichtlawtech/bsard name: MTEB BSARDRetrieval config: default split: test revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59 metrics: - type: map_at_1 value: 6.757000000000001 - type: map_at_10 value: 10.569 - type: map_at_100 value: 11.641 - type: map_at_1000 value: 11.758000000000001 - type: map_at_20 value: 11.152 - type: map_at_3 value: 9.009 - type: map_at_5 value: 9.82 - type: mrr_at_1 value: 6.756756756756757 - type: mrr_at_10 value: 10.568604318604317 - type: mrr_at_100 value: 11.640572409499667 - type: mrr_at_1000 value: 11.757590970218725 - type: mrr_at_20 value: 11.152253605813977 - type: mrr_at_3 value: 9.00900900900901 - type: mrr_at_5 value: 9.819819819819822 - type: nauc_map_at_1000_diff1 value: 20.497861726027473 - type: nauc_map_at_1000_max value: 8.515289767591149 - type: nauc_map_at_100_diff1 value: 20.457264311409304 - type: nauc_map_at_100_max value: 8.478315371126714 - type: nauc_map_at_10_diff1 value: 22.122755317386826 - type: nauc_map_at_10_max value: 8.684832816651243 - type: nauc_map_at_1_diff1 value: 32.7324170733489 - type: nauc_map_at_1_max value: 17.632507133954086 - type: nauc_map_at_20_diff1 value: 21.309144510706552 - type: nauc_map_at_20_max value: 8.357569194331324 - type: nauc_map_at_3_diff1 value: 24.640888091380244 - type: nauc_map_at_3_max value: 8.512417316260153 - type: nauc_map_at_5_diff1 value: 23.696706451320555 - type: nauc_map_at_5_max value: 10.065581499162409 - type: nauc_mrr_at_1000_diff1 value: 20.497861726027473 - type: nauc_mrr_at_1000_max value: 8.515289767591149 - type: nauc_mrr_at_100_diff1 value: 20.457264311409304 - type: nauc_mrr_at_100_max value: 8.478315371126714 - type: nauc_mrr_at_10_diff1 value: 22.122755317386826 - type: nauc_mrr_at_10_max value: 8.684832816651243 - type: nauc_mrr_at_1_diff1 value: 32.7324170733489 - type: nauc_mrr_at_1_max value: 17.632507133954086 - type: nauc_mrr_at_20_diff1 value: 21.309144510706552 - type: nauc_mrr_at_20_max value: 8.357569194331324 - type: nauc_mrr_at_3_diff1 value: 24.640888091380244 - type: nauc_mrr_at_3_max value: 8.512417316260153 - type: nauc_mrr_at_5_diff1 value: 23.696706451320555 - type: nauc_mrr_at_5_max value: 10.065581499162409 - type: nauc_ndcg_at_1000_diff1 value: 13.492135744038377 - type: nauc_ndcg_at_1000_max value: 9.019754831261519 - type: nauc_ndcg_at_100_diff1 value: 12.386959698428296 - type: nauc_ndcg_at_100_max value: 8.140082932773288 - type: nauc_ndcg_at_10_diff1 value: 18.08185602779908 - type: nauc_ndcg_at_10_max value: 6.451070792965509 - type: nauc_ndcg_at_1_diff1 value: 32.7324170733489 - type: nauc_ndcg_at_1_max value: 17.632507133954086 - type: nauc_ndcg_at_20_diff1 value: 16.104176022358285 - type: nauc_ndcg_at_20_max value: 5.670070730016123 - type: nauc_ndcg_at_3_diff1 value: 22.51956353681352 - type: nauc_ndcg_at_3_max value: 6.154988622749747 - type: nauc_ndcg_at_5_diff1 value: 20.90624606199523 - type: nauc_ndcg_at_5_max value: 9.018795972091642 - type: nauc_precision_at_1000_diff1 value: -7.5156227166904666 - type: nauc_precision_at_1000_max value: 21.389191162087265 - type: nauc_precision_at_100_diff1 value: -3.485942444137334 - type: nauc_precision_at_100_max value: 11.604808001151612 - type: nauc_precision_at_10_diff1 value: 10.569691962167767 - type: nauc_precision_at_10_max value: 2.5249979762023176 - type: nauc_precision_at_1_diff1 value: 32.7324170733489 - type: nauc_precision_at_1_max value: 17.632507133954086 - type: nauc_precision_at_20_diff1 value: 6.993263678069275 - type: 
nauc_precision_at_20_max value: 1.212342757686577 - type: nauc_precision_at_3_diff1 value: 17.747702545254146 - type: nauc_precision_at_3_max value: 0.8595742156164364 - type: nauc_precision_at_5_diff1 value: 15.109328976502349 - type: nauc_precision_at_5_max value: 7.278141846526832 - type: nauc_recall_at_1000_diff1 value: -7.515622716690405 - type: nauc_recall_at_1000_max value: 21.389191162087183 - type: nauc_recall_at_100_diff1 value: -3.4859424441373585 - type: nauc_recall_at_100_max value: 11.604808001151582 - type: nauc_recall_at_10_diff1 value: 10.569691962167727 - type: nauc_recall_at_10_max value: 2.524997976202258 - type: nauc_recall_at_1_diff1 value: 32.7324170733489 - type: nauc_recall_at_1_max value: 17.632507133954086 - type: nauc_recall_at_20_diff1 value: 6.9932636780692325 - type: nauc_recall_at_20_max value: 1.2123427576865247 - type: nauc_recall_at_3_diff1 value: 17.74770254525415 - type: nauc_recall_at_3_max value: 0.8595742156164401 - type: nauc_recall_at_5_diff1 value: 15.109328976502375 - type: nauc_recall_at_5_max value: 7.278141846526856 - type: ndcg_at_1 value: 6.757000000000001 - type: ndcg_at_10 value: 13.147 - type: ndcg_at_100 value: 18.932 - type: ndcg_at_1000 value: 22.663 - type: ndcg_at_20 value: 15.222 - type: ndcg_at_3 value: 9.812999999999999 - type: ndcg_at_5 value: 11.286 - type: precision_at_1 value: 6.757000000000001 - type: precision_at_10 value: 2.162 - type: precision_at_100 value: 0.5 - type: precision_at_1000 value: 0.08099999999999999 - type: precision_at_20 value: 1.486 - type: precision_at_3 value: 4.054 - type: precision_at_5 value: 3.1530000000000005 - type: recall_at_1 value: 6.757000000000001 - type: recall_at_10 value: 21.622 - type: recall_at_100 value: 50.0 - type: recall_at_1000 value: 80.631 - type: recall_at_20 value: 29.73 - type: recall_at_3 value: 12.162 - type: recall_at_5 value: 15.766 - task: type: Clustering dataset: type: lyon-nlp/clustering-hal-s2s name: MTEB HALClusteringS2S config: default split: test revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915 metrics: - type: v_measure value: 24.858750161104126 - type: v_measures value: [0.2984717048033491, 0.2554963718476627, 0.27306743677619566, 0.2655317293951285, 0.22365142310852648] - task: type: Clustering dataset: type: mlsum name: MTEB MLSUMClusteringP2P config: fr split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 39.196489283513955 - type: v_measures value: [0.397079645426394, 0.41814959265244056, 0.4075805000522318, 0.3777207448521023, 0.3499037023664506] - task: type: Clustering dataset: type: mlsum name: MTEB MLSUMClusteringS2S config: fr split: test revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7 metrics: - type: v_measure value: 38.90242301200363 - type: v_measures value: [0.3879152638224075, 0.4155376524170288, 0.41644489647223826, 0.3720493791140543, 0.33885028730003314] - task: type: Classification dataset: type: mteb/mtop_domain name: MTEB MTOPDomainClassification (fr) config: fr split: test revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf metrics: - type: accuracy value: 86.88380833072345 - type: f1 value: 86.96787213648228 - type: f1_weighted value: 86.83432508604707 - task: type: Classification dataset: type: mteb/mtop_intent name: MTEB MTOPIntentClassification (fr) config: fr split: test revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba metrics: - type: accuracy value: 63.780144065142494 - type: f1 value: 46.070400220183394 - type: f1_weighted value: 66.2871618019472 - task: type: Classification dataset: 
type: mteb/masakhanews name: MTEB MasakhaNEWSClassification (fra) config: fra split: test revision: 18193f187b92da67168c655c9973a165ed9593dd metrics: - type: accuracy value: 73.64928909952606 - type: f1 value: 70.38554255346646 - type: f1_weighted value: 73.78534895009892 - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringP2P (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 53.94667838037061 - type: v_measures value: [1.0, 0.02235188516574002, 0.3929431892497775, 0.5064698281428253, 0.7755690164601873] - task: type: Clustering dataset: type: masakhane/masakhanews name: MTEB MasakhaNEWSClusteringS2S (fra) config: fra split: test revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60 metrics: - type: v_measure value: 34.0804579102654 - type: v_measures value: [1.0, 0.03741154081338069, 0.3515971997960832, 0.027612964551341854, 0.2874011903524643] - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (fr) config: fr split: test revision: 4672e20407010da34463acc759c162ca9734bca6 metrics: - type: accuracy value: 66.42905178211164 - type: f1 value: 64.35827544332014 - type: f1_weighted value: 65.57615486214955 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (fr) config: fr split: test revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8 metrics: - type: accuracy value: 70.72293207800942 - type: f1 value: 70.02549388005589 - type: f1_weighted value: 70.47189927452128 - task: type: Retrieval dataset: type: jinaai/mintakaqa name: MTEB MintakaRetrieval (fr) config: fr split: test revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e metrics: - type: map_at_1 value: 19.41 - type: map_at_10 value: 28.254 - type: map_at_100 value: 29.355999999999998 - type: map_at_1000 value: 29.444 - type: map_at_20 value: 28.918 - type: map_at_3 value: 25.833000000000002 - type: map_at_5 value: 27.141 - type: mrr_at_1 value: 19.41031941031941 - type: mrr_at_10 value: 28.254309504309553 - type: mrr_at_100 value: 29.35551984998816 - type: mrr_at_1000 value: 29.44409433115412 - type: mrr_at_20 value: 28.91784141291608 - type: mrr_at_3 value: 25.832650832650856 - type: mrr_at_5 value: 27.141004641004695 - type: nauc_map_at_1000_diff1 value: 21.162510480664746 - type: nauc_map_at_1000_max value: 30.110199608609662 - type: nauc_map_at_100_diff1 value: 21.14297046777571 - type: nauc_map_at_100_max value: 30.122604363568612 - type: nauc_map_at_10_diff1 value: 21.199056431758258 - type: nauc_map_at_10_max value: 30.270246884757647 - type: nauc_map_at_1_diff1 value: 26.23217963475422 - type: nauc_map_at_1_max value: 25.259209130612533 - type: nauc_map_at_20_diff1 value: 21.13743075881012 - type: nauc_map_at_20_max value: 30.22788200140141 - type: nauc_map_at_3_diff1 value: 21.972972142629406 - type: nauc_map_at_3_max value: 29.75552354821982 - type: nauc_map_at_5_diff1 value: 21.79084317530715 - type: nauc_map_at_5_max value: 30.387209515342473 - type: nauc_mrr_at_1000_diff1 value: 21.162510480664746 - type: nauc_mrr_at_1000_max value: 30.110199608609662 - type: nauc_mrr_at_100_diff1 value: 21.14297046777571 - type: nauc_mrr_at_100_max value: 30.122604363568612 - type: nauc_mrr_at_10_diff1 value: 21.199056431758258 - type: nauc_mrr_at_10_max value: 30.270246884757647 - type: nauc_mrr_at_1_diff1 value: 26.23217963475422 - type: nauc_mrr_at_1_max value: 25.259209130612533 - type: nauc_mrr_at_20_diff1 value: 
21.13743075881012 - type: nauc_mrr_at_20_max value: 30.22788200140141 - type: nauc_mrr_at_3_diff1 value: 21.972972142629406 - type: nauc_mrr_at_3_max value: 29.75552354821982 - type: nauc_mrr_at_5_diff1 value: 21.79084317530715 - type: nauc_mrr_at_5_max value: 30.387209515342473 - type: nauc_ndcg_at_1000_diff1 value: 19.513865750647934 - type: nauc_ndcg_at_1000_max value: 30.88832578481811 - type: nauc_ndcg_at_100_diff1 value: 18.80377362018204 - type: nauc_ndcg_at_100_max value: 30.952837388928288 - type: nauc_ndcg_at_10_diff1 value: 19.020409577228836 - type: nauc_ndcg_at_10_max value: 31.70346401198393 - type: nauc_ndcg_at_1_diff1 value: 26.23217963475422 - type: nauc_ndcg_at_1_max value: 25.259209130612533 - type: nauc_ndcg_at_20_diff1 value: 18.76950183960116 - type: nauc_ndcg_at_20_max value: 31.598953492190745 - type: nauc_ndcg_at_3_diff1 value: 20.824179655562357 - type: nauc_ndcg_at_3_max value: 31.0541305570042 - type: nauc_ndcg_at_5_diff1 value: 20.483978673834002 - type: nauc_ndcg_at_5_max value: 32.10721692420019 - type: nauc_precision_at_1000_diff1 value: 2.2196094973599374 - type: nauc_precision_at_1000_max value: 40.25789000420308 - type: nauc_precision_at_100_diff1 value: 8.63682048375218 - type: nauc_precision_at_100_max value: 32.32052516290328 - type: nauc_precision_at_10_diff1 value: 12.717058324435426 - type: nauc_precision_at_10_max value: 35.265251512978985 - type: nauc_precision_at_1_diff1 value: 26.23217963475422 - type: nauc_precision_at_1_max value: 25.259209130612533 - type: nauc_precision_at_20_diff1 value: 11.220279583941753 - type: nauc_precision_at_20_max value: 35.20392547315769 - type: nauc_precision_at_3_diff1 value: 17.863403819374867 - type: nauc_precision_at_3_max value: 34.42645261456197 - type: nauc_precision_at_5_diff1 value: 17.114444816553625 - type: nauc_precision_at_5_max value: 36.59616112935629 - type: nauc_recall_at_1000_diff1 value: 2.219609497359936 - type: nauc_recall_at_1000_max value: 40.257890004202366 - type: nauc_recall_at_100_diff1 value: 8.63682048375225 - type: nauc_recall_at_100_max value: 32.320525162903365 - type: nauc_recall_at_10_diff1 value: 12.71705832443547 - type: nauc_recall_at_10_max value: 35.26525151297903 - type: nauc_recall_at_1_diff1 value: 26.23217963475422 - type: nauc_recall_at_1_max value: 25.259209130612533 - type: nauc_recall_at_20_diff1 value: 11.220279583941677 - type: nauc_recall_at_20_max value: 35.203925473157646 - type: nauc_recall_at_3_diff1 value: 17.86340381937489 - type: nauc_recall_at_3_max value: 34.42645261456199 - type: nauc_recall_at_5_diff1 value: 17.11444481655362 - type: nauc_recall_at_5_max value: 36.59616112935631 - type: ndcg_at_1 value: 19.41 - type: ndcg_at_10 value: 32.956 - type: ndcg_at_100 value: 38.504 - type: ndcg_at_1000 value: 41.217 - type: ndcg_at_20 value: 35.35 - type: ndcg_at_3 value: 27.898 - type: ndcg_at_5 value: 30.249 - type: precision_at_1 value: 19.41 - type: precision_at_10 value: 4.795 - type: precision_at_100 value: 0.744 - type: precision_at_1000 value: 0.096 - type: precision_at_20 value: 2.869 - type: precision_at_3 value: 11.289 - type: precision_at_5 value: 7.912 - type: recall_at_1 value: 19.41 - type: recall_at_10 value: 47.952 - type: recall_at_100 value: 74.36500000000001 - type: recall_at_1000 value: 96.478 - type: recall_at_20 value: 57.371 - type: recall_at_3 value: 33.866 - type: recall_at_5 value: 39.558 - task: type: PairClassification dataset: type: GEM/opusparcus name: MTEB OpusparcusPC (fr) config: fr split: test revision: 
9e9b1f8ef51616073f47f306f7f47dd91663f86a metrics: - type: cos_sim_accuracy value: 84.60490463215259 - type: cos_sim_ap value: 94.73385323002613 - type: cos_sim_f1 value: 89.28571428571428 - type: cos_sim_precision value: 82.83772302463891 - type: cos_sim_recall value: 96.8222442899702 - type: dot_accuracy value: 84.60490463215259 - type: dot_ap value: 94.73385323002613 - type: dot_f1 value: 89.28571428571428 - type: dot_precision value: 82.83772302463891 - type: dot_recall value: 96.8222442899702 - type: euclidean_accuracy value: 84.60490463215259 - type: euclidean_ap value: 94.73385323002613 - type: euclidean_f1 value: 89.28571428571428 - type: euclidean_precision value: 82.83772302463891 - type: euclidean_recall value: 96.8222442899702 - type: manhattan_accuracy value: 84.46866485013624 - type: manhattan_ap value: 94.6952667850496 - type: manhattan_f1 value: 89.20454545454546 - type: manhattan_precision value: 85.24886877828054 - type: manhattan_recall value: 93.54518371400199 - type: max_accuracy value: 84.60490463215259 - type: max_ap value: 94.73385323002613 - type: max_f1 value: 89.28571428571428 - task: type: PairClassification dataset: type: paws-x name: MTEB PawsX (fr) config: fr split: test revision: 8a04d940a42cd40658986fdd8e3da561533a3646 metrics: - type: cos_sim_accuracy value: 64.25 - type: cos_sim_ap value: 64.3383111759356 - type: cos_sim_f1 value: 63.03703703703704 - type: cos_sim_precision value: 47.35670562047857 - type: cos_sim_recall value: 94.24141749723145 - type: dot_accuracy value: 64.25 - type: dot_ap value: 64.33647413039195 - type: dot_f1 value: 63.03703703703704 - type: dot_precision value: 47.35670562047857 - type: dot_recall value: 94.24141749723145 - type: euclidean_accuracy value: 64.25 - type: euclidean_ap value: 64.33837256418407 - type: euclidean_f1 value: 63.03703703703704 - type: euclidean_precision value: 47.35670562047857 - type: euclidean_recall value: 94.24141749723145 - type: manhattan_accuracy value: 64.3 - type: manhattan_ap value: 64.32795078191543 - type: manhattan_f1 value: 62.935507783543365 - type: manhattan_precision value: 47.298050139275766 - type: manhattan_recall value: 94.01993355481729 - type: max_accuracy value: 64.3 - type: max_ap value: 64.33837256418407 - type: max_f1 value: 63.03703703703704 - task: type: STS dataset: type: Lajavaness/SICK-fr name: MTEB SICKFr config: default split: test revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a metrics: - type: cos_sim_pearson value: 84.72950355896789 - type: cos_sim_spearman value: 79.4608290812696 - type: euclidean_pearson value: 81.32539142627735 - type: euclidean_spearman value: 79.46019403205177 - type: manhattan_pearson value: 81.35998006674681 - type: manhattan_spearman value: 79.41438354042496 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (fr) config: fr split: test revision: eea2b4fe26a775864c896887d910b76a8098ad3f metrics: - type: cos_sim_pearson value: 81.94721498063055 - type: cos_sim_spearman value: 83.27561639335909 - type: euclidean_pearson value: 80.74250472409508 - type: euclidean_spearman value: 83.27561639335909 - type: manhattan_pearson value: 80.55336080634422 - type: manhattan_spearman value: 83.46556509775091 - task: type: STS dataset: type: PhilipMay/stsb_multi_mt name: MTEB STSBenchmarkMultilingualSTS (fr) config: fr split: test revision: 93d57ef91790589e3ce9c365164337a8a78b7632 metrics: - type: cos_sim_pearson value: 86.42208373352562 - type: cos_sim_spearman value: 86.99991276887566 - type: euclidean_pearson value: 
85.50325028600815 - type: euclidean_spearman value: 87.00166758198344 - type: manhattan_pearson value: 85.51048739822163 - type: manhattan_spearman value: 86.98373812309134 - task: type: Summarization dataset: type: lyon-nlp/summarization-summeval-fr-p2p name: MTEB SummEvalFr config: default split: test revision: b385812de6a9577b6f4d0f88c6a6e35395a94054 metrics: - type: cos_sim_pearson value: 31.911797754639164 - type: cos_sim_spearman value: 32.17186521965941 - type: dot_pearson value: 31.911796813216963 - type: dot_spearman value: 32.17186521965941 - task: type: Reranking dataset: type: lyon-nlp/mteb-fr-reranking-syntec-s2p name: MTEB SyntecReranking config: default split: test revision: daf0863838cd9e3ba50544cdce3ac2b338a1b0ad metrics: - type: map value: 86.04285714285714 - type: mrr value: 86.04285714285714 - type: nAUC_map_diff1 value: 67.26948440486838 - type: nAUC_map_max value: 1.8106095852919237 - type: nAUC_mrr_diff1 value: 67.26948440486838 - type: nAUC_mrr_max value: 1.8106095852919237 - task: type: Retrieval dataset: type: lyon-nlp/mteb-fr-retrieval-syntec-s2p name: MTEB SyntecRetrieval config: default split: test revision: 19661ccdca4dfc2d15122d776b61685f48c68ca9 metrics: - type: map_at_1 value: 72.0 - type: map_at_10 value: 81.294 - type: map_at_100 value: 81.428 - type: map_at_1000 value: 81.428 - type: map_at_20 value: 81.38499999999999 - type: map_at_3 value: 79.833 - type: map_at_5 value: 80.88300000000001 - type: mrr_at_1 value: 72.0 - type: mrr_at_10 value: 81.2940476190476 - type: mrr_at_100 value: 81.42843497082626 - type: mrr_at_1000 value: 81.42843497082626 - type: mrr_at_20 value: 81.3849567099567 - type: mrr_at_3 value: 79.83333333333334 - type: mrr_at_5 value: 80.88333333333333 - type: nauc_map_at_1000_diff1 value: 49.54228699135176 - type: nauc_map_at_1000_max value: 0.06067639126124744 - type: nauc_map_at_100_diff1 value: 49.54228699135176 - type: nauc_map_at_100_max value: 0.06067639126124744 - type: nauc_map_at_10_diff1 value: 49.53852081026105 - type: nauc_map_at_10_max value: 0.3053366025276881 - type: nauc_map_at_1_diff1 value: 46.31028693528697 - type: nauc_map_at_1_max value: -1.3144841269841376 - type: nauc_map_at_20_diff1 value: 49.506524393346865 - type: nauc_map_at_20_max value: -0.08620516816116486 - type: nauc_map_at_3_diff1 value: 51.19253694434259 - type: nauc_map_at_3_max value: 0.668963341320456 - type: nauc_map_at_5_diff1 value: 49.37737335974092 - type: nauc_map_at_5_max value: -0.33590080679527184 - type: nauc_mrr_at_1000_diff1 value: 49.54228699135176 - type: nauc_mrr_at_1000_max value: 0.06067639126124744 - type: nauc_mrr_at_100_diff1 value: 49.54228699135176 - type: nauc_mrr_at_100_max value: 0.06067639126124744 - type: nauc_mrr_at_10_diff1 value: 49.53852081026105 - type: nauc_mrr_at_10_max value: 0.3053366025276881 - type: nauc_mrr_at_1_diff1 value: 46.31028693528697 - type: nauc_mrr_at_1_max value: -1.3144841269841376 - type: nauc_mrr_at_20_diff1 value: 49.506524393346865 - type: nauc_mrr_at_20_max value: -0.08620516816116486 - type: nauc_mrr_at_3_diff1 value: 51.19253694434259 - type: nauc_mrr_at_3_max value: 0.668963341320456 - type: nauc_mrr_at_5_diff1 value: 49.37737335974092 - type: nauc_mrr_at_5_max value: -0.33590080679527184 - type: nauc_ndcg_at_1000_diff1 value: 49.6848382380357 - type: nauc_ndcg_at_1000_max value: 0.04870501937096382 - type: nauc_ndcg_at_100_diff1 value: 49.6848382380357 - type: nauc_ndcg_at_100_max value: 0.04870501937096382 - type: nauc_ndcg_at_10_diff1 value: 49.57645777272915 - type: nauc_ndcg_at_10_max 
value: 0.6430420679440534 - type: nauc_ndcg_at_1_diff1 value: 46.31028693528697 - type: nauc_ndcg_at_1_max value: -1.3144841269841376 - type: nauc_ndcg_at_20_diff1 value: 49.45017977018584 - type: nauc_ndcg_at_20_max value: -0.9049646537819854 - type: nauc_ndcg_at_3_diff1 value: 52.898658060430904 - type: nauc_ndcg_at_3_max value: 1.3070987858400047 - type: nauc_ndcg_at_5_diff1 value: 49.054456158711595 - type: nauc_ndcg_at_5_max value: -1.177736876794348 - type: nauc_precision_at_1000_diff1 value: nan - type: nauc_precision_at_1000_max value: nan - type: nauc_precision_at_100_diff1 value: nan - type: nauc_precision_at_100_max value: nan - type: nauc_precision_at_10_diff1 value: 45.611577964519334 - type: nauc_precision_at_10_max value: 7.936507936508234 - type: nauc_precision_at_1_diff1 value: 46.31028693528697 - type: nauc_precision_at_1_max value: -1.3144841269841376 - type: nauc_precision_at_20_diff1 value: 35.80765639589114 - type: nauc_precision_at_20_max value: -56.34920634920767 - type: nauc_precision_at_3_diff1 value: 61.56395891690006 - type: nauc_precision_at_3_max value: 4.509803921568394 - type: nauc_precision_at_5_diff1 value: 43.15592903828254 - type: nauc_precision_at_5_max value: -11.783380018673482 - type: nauc_recall_at_1000_diff1 value: nan - type: nauc_recall_at_1000_max value: nan - type: nauc_recall_at_100_diff1 value: nan - type: nauc_recall_at_100_max value: nan - type: nauc_recall_at_10_diff1 value: 45.61157796451899 - type: nauc_recall_at_10_max value: 7.9365079365084235 - type: nauc_recall_at_1_diff1 value: 46.31028693528697 - type: nauc_recall_at_1_max value: -1.3144841269841376 - type: nauc_recall_at_20_diff1 value: 35.80765639589109 - type: nauc_recall_at_20_max value: -56.34920634920657 - type: nauc_recall_at_3_diff1 value: 61.5639589169002 - type: nauc_recall_at_3_max value: 4.509803921568655 - type: nauc_recall_at_5_diff1 value: 43.15592903828185 - type: nauc_recall_at_5_max value: -11.783380018674132 - type: ndcg_at_1 value: 72.0 - type: ndcg_at_10 value: 85.39999999999999 - type: ndcg_at_100 value: 85.897 - type: ndcg_at_1000 value: 85.897 - type: ndcg_at_20 value: 85.679 - type: ndcg_at_3 value: 82.44 - type: ndcg_at_5 value: 84.418 - type: precision_at_1 value: 72.0 - type: precision_at_10 value: 9.8 - type: precision_at_100 value: 1.0 - type: precision_at_1000 value: 0.1 - type: precision_at_20 value: 4.95 - type: precision_at_3 value: 30.0 - type: precision_at_5 value: 19.0 - type: recall_at_1 value: 72.0 - type: recall_at_10 value: 98.0 - type: recall_at_100 value: 100.0 - type: recall_at_1000 value: 100.0 - type: recall_at_20 value: 99.0 - type: recall_at_3 value: 90.0 - type: recall_at_5 value: 95.0 - task: type: Retrieval dataset: type: jinaai/xpqa name: MTEB XPQARetrieval (fr) config: fr split: test revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f metrics: - type: map_at_1 value: 39.007999999999996 - type: map_at_10 value: 60.319 - type: map_at_100 value: 61.644 - type: map_at_1000 value: 61.712 - type: map_at_20 value: 61.053000000000004 - type: map_at_3 value: 53.942 - type: map_at_5 value: 58.132 - type: mrr_at_1 value: 60.747663551401864 - type: mrr_at_10 value: 68.10636404094345 - type: mrr_at_100 value: 68.57873500135119 - type: mrr_at_1000 value: 68.60183171580495 - type: mrr_at_20 value: 68.36478690417064 - type: mrr_at_3 value: 66.08811748998662 - type: mrr_at_5 value: 67.12950600801062 - type: nauc_map_at_1000_diff1 value: 46.633293504114434 - type: nauc_map_at_1000_max value: 49.64101214126472 - type: nauc_map_at_100_diff1 value: 
46.591074735810764 - type: nauc_map_at_100_max value: 49.60435205919251 - type: nauc_map_at_10_diff1 value: 46.43486044009182 - type: nauc_map_at_10_max value: 49.222376322201065 - type: nauc_map_at_1_diff1 value: 54.05448738315762 - type: nauc_map_at_1_max value: 24.89423418246206 - type: nauc_map_at_20_diff1 value: 46.5131815367993 - type: nauc_map_at_20_max value: 49.59934686413147 - type: nauc_map_at_3_diff1 value: 48.9688624037045 - type: nauc_map_at_3_max value: 42.49957358403678 - type: nauc_map_at_5_diff1 value: 46.99183039261338 - type: nauc_map_at_5_max value: 47.53639129265315 - type: nauc_mrr_at_1000_diff1 value: 55.19161872624107 - type: nauc_mrr_at_1000_max value: 57.00358990989949 - type: nauc_mrr_at_100_diff1 value: 55.18922613003231 - type: nauc_mrr_at_100_max value: 56.99910514727928 - type: nauc_mrr_at_10_diff1 value: 55.030904237371224 - type: nauc_mrr_at_10_max value: 56.99207425207498 - type: nauc_mrr_at_1_diff1 value: 57.34582863547949 - type: nauc_mrr_at_1_max value: 56.98349812853321 - type: nauc_mrr_at_20_diff1 value: 55.0844531407365 - type: nauc_mrr_at_20_max value: 57.00264208604279 - type: nauc_mrr_at_3_diff1 value: 55.64735827895618 - type: nauc_mrr_at_3_max value: 57.29703659670222 - type: nauc_mrr_at_5_diff1 value: 55.38932070005733 - type: nauc_mrr_at_5_max value: 57.25407452051235 - type: nauc_ndcg_at_1000_diff1 value: 48.35929794131023 - type: nauc_ndcg_at_1000_max value: 52.522778834890325 - type: nauc_ndcg_at_100_diff1 value: 47.56861995950162 - type: nauc_ndcg_at_100_max value: 51.88529740719871 - type: nauc_ndcg_at_10_diff1 value: 46.87985909159253 - type: nauc_ndcg_at_10_max value: 51.38385177949589 - type: nauc_ndcg_at_1_diff1 value: 57.34582863547949 - type: nauc_ndcg_at_1_max value: 56.98349812853321 - type: nauc_ndcg_at_20_diff1 value: 47.02262106267654 - type: nauc_ndcg_at_20_max value: 51.98897867452814 - type: nauc_ndcg_at_3_diff1 value: 49.03406876232345 - type: nauc_ndcg_at_3_max value: 51.51024175391598 - type: nauc_ndcg_at_5_diff1 value: 47.91197516404686 - type: nauc_ndcg_at_5_max value: 49.81634328349405 - type: nauc_precision_at_1000_diff1 value: -17.71701381388244 - type: nauc_precision_at_1000_max value: 21.388531798131364 - type: nauc_precision_at_100_diff1 value: -15.539336210367946 - type: nauc_precision_at_100_max value: 24.698569356368356 - type: nauc_precision_at_10_diff1 value: -4.847564718245167 - type: nauc_precision_at_10_max value: 36.89232193362945 - type: nauc_precision_at_1_diff1 value: 57.34582863547949 - type: nauc_precision_at_1_max value: 56.98349812853321 - type: nauc_precision_at_20_diff1 value: -8.665616069723095 - type: nauc_precision_at_20_max value: 33.10610080847679 - type: nauc_precision_at_3_diff1 value: 11.05940284130611 - type: nauc_precision_at_3_max value: 46.70869419036014 - type: nauc_precision_at_5_diff1 value: 1.5834393191629197 - type: nauc_precision_at_5_max value: 41.862527913783865 - type: nauc_recall_at_1000_diff1 value: -10.569638499832076 - type: nauc_recall_at_1000_max value: 8.749844537815326 - type: nauc_recall_at_100_diff1 value: 27.487071443233262 - type: nauc_recall_at_100_max value: 35.4122293490153 - type: nauc_recall_at_10_diff1 value: 35.91432645476102 - type: nauc_recall_at_10_max value: 44.19204765665408 - type: nauc_recall_at_1_diff1 value: 54.05448738315762 - type: nauc_recall_at_1_max value: 24.89423418246206 - type: nauc_recall_at_20_diff1 value: 34.4589376419412 - type: nauc_recall_at_20_max value: 44.798804712797136 - type: nauc_recall_at_3_diff1 value: 45.60515874084114 - 
type: nauc_recall_at_3_max value: 39.109212749550764 - type: nauc_recall_at_5_diff1 value: 41.35313462606428 - type: nauc_recall_at_5_max value: 43.39841866434438 - type: ndcg_at_1 value: 60.748000000000005 - type: ndcg_at_10 value: 66.232 - type: ndcg_at_100 value: 70.745 - type: ndcg_at_1000 value: 71.87 - type: ndcg_at_20 value: 67.977 - type: ndcg_at_3 value: 61.06399999999999 - type: ndcg_at_5 value: 62.744 - type: precision_at_1 value: 60.748000000000005 - type: precision_at_10 value: 15.434000000000001 - type: precision_at_100 value: 1.924 - type: precision_at_1000 value: 0.207 - type: precision_at_20 value: 8.378 - type: precision_at_3 value: 37.161 - type: precision_at_5 value: 26.916 - type: recall_at_1 value: 39.007999999999996 - type: recall_at_10 value: 74.929 - type: recall_at_100 value: 92.508 - type: recall_at_1000 value: 99.8 - type: recall_at_20 value: 80.343 - type: recall_at_3 value: 58.367000000000004 - type: recall_at_5 value: 66.446 license: apache-2.0 language: - fr - en --- # [bilingual-embedding-large](https://huggingface.co/Lajavaness/bilingual-embedding-large) Bilingual-embedding is the Embedding Model for bilingual language: french and english. This model is a specialized sentence-embedding trained specifically for the bilingual language, leveraging the robust capabilities of [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-large), a pre-trained language model based on the [XLM-RoBERTa](https://huggingface.co/FacebookAI/xlm-roberta-large) architecture. The model utilizes xlm-roberta to encode english-french sentences into a 1024-dimensional vector space, facilitating a wide range of applications from semantic search to text clustering. The embeddings capture the nuanced meanings of english-french sentences, reflecting both the lexical and contextual layers of the language. ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BilingualModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Training and Fine-tuning process #### Stage 1: NLI Training - Dataset: [(SNLI+XNLI) for english+french] - Method: Training using Multi-Negative Ranking Loss. This stage focused on improving the model's ability to discern and rank nuanced differences in sentence semantics. ### Stage 3: Continued Fine-tuning for Semantic Textual Similarity on STS Benchmark - Dataset: [STSB-fr and en] - Method: Fine-tuning specifically for the semantic textual similarity benchmark using Siamese BERT-Networks configured with the 'sentence-transformers' library. ### Stage 4: Advanced Augmentation Fine-tuning - Dataset: STSB with generate [silver sample from gold sample](https://www.sbert.net/examples/training/data_augmentation/README.html) - Method: Employed an advanced strategy using [Augmented SBERT](https://arxiv.org/abs/2010.08240) with Pair Sampling Strategies, integrating both Cross-Encoder and Bi-Encoder models. This stage further refined the embeddings by enriching the training data dynamically, enhancing the model's robustness and accuracy. 
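As a concrete illustration of the Stage 1 recipe above, here is a minimal sketch of Multiple Negatives Ranking Loss training with the `sentence-transformers` library. The base checkpoint, example pair, and hyperparameters are illustrative assumptions, not the actual training script:

```python
# Illustrative sketch of the Stage 1 setup (Multiple Negatives Ranking Loss on NLI pairs);
# the actual training data and hyperparameters for this model are not published here.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("FacebookAI/xlm-roberta-large")  # base encoder, per the card

# (anchor, positive) pairs, e.g. premise/entailed-hypothesis pairs from SNLI/XNLI.
train_examples = [
    InputExample(texts=["Paris est une capitale de la France",
                        "Paris is a capital of France"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# In-batch negatives: every other example in the batch serves as a negative.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```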
## Usage:

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Paris est une capitale de la France", "Paris is a capital of France"]

model = SentenceTransformer('Lajavaness/bilingual-embedding-large', trust_remote_code=True)
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation

TODO

## Citation

```bibtex
@article{conneau2019unsupervised,
  title={Unsupervised cross-lingual representation learning at scale},
  author={Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin},
  journal={arXiv preprint arXiv:1911.02116},
  year={2019}
}
```

```bibtex
@article{reimers2019sentence,
  title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author={Reimers, Nils and Gurevych, Iryna},
  journal={https://arxiv.org/abs/1908.10084},
  year={2019}
}
```

```bibtex
@article{thakur2020augmented,
  title={Augmented SBERT: Data Augmentation Method for Improving Bi-Encoders for Pairwise Sentence Scoring Tasks},
  author={Thakur, Nandan and Reimers, Nils and Daxenberger, Johannes and Gurevych, Iryna},
  journal={arXiv e-prints},
  pages={arXiv--2010},
  year={2020}
}
```
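As a follow-up to the usage snippet above, the embeddings can be compared directly; this short sketch uses `util.cos_sim` from the `sentence-transformers` library and is illustrative, not part of the original card:

```python
# Compare the French and English sentences from the example above; a high cosine
# similarity is expected since the model aligns the two languages in one vector space.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Lajavaness/bilingual-embedding-large', trust_remote_code=True)
embeddings = model.encode(["Paris est une capitale de la France",
                           "Paris is a capital of France"])
print(util.cos_sim(embeddings[0], embeddings[1]))
```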
prabal123/Llama-nl-sql2
prabal123
2024-06-26T10:21:47Z
621
0
null
[ "gguf", "region:us" ]
null
2024-06-26T08:45:54Z
Entry not found
timm/vit_small_r26_s32_224.augreg_in21k_ft_in1k
timm
2023-05-06T00:52:24Z
620
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2106.10270", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
2022-12-23T00:33:45Z
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_small_r26_s32_224.augreg_in21k_ft_in1k

A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 36.4
  - GMACs: 3.5
  - Activations (M): 9.4
  - Image size: 224 x 224
- **Papers:**
  - How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('vit_small_r26_s32_224.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'vit_small_r26_s32_224.augreg_in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 384) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

## Citation
```bibtex
@article{steiner2021augreg,
  title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
  author={Steiner, Andreas and Kolesnikov, Alexander and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
  journal={arXiv preprint arXiv:2106.10270},
  year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
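As a complement to the Model Comparison section above, related checkpoints can be enumerated programmatically through timm's model registry; the wildcard pattern below is an illustrative assumption:

```python
import timm

# List other pretrained ResNet-ViT hybrid variants in the registry to compare against;
# the fnmatch-style wildcard pattern here is illustrative.
for name in timm.list_models('vit_*_r26_*', pretrained=True):
    print(name)
```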
VegaKH/Ultraskin
VegaKH
2023-03-07T17:24:09Z
620
26
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "photographic", "skin", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-30T19:05:44Z
---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- photographic
- skin
- diffusers
inference: true
license: creativeml-openrail-m
---

# Ultraskin 0.9 - SD 2.1 768 finetuned on images of ultra-detailed human skin

This model will add a LOT of skin detail compared to SD 2.1. Sometimes this makes images look more realistic, sometimes less realistic!

**Prompting for this model is the same as 2.1 and can be tricky compared to 1.5.** Try these tags in your prompts: "teen", "twenties" or "twenty year old", "thirties" or "thirty year old", continued all the way to "eighties" or "eighty year old", "elderly", "moles", "freckles", "scar", "skin detail"

For stronger results, use the activation word **ultraskin** in your prompt. I recommend using this token toward the end of your prompt, as it can lead to exaggerated detail and plastic-looking skin.

Recommended negative prompts (if you want photographic results): *rendered, 3d, blender, octane, unreal engine, video game character, cat*

This model is based on the SD 2.1 768 model and was trained on 768px x 768px images, so you will get better results creating images at least this size.

# Sample outputs:
![Samples](https://huggingface.co/VegaKH/Ultraskin/resolve/main/preview1.jpg)
![Samples](https://huggingface.co/VegaKH/Ultraskin/resolve/main/preview2.jpg)
![Samples](https://huggingface.co/VegaKH/Ultraskin/resolve/main/preview3.jpg)

### 🧨 Diffusers

This model can be used just like any other Stable Diffusion 2.1 model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).

If using this with a web UI that requires a YAML file, download the YAML file and place it next to the model.
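Since the card stops short of a code snippet, here is a minimal 🧨 Diffusers sketch; the prompt and negative prompt are illustrative and simply follow the recommendations above:

```python
# Minimal sketch, assuming a CUDA GPU; the prompt and negative prompt are illustrative
# and follow the card's recommendations (activation word at the end, 768x768 output).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "VegaKH/Ultraskin", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="photo of a thirty year old man, freckles, skin detail, ultraskin",
    negative_prompt="rendered, 3d, blender, octane, unreal engine, video game character, cat",
    width=768,
    height=768,
).images[0]
image.save("ultraskin_sample.png")
```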
jfiekdjdk/gpt2-furry-prompt-gen
jfiekdjdk
2023-01-24T11:25:22Z
620
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-01-23T15:26:16Z
### WARNING: this model is overfitting!!!
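With that caveat in mind, the model can still be tried with a standard 🤗 Transformers text-generation pipeline. This is a minimal sketch; the seed string and sampling settings are purely illustrative:

```python
from transformers import pipeline

# Load this repository as a standard GPT-2 text-generation model.
generator = pipeline("text-generation", model="jfiekdjdk/gpt2-furry-prompt-gen")

# Generate a prompt continuation from an illustrative seed string.
result = generator("anthro fox, detailed fur,", max_new_tokens=40, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```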
nxnhjrjtbjfzhrovwl/limarp-llongma2-8k-gguf-f16
nxnhjrjtbjfzhrovwl
2023-08-24T01:10:34Z
620
2
null
[ "gguf", "arxiv:2305.11206", "license:agpl-3.0", "region:us" ]
null
2023-08-23T18:05:10Z
---
license: agpl-3.0
---

This repository contains the unquantized merge of [limarp-llongma2-8k lora](https://huggingface.co/lemonilia/limarp-llongma2-8k) in gguf format.

You can quantize the f16 gguf to the quantization of your choice by following the steps below:

1. Download and extract the [llama.cpp binaries](https://github.com/ggerganov/llama.cpp/releases/download/master-cf658ad/llama-master-cf658ad-bin-win-avx2-x64.zip) ([or compile it yourself if you're on Linux](https://github.com/ggerganov/llama.cpp#build))
2. Move the "quantize" executable to the same folder where you downloaded the f16 gguf model.
3. Open a command prompt window in that same folder and write the following command, making any changes that you see fit.
```bash
quantize.exe limarp-llongma2-13b.f16.gguf limarp-llongma2-13b.q4_0.gguf q4_0
```
4. Press enter to run the command and the quantized model will be generated in the folder.

Below are the contents of the original model card:

# Model Card for LimaRP-LLongMA2-8k-v2

LimaRP-LLongMA2-8k is an experimental [Llama2](https://huggingface.co/meta-llama) finetune narrowly focused on novel-style roleplay chatting, and a continuation of the previously released [LimaRP-llama2](https://huggingface.co/lemonilia/limarp-llama2) with a larger number of training tokens (+95%). To considerably facilitate uploading, distribution and merging with other models, LoRA adapters are provided.

LimaRP-LLongMA2 LoRA adapters, as their name suggests, are intended to be applied on LLongMA-2 models with 8k context ([7B](https://huggingface.co/conceptofmind/LLongMA-2-7b) and [13B](https://huggingface.co/conceptofmind/LLongMA-2-13b)) and their derivatives.

Data updates may be posted in the future. The current version is **v3**.

## Model Details

### Model Description

This is an experimental attempt at creating an RP-oriented fine-tune using a manually-curated, high-quality dataset of human-generated conversations. The main rationale for this comes from the observations of [Zhou et al.](https://arxiv.org/abs/2305.11206). The authors suggested that just 1000-2000 carefully curated training examples may yield high quality output for assistant-type chatbots.

This is in contrast with the commonly employed strategy where a very large number of training examples (tens of thousands to even millions) of widely varying quality are used. For LimaRP a similar approach was used, with the difference that the conversational data is almost entirely human-generated. Every training example is manually compiled and selected to comply with subjective quality parameters, with virtually no chance for OpenAI-style alignment responses to come up.

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The model is intended to approximate the experience of 1-on-1 roleplay as observed on many Internet forums dedicated to roleplaying. It _must_ be used with a specific format similar to that of this template:

```
<<SYSTEM>>
Character's Persona: {bot character description}

User's Persona: {user character description}

Scenario: {what happens in the story}

Play the role of Character. You must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User. Character should respond with messages of medium length.

<<AIBOT>>
Character: {utterance}

<<HUMAN>>
User: {utterance}

[etc.]
``` With `<<SYSTEM>>`, `<<AIBOT>>` and `<<HUMAN>>` being special instruct-mode sequences. The text under curly braces must be replaced with appropriate text in _natural language_. Replace `User` and `Character` with actual character names. This more graphical breakdown of the prompt format with a practical example might make it clearer: ![graphical explanation](https://files.catbox.moe/fq8ner.png) ### More detailed notes on prompt format, usage and other settings - **The model has been tested mainly using Oobabooga's `text-generation-webui` as a backend** - Preferably respect spacing and newlines shown above. This might not be possible yet with some frontends. - Replace `Character` and `User` in the above template with your desired names. - The scenario description has a large influence on what the character will do. Try to keep it more open-ended to lessen its impact. - **The model expects users and characters to use third-person narration in simple past and enclose dialogues with standard quotation marks `" "`.** Other formats are not supported (= not in the training data). - Do not use newlines in Persona and Scenario. Use natural language. - The last line in `<<SYSTEM>>` does not need to be written exactly as depicted, but should mention that `Character` and `User` will engage in roleplay and specify the length of `Character`'s messages - The message lengths used during training are: `tiny`, `short`, `average`, `long`, `huge`, `humongous`. However, there might not have been enough training examples for each length for this instruction to have a significant impact. The preferred lengths for this type of role-playing are `average` or `long`. - Suggested text generation settings: - Temperature ~0.70 - Tail-Free Sampling 0.85 - Repetition penalty ~1.10 (Compared to LLaMAv1, Llama2 appears to require a somewhat higher rep.pen.) - Not used: Top-P (disabled/set to 1.0), Top-K (disabled/set to 0), Typical P (disabled/set to 1.0) ### Sample character cards Here are a few example **SillyTavern character cards** following the required format; download and import into SillyTavern. Feel free to modify and adapt them to your purposes. - [Carina, a 'big sister' android maid](https://files.catbox.moe/1qcqqj.png) - [Charlotte, a cute android maid](https://files.catbox.moe/k1x9a7.png) - [Etma, an 'aligned' AI assistant](https://files.catbox.moe/dj8ggi.png) - [Mila, an anthro pet catgirl](https://files.catbox.moe/amnsew.png) - [Samuel, a handsome vampire](https://files.catbox.moe/f9uiw1.png) And here is a sample of how the model is intended to behave with proper chat and prompt formatting: https://files.catbox.moe/egfd90.png ### Other tips It's possible to make the model automatically generate random character information and scenario by adding just `<<SYSTEM>>` and the character name in text completion mode in `text-generation-webui`, as done here (click to enlarge). The format generally closely matches that of the training data: ![example](https://files.catbox.moe/5ntmcj.png) ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> The model has not been tested for: - IRC-style chat - Markdown-style roleplay (asterisks for actions, dialogue lines without quotation marks) - Storywriting - Usage without the suggested prompt format Furthermore, the model is not intended nor expected to provide factual and accurate information on any subject. 
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

The model may easily output disturbing and socially inappropriate content and therefore should not be used by minors or within environments where a general audience is expected. Its outputs will have in general a strong NSFW bias unless the character card/description de-emphasizes it.

## How to Get Started with the Model

Download and load with `text-generation-webui` as a back-end application. It's suggested to start the `webui` via command line. Assuming you have copied the LoRA files under a subdirectory called `lora/limarp-llongma2-7b`, you would use something like this for the 7B model:

```
python server.py --api --verbose --model LLongMA-7B --lora limarp-llongma2-7b
```

When using 4-bit `bitsandbytes` it is suggested to use double quantization to increase accuracy. The starting command may be something like this:

```
python server.py --verbose --api --model LLongMA-2-13B --lora limarp13-llongma2-13b --load-in-4bit --use_double_quant
```

Then, preferably use [SillyTavern](https://github.com/SillyTavern/SillyTavern) as a front-end using the following settings:

![SillyTavern settings](https://files.catbox.moe/nd8v12.png)

In addition to enabling the instruct mode with the correct sequences, it's particularly important to **enable "Include names"**, as the model was trained with them at the start of each utterance. If it's disabled, the model can get confused and often write for the user in its responses.

To take advantage of this model's larger context length, unlock the context size and set it to any length up to 8192 tokens, depending on your VRAM constraints. On most consumer GPUs this will likely need to be set to a lower value.

![Unlock context size](https://files.catbox.moe/wfj8vv.png)

It is **recommended to ban/disable the EOS token**, as it can apparently cause [artifacts or tokenization issues](https://files.catbox.moe/cxfrzu.png) when it is generated close to punctuation or quotation marks, at least in SillyTavern. These would typically happen with AI responses.

![Ban EOS](https://files.catbox.moe/xslnhb.png)

## Training Details

### Training Data

The training data comprises about **1500** manually edited roleplaying conversation threads from various Internet RP forums, for about **24 megabytes** of data.

Character and Scenario information was initially filled in for every thread with the help of mainly `gpt-4`. Later on this has been accomplished with a custom summarizer. Conversations in the dataset are almost entirely human-generated except for a handful of messages. Character names in the RP stories have been isolated and replaced with standard placeholder strings. Usernames, out-of-context (OOC) messages and personal information have not been intentionally included.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The version of LimaRP uploaded to this repository was trained using a small NVIDIA A40 cluster in 8-bit with regular LoRA adapters and an 8-bit AdamW optimizer.
#### Training Hyperparameters

The most important settings were as follows:

- --learning_rate 0.000065
- --lr_scheduler_type cosine
- --lora_r 8
- --lora_alpha 16
- --lora_dropout 0.01
- --num_train_epochs 2
- --bf16 True
- --tf32 True
- --bits 8
- --per_device_train_batch_size 1
- --gradient_accumulation_steps 1
- --optim adamw_bnb_8bit

**All linear LoRA layers** were targeted.

An effective batch size of 1 was found to yield the lowest loss curves during fine-tuning. It was also found that using `--train_on_source False` with the entire training example at the output yields similar results. These LoRAs have been trained in this way (similar to what was done with [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), or to unsupervised finetuning).

<!-- ## Evaluation -->

<!-- This section describes the evaluation protocols and provides the results. -->

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Finetuning this model on 8 NVIDIA A40 48GB GPUs in parallel takes about 25 minutes (7B) or 45 minutes (13B).
stablediffusionapi/epicrealismnaturalsi
stablediffusionapi
2023-08-28T17:42:24Z
620
0
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-28T17:40:36Z
---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---

# epiCRealismNaturalsin API Inference

![generated from stablediffusionapi.com](https://cdn2.stablediffusionapi.com/generations/15658281221693244365.png)

## Get API Key

Get API key from [Stable Diffusion API](http://stablediffusionapi.com/), No Payment needed.

Replace Key in below code, change **model_id** to "epicrealismnaturalsi"

Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)

Try model for free: [Generate Images](https://stablediffusionapi.com/models/epicrealismnaturalsi)

Model link: [View model](https://stablediffusionapi.com/models/epicrealismnaturalsi)

Credits: [View credits](https://civitai.com/?query=epiCRealismNaturalsin)

View all models: [View Models](https://stablediffusionapi.com/models)

```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "epicrealismnaturalsi",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
QizhiPei/biot5-base-text2mol
QizhiPei
2024-01-07T08:48:11Z
620
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "dataset:QizhiPei/BioT5_finetune_dataset", "arxiv:2310.07276", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2023-10-28T16:49:39Z
---
license: mit
datasets:
- QizhiPei/BioT5_finetune_dataset
language:
- en
---

## Example Usage

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
import selfies as sf

# Load the fine-tuned BioT5 text-to-molecule checkpoint.
tokenizer = T5Tokenizer.from_pretrained("QizhiPei/biot5-base-text2mol", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('QizhiPei/biot5-base-text2mol')

# The model expects an instruction-style prompt: a task definition followed by the input description.
task_definition = 'Definition: You are given a molecule description in English. Your job is to generate the molecule SELFIES that fits the description.\n\n'
text_input = 'The molecule is a monocarboxylic acid anion obtained by deprotonation of the carboxy and sulfino groups of 3-sulfinopropionic acid. Major microspecies at pH 7.3 It is an organosulfinate oxoanion and a monocarboxylic acid anion. It is a conjugate base of a 3-sulfinopropionic acid.'
task_input = f'Now complete the following example -\nInput: {text_input}\nOutput: '

model_input = task_definition + task_input
input_ids = tokenizer(model_input, return_tensors="pt").input_ids

generation_config = model.generation_config
generation_config.max_length = 512
generation_config.num_beams = 1

# Generate SELFIES tokens and strip the spaces used as token separators.
outputs = model.generate(input_ids, generation_config=generation_config)
output_selfies = tokenizer.decode(outputs[0], skip_special_tokens=True).replace(' ', '')
print(output_selfies)

# Convert the generated SELFIES string to SMILES.
output_smiles = sf.decoder(output_selfies)
print(output_smiles)
```

## References

For more information, please refer to our paper and GitHub repository.

Paper: [BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations](https://arxiv.org/abs/2310.07276)

GitHub: [BioT5](https://github.com/QizhiPei/BioT5)

Authors: *Qizhi Pei, Wei Zhang, Jinhua Zhu, Kehan Wu, Kaiyuan Gao, Lijun Wu, Yingce Xia, and Rui Yan*
ntc-ai/SDXL-LoRA-slider.nice-hands
ntc-ai
2024-01-20T19:23:09Z
620
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-20T19:23:05Z
---
language:
- en
thumbnail: "images/evaluate/nice hands...poorly drawn hands/nice hands_17_3.0.png"
widget:
- text: nice hands
  output:
    url: images/nice hands_17_3.0.png
- text: nice hands
  output:
    url: images/nice hands_19_3.0.png
- text: nice hands
  output:
    url: images/nice hands_20_3.0.png
- text: nice hands
  output:
    url: images/nice hands_21_3.0.png
- text: nice hands
  output:
    url: images/nice hands_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "nice hands"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---

# ntcai.xyz slider - nice hands (SDXL LoRA)

| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/nice hands_17_-3.0.png" width=256 height=256 /> | <img src="images/nice hands_17_0.0.png" width=256 height=256 /> | <img src="images/nice hands_17_3.0.png" width=256 height=256 /> |
| <img src="images/nice hands_19_-3.0.png" width=256 height=256 /> | <img src="images/nice hands_19_0.0.png" width=256 height=256 /> | <img src="images/nice hands_19_3.0.png" width=256 height=256 /> |
| <img src="images/nice hands_20_-3.0.png" width=256 height=256 /> | <img src="images/nice hands_20_0.0.png" width=256 height=256 /> | <img src="images/nice hands_20_3.0.png" width=256 height=256 /> |

## Download

Weights for this model are available in Safetensors format.

## Trigger words

You can apply this LoRA with trigger words for additional effect:

```
nice hands
```

## Use in diffusers

```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch

pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.nice-hands', weight_name='nice hands.safetensors', adapter_name="nice hands")

# Activate the LoRA
pipe.set_adapters(["nice hands"], adapter_weights=[2.0])

prompt = "medieval rich kingpin sitting in a tavern, nice hands"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```

## Support the Patreon

If you like this model, please consider [joining our Patreon](https://www.patreon.com/NTCAI).

By joining our Patreon, you'll gain access to an ever-growing library of more than 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.

Your support on Patreon will allow us to continue developing and refining new models.

## Other resources

- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
EPFL-VILAB/4M-7-SR_L_CC12M
EPFL-VILAB
2024-06-14T08:32:02Z
620
1
ml-4m
[ "ml-4m", "safetensors", "arxiv:2312.06647", "arxiv:2406.09406", "license:other", "region:us" ]
null
2024-03-25T14:48:21Z
---
license: other
license_name: sample-code-license
license_link: LICENSE
library_name: ml-4m
---

# 4M: Massively Multimodal Masked Modeling

*A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.*

[`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation)

Official implementation and pre-trained models for:

[**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br>
*[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*

[**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br>
*[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)*

4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models. We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21).

## Installation

For install instructions, please see https://github.com/apple/ml-4m.

## Usage

This model can be loaded from Hugging Face Hub as follows:

```python
from fourm.models.fm import FM
fm = FM.from_pretrained('EPFL-VILAB/4M-7-SR_L_CC12M')
```

Please see https://github.com/apple/ml-4m/blob/main/README_GENERATION.md for more detailed instructions and https://github.com/apple/ml-4m for other 4M model and tokenizer checkpoints.

## Citation

If you find this repository helpful, please consider citing our work:

```
@inproceedings{4m,
    title={{4M}: Massively Multimodal Masked Modeling},
    author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir},
    booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
    year={2023},
}

@article{4m21,
    title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities},
    author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir},
    journal={arXiv 2024},
    year={2024},
}
```

## License

The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
mradermacher/IceLemonTeaRP-32k-7b-GGUF
mradermacher
2024-05-06T05:19:37Z
620
9
transformers
[ "transformers", "gguf", "mergekit", "merge", "alpaca", "mistral", "not-for-all-audiences", "nsfw", "en", "base_model:icefog72/IceLemonTeaRP-32k-7b", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-03T23:31:28Z
--- base_model: icefog72/IceLemonTeaRP-32k-7b language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge - alpaca - mistral - not-for-all-audiences - nsfw --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/IceLemonTeaRP-32k-7b-GGUF/resolve/main/IceLemonTeaRP-32k-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
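If you want a quick local test from Python rather than a GUI, here is a minimal llama-cpp-python sketch. It assumes you have already downloaded one of the GGUF files listed above (the Q4_K_M quant is used as an example) to the current directory; the prompt and context size are illustrative only:

```python
from llama_cpp import Llama

# Load one of the provided quants; lower n_ctx to save memory on smaller machines.
llm = Llama(model_path="IceLemonTeaRP-32k-7b.Q4_K_M.gguf", n_ctx=4096)

# Run a short completion as a smoke test.
output = llm("Write a one-sentence greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```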
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
briaai/BRIA-2.3-ControlNet-BG-Gen
briaai
2024-06-23T06:25:54Z
620
15
diffusers
[ "diffusers", "safetensors", "text-to-image", "controlnet model", "legal liability", "commercial use", "license:other", "region:us" ]
text-to-image
2024-04-16T07:48:38Z
---
license: other
license_name: bria-2.3
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
inference: false
tags:
- text-to-image
- controlnet model
- legal liability
- commercial use
extra_gated_description: BRIA 2.3 ControlNet-Background Generation requires access to BRIA 2.3 Text-to-Image model
extra_gated_heading: "Fill in this form to get access"
extra_gated_fields:
  Name:
    type: text
  Company/Org name:
    type: text
  Org Type (Early/Growth Startup, Enterprise, Academy):
    type: text
  Role:
    type: text
  Country:
    type: text
  Email:
    type: text
  By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below:
    type: checkbox
---

# BRIA-2.3-ControlNet-Background-Generation, Model Card

BRIA 2.3 ControlNet-Background Generation, trained on the foundation of [BRIA 2.3 Text-to-Image](https://huggingface.co/briaai/BRIA-2.3), enables the generation of high-quality images guided by a textual prompt and the extracted background mask estimation from an input image. This allows for the creation of different background variations of an image, all sharing the same foreground.

[BRIA 2.3](https://huggingface.co/briaai/BRIA-2.3) was trained from scratch exclusively on licensed data from our esteemed data partners. Therefore, it is safe for commercial use and provides full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.

![examples](bg_img.png)

### Model Description

- **Developed by:** BRIA AI
- **Model type:** [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) for Latent diffusion
- **License:** [bria-2.3](https://bria.ai/bria-huggingface-model-license-agreement/)
- **Model Description:** ControlNet Background-Generation for the BRIA 2.3 Text-to-Image model. The model generates images guided by text and the background mask.
- **Resources for more information:** [BRIA AI](https://bria.ai/)

### Get Access

BRIA 2.3 ControlNet-Background Generation requires access to BRIA 2.3 Text-to-Image. For more information, [click here](https://huggingface.co/briaai/BRIA-2.3).

## Usage

Installation

```bash
pip install -qr https://huggingface.co/briaai/BRIA-2.3-ControlNet-BG-Gen/resolve/main/requirements.txt
huggingface-cli download briaai/BRIA-2.3-ControlNet-BG-Gen --include replace_bg/* --local-dir . --quiet
```

```python
import torch
from diffusers import (
    AutoencoderKL,
    EulerAncestralDiscreteScheduler,
)
from diffusers.utils import load_image
from replace_bg.model.pipeline_controlnet_sd_xl import StableDiffusionXLControlNetPipeline
from replace_bg.model.controlnet import ControlNetModel
from replace_bg.utilities import resize_image, remove_bg_from_image, paste_fg_over_image, get_control_image_tensor

# Load the ControlNet, the fp16-safe VAE, and the BRIA 2.3 base pipeline.
controlnet = ControlNetModel.from_pretrained("briaai/BRIA-2.3-ControlNet-BG-Gen", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained("briaai/BRIA-2.3", controlnet=controlnet, torch_dtype=torch.float16, vae=vae).to('cuda:0')
pipe.scheduler = EulerAncestralDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    num_train_timesteps=1000,
    steps_offset=1
)

# Load the input image, estimate its background mask, and build the conditioning tensor.
image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
image = load_image(image_path)
image = resize_image(image)
mask = remove_bg_from_image(image_path)
control_tensor = get_control_image_tensor(pipe.vae, image, mask)

# Generate a new background guided by the prompt, then paste the original foreground back on top.
prompt = "in a zoo"
negative_prompt = "Logo,Watermark,Text,Ugly,Bad proportions,Bad quality,Out of frame,Mutation"
generator = torch.Generator(device="cuda:0").manual_seed(0)
gen_img = pipe(
    negative_prompt=negative_prompt,
    prompt=prompt,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=50,
    image=control_tensor,
    generator=generator,
).images[0]
result_image = paste_fg_over_image(gen_img, image, mask)
```
kuei1026/3d-icon-sdxl-dora
kuei1026
2024-04-24T14:32:55Z
620
1
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-04-16T11:58:23Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>'
  output:
    url: "image_0.png"
- text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>'
  output:
    url: "image_1.png"
- text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>'
  output:
    url: "image_2.png"
- text: 'a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>'
  output:
    url: "image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 3d icon in the style of <s0><s1>
license: openrail++
---

# SDXL LoRA DreamBooth - kuei1026/3d-icon-sdxl-dora

<Gallery />

## Model description

### These are kuei1026/3d-icon-sdxl-dora LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

## Download model

### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke

- **LoRA**: download **[`3d-icon-sdxl-dora.safetensors` here 💾](/kuei1026/3d-icon-sdxl-dora/blob/main/3d-icon-sdxl-dora.safetensors)**.
  - Place it in your `models/Lora` folder.
  - On AUTOMATIC1111, load the LoRA by adding `<lora:3d-icon-sdxl-dora:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`3d-icon-sdxl-dora_emb.safetensors` here 💾](/kuei1026/3d-icon-sdxl-dora/blob/main/3d-icon-sdxl-dora_emb.safetensors)**.
  - Place it in your `embeddings` folder.
  - Use it by adding `3d-icon-sdxl-dora_emb` to your prompt. For example, `3d icon in the style of 3d-icon-sdxl-dora_emb` (you need both the LoRA and the embeddings as they were trained together for this LoRA)

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kuei1026/3d-icon-sdxl-dora', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='kuei1026/3d-icon-sdxl-dora', filename='3d-icon-sdxl-dora_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('a <s0><s1> icon of an astronaut riding a horse, in the style of <s0><s1>').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:

to trigger concept `TOK` → use `<s0><s1>` in your prompt

## Details

All [Files & versions](/kuei1026/3d-icon-sdxl-dora/tree/main).

The weights were trained using the [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).

LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
second-state/Yi-1.5-34B-Chat-GGUF
second-state
2024-07-02T10:12:13Z
620
4
transformers
[ "transformers", "gguf", "llama", "text-generation", "base_model:01-ai/Yi-1.5-34B-Chat", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-12T04:27:32Z
--- base_model: 01-ai/Yi-1.5-34B-Chat inference: false model_creator: 01-ai model_name: Yi-1.5-34B-Chat model_type: yi pipeline_tag: text-generation quantized_by: Second State Inc. license: apache-2.0 --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Yi-1.5-34B-Chat-GGUF ## Original Model [01-ai/Yi-1.5-34B-Chat](https://huggingface.co/01-ai/Yi-1.5-34B-Chat) ## Run with LlamaEdge <!-- - LlamaEdge version: [v0.10.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.10.0) and above --> - LlamaEdge version: coming soon - Prompt template - Prompt type: `chatml` - Prompt string ```text <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` - Reverse prompt: `<|im_end|>` - Context size: `7168` - Run as LlamaEdge service ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Yi-1.5-34B-Chat-Q5_K_M.gguf \ llama-api-server.wasm \ --prompt-template chatml \ --reverse-prompt "<|im_end|>" \ --ctx-size 7168 \ --model-name Yi-1.5-34B-Chat ``` - Run as LlamaEdge command app ```bash wasmedge --dir .:. --nn-preload default:GGML:AUTO:Yi-1.5-34B-Chat-Q5_K_M.gguf \ llama-chat.wasm \ --prompt-template chatml \ --reverse-prompt "<|im_end|>" \ --ctx-size 7168 ``` ## Quantized GGUF Models | Name | Quant method | Bits | Size | Use case | | ---- | ---- | ---- | ---- | ----- | | [Yi-1.5-34B-Chat-Q2_K.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q2_K.gguf) | Q2_K | 2 |12.8 GB| smallest, significant quality loss - not recommended for most purposes | | [Yi-1.5-34B-Chat-Q3_K_L.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q3_K_L.gguf) | Q3_K_L | 3 | 18.1 GB| small, substantial quality loss | | [Yi-1.5-34B-Chat-Q3_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q3_K_M.gguf) | Q3_K_M | 3 | 16.7 GB| very small, high quality loss | | [Yi-1.5-34B-Chat-Q3_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q3_K_S.gguf) | Q3_K_S | 3 | 15 GB| very small, high quality loss | | [Yi-1.5-34B-Chat-Q4_0.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q4_0.gguf) | Q4_0 | 4 | 19.5 GB| legacy; small, very high quality loss - prefer using Q3_K_M | | [Yi-1.5-34B-Chat-Q4_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q4_K_M.gguf) | Q4_K_M | 4 | 20.7 GB| medium, balanced quality - recommended | | [Yi-1.5-34B-Chat-Q4_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q4_K_S.gguf) | Q4_K_S | 4 | 19.6 GB| small, greater quality loss | | [Yi-1.5-34B-Chat-Q5_0.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q5_0.gguf) | Q5_0 | 5 | 23.7 GB| legacy; medium, balanced quality - prefer using Q4_K_M | | [Yi-1.5-34B-Chat-Q5_K_M.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q5_K_M.gguf) | Q5_K_M | 5 | 23.4 GB| large, very low quality loss - recommended | | [Yi-1.5-34B-Chat-Q5_K_S.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q5_K_S.gguf) | Q5_K_S | 5 | 23.7 GB| large, low quality loss - 
recommended | | [Yi-1.5-34B-Chat-Q6_K.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q6_K.gguf) | Q6_K | 6 | 28.3 GB| very large, extremely low quality loss | | [Yi-1.5-34B-Chat-Q8_0.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-Q8_0.gguf) | Q8_0 | 8 | 36.5 GB| very large, extremely low quality loss - not recommended | | [Yi-1.5-34B-Chat-f16-00001-of-00003.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-f16-00001-of-00003.gguf) | f16 | 16 | 32.2 GB| | | [Yi-1.5-34B-Chat-f16-00002-of-00003.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-f16-00002-of-00003.gguf) | f16 | 16 | 32.1 GB| | | [Yi-1.5-34B-Chat-f16-00003-of-00003.gguf](https://huggingface.co/second-state/Yi-1.5-34B-Chat-GGUF/blob/main/Yi-1.5-34B-Chat-f16-00003-of-00003.gguf) | f16 | 16 | 4.48 GB| | *Quantized with llama.cpp b3135*
jsincn/phi-3-mini-128k-instruct-awq
jsincn
2024-05-19T11:35:48Z
620
2
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "4-bit", "awq", "region:us" ]
text-generation
2024-05-16T08:42:56Z
---
license: mit
---
This repo includes a version of Phi-3 that was quantized to AWQ using AutoAWQ.

Currently, hosting via the TGI Docker image fails because TGI falls back to AutoModel, which is not compatible with AWQ. Hosting on vLLM is recommended.

To run the model you need to set the trust-remote-code (or similar) flag. While the remote code comes from Microsoft (see the LICENSE information in the file), you should validate the code yourself before deployment.
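For example, a minimal offline-inference sketch with vLLM might look like this; the prompt and sampling settings are illustrative only:

```python
from vllm import LLM, SamplingParams

# Load the AWQ-quantized checkpoint; trust_remote_code is required for Phi-3's custom code.
llm = LLM(
    model="jsincn/phi-3-mini-128k-instruct-awq",
    quantization="awq",
    trust_remote_code=True,
)

# Illustrative sampling settings and prompt.
params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain AWQ quantization in two sentences."], params)
print(outputs[0].outputs[0].text)
```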
UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF
UnclearPWR
2024-06-06T02:44:39Z
620
0
transformers
[ "transformers", "gguf", "climate", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:NuclearnAI/SPARK-mini-instruct", "endpoints_compatible", "region:us" ]
text-generation
2024-06-06T02:44:29Z
---
language:
- en
library_name: transformers
tags:
- climate
- llama-cpp
- gguf-my-repo
base_model: NuclearnAI/SPARK-mini-instruct
pipeline_tag: text-generation
---

# UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF

This model was converted to GGUF format from [`NuclearnAI/SPARK-mini-instruct`](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./main --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -p "The meaning to life and the universe is"
```

or

```
./server --hf-repo UnclearPWR/SPARK-mini-instruct-Q6_K-GGUF --hf-file spark-mini-instruct-q6_k.gguf -c 2048
```
openmmlab/upernet-convnext-large
openmmlab
2023-01-19T10:45:41Z
619
0
transformers
[ "transformers", "pytorch", "upernet", "vision", "image-segmentation", "en", "arxiv:1807.10221", "arxiv:2201.03545", "license:mit", "endpoints_compatible", "region:us" ]
image-segmentation
2023-01-13T14:27:35Z
---
language: en
license: mit
tags:
- vision
- image-segmentation
model_name: openmmlab/upernet-convnext-large
---

# UperNet, ConvNeXt large-sized backbone

UperNet framework for semantic segmentation, leveraging a ConvNeXt backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al.

Combining UperNet with a ConvNeXt backbone was introduced in the paper [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545).

Disclaimer: The team releasing UperNet + ConvNeXt did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).

Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.

![UperNet architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/upernet_architecture.jpg)

## Intended uses & limitations

You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for fine-tuned versions (with various backbones) on a task that interests you.

### How to use

For full code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation). A minimal sketch is shown below.
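The following sketch is adapted from the generic UperNet usage pattern in the Transformers docs; the test image URL is illustrative only:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

# Load the image processor and the UperNet + ConvNeXt large checkpoint.
processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-large")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-large")

# Illustrative test image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-pixel class map, resized back to the input resolution.
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)
```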
rahmanfadhil/indobert-finetuned-indonli
rahmanfadhil
2023-01-23T08:21:01Z
619
3
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "id", "dataset:indonli", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
2023-01-22T07:15:05Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- indonli
language:
- id
---

# indobert-finetuned-indonli

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.

It's based on the [IndoBERT](https://huggingface.co/indobenchmark/indobert-base-p2) model by [indobenchmark](https://github.com/IndoNLP/indonlu) and trained using the IndoNLI dataset on [Google Colab](https://colab.research.google.com/drive/1Qs1Eo5x5WsSCV6hU3MWbnl8hU3pZI4E5?usp=sharing).

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('rahmanfadhil/indobert-finetuned-indonli')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rahmanfadhil/indobert-finetuned-indonli')
model = AutoModel.from_pretrained('rahmanfadhil/indobert-finetuned-indonli')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=indobert-finetuned-indonli) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 646 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 64, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 65, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
TheBloke/StableBeluga-7B-GGUF
TheBloke
2023-09-27T12:48:05Z
619
3
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:conceptofmind/cot_submix_original", "dataset:conceptofmind/flan2021_submix_original", "dataset:conceptofmind/t0_submix_original", "dataset:conceptofmind/niv2_submix_original", "arxiv:2307.09288", "arxiv:2306.02707", "base_model:stabilityai/StableBeluga-7b", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-09-05T23:14:14Z
---
language:
- en
license: llama2
datasets:
- conceptofmind/cot_submix_original
- conceptofmind/flan2021_submix_original
- conceptofmind/t0_submix_original
- conceptofmind/niv2_submix_original
model_name: StableBeluga 7B
base_model: stabilityai/StableBeluga-7b
inference: false
model_creator: Stability AI
model_type: llama
pipeline_tag: text-generation
prompt_template: '### System:

  {system_message}


  ### User:

  {prompt}


  ### Assistant:

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# StableBeluga 7B - GGUF
- Model creator: [Stability AI](https://huggingface.co/stabilityai)
- Original model: [StableBeluga 7B](https://huggingface.co/stabilityai/StableBeluga-7b)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Stability AI's StableBeluga 7B](https://huggingface.co/stabilityai/StableBeluga-7b).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/StableBeluga-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF)
* [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga-7b)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Orca-Hashes

```
### System:
{system_message}

### User:
{prompt}

### Assistant:
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stablebeluga-7b.Q2_K.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [stablebeluga-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [stablebeluga-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [stablebeluga-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [stablebeluga-7b.Q4_0.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [stablebeluga-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [stablebeluga-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [stablebeluga-7b.Q5_0.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [stablebeluga-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [stablebeluga-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [stablebeluga-7b.Q6_K.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [stablebeluga-7b.Q8_0.gguf](https://huggingface.co/TheBloke/StableBeluga-7B-GGUF/blob/main/stablebeluga-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

- LM Studio
- LoLLMS Web UI
- Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/StableBeluga-7B-GGUF and below it, a specific filename to download, such as: stablebeluga-7b.Q4_K_M.gguf.

Then click Download.
### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install 'huggingface-hub>=0.17.1'
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/StableBeluga-7B-GGUF stablebeluga-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/StableBeluga-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/StableBeluga-7B-GGUF stablebeluga-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m stablebeluga-7b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model from Python using ctransformers

#### First install the package

```bash
# Base ctransformers with no GPU acceleration
pip install 'ctransformers>=0.2.24'
# Or with CUDA GPU acceleration
pip install 'ctransformers[cuda]>=0.2.24'
# Or with ROCm GPU acceleration
CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems
CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
```

#### Simple example code to load one of these GGUF models

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/StableBeluga-7B-GGUF", model_file="stablebeluga-7b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python or ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.
<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Stability AI's StableBeluga 7B

# Stable Beluga 7B

Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability AI's best language models for free

## Model Description

`Stable Beluga 7B` is a Llama2 7B model fine-tuned on an Orca-style dataset

## Usage

Start chatting with `Stable Beluga 7B` using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga-7B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("stabilityai/StableBeluga-7B", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
system_prompt = "### System:\nYou are StableBeluga, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"

message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Stable Beluga 7B should be used with this prompt format:
```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of Stable Beluga 7B
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: Stable Beluga 7B is an auto-regressive language model fine-tuned on Llama2 7B.
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: Fine-tuned checkpoints (`Stable Beluga 7B`) are licensed under the [STABLE BELUGA NON-COMMERCIAL COMMUNITY LICENSE AGREEMENT](https://huggingface.co/stabilityai/StableBeluga-7B/blob/main/LICENSE.txt)
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

### Training Dataset

`Stable Beluga 7B` is trained on our internal Orca-style dataset

### Training Procedure

Models are trained via supervised fine-tuning on the aforementioned datasets, in mixed precision (BF16), and optimized with AdamW. We outline the following hyperparameters:

| Dataset | Batch Size | Learning Rate |Learning Rate Decay| Warm-up | Weight Decay | Betas |
|-------------------|------------|---------------|-------------------|---------|--------------|-------------|
| Orca pt1 packed | 256 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |
| Orca pt2 unpacked | 512 | 3e-5 | Cosine to 3e-6 | 100 | 1e-6 | (0.9, 0.95) |

## Ethical Considerations and Limitations

Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.
## Citations

```bibtex
@misc{touvron2023llama,
      title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
      author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
      year={2023},
      eprint={2307.09288},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```bibtex
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

<!-- original-model-card end -->
jbochi/candle-coedit-quantized
jbochi
2023-11-15T21:57:50Z
619
5
transformers
[ "transformers", "safetensors", "gguf", "t5", "text2text-generation", "text-generation-inference", "candle", "en", "dataset:grammarly/coedit", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-10-04T12:15:09Z
--- license: cc-by-nc-4.0 datasets: - grammarly/coedit language: - en tags: - text-generation-inference - candle widget: - text: >- Fix the grammar: When I grow up, I start to understand what he said is quite right. example_title: Fluency - text: >- Make this text coherent: Their flight is weak. They run quickly through the tree canopy. example_title: Coherence - text: >- Rewrite to make this easier to understand: A storm surge is what forecasters consider a hurricane's most treacherous aspect. example_title: Simplification - text: >- Paraphrase this: Do you know where I was born? example_title: Paraphrase - text: >- Write this more formally: omg i love that song im listening to it right now example_title: Formalize - text: >- Write in a more neutral way: The authors' exposé on nutrition studies. example_title: Neutralize --- # Quantized candle weights for the CoEdIT model Quantized weights of [CoEdIT](https://github.com/vipulraheja/coedit) for inference with [candle](https://github.com/huggingface/candle/tree/main/candle-examples/examples/quantized-t5). ## Usage You can run the smaller models directly from the browser using this [space](https://huggingface.co/spaces/jbochi/Candle-CoEdIT-Wasm). Clone [candle](https://github.com/huggingface/candle), and run the `quantized-t5` example: ```shell $ cargo run --example quantized-t5 --release -- \ --model-id "jbochi/candle-coedit-quantized" \ --prompt "Make this text coherent: Their flight is weak. They run quickly through the tree canopy." \ --temperature 0 ... Although their flight is weak, they run quickly through the tree canopy. ``` By default, it will use CoEdIT-large with q6k quantization (770M params, 643 MB). To use CoEdIT-xl (3B params, 2.34 GB), or any other provided model, specify the weight-file and config-file: ```shell $ cargo run --example quantized-t5 --release -- \ --model-id "jbochi/candle-coedit-quantized" \ --weight-file "model-xl.gguf" \ --config-file "config-xl.json" \ --prompt "Rewrite to make this easier to understand: Note that a storm surge is what forecasters consider a hurricane's most treacherous aspect." \ --temperature 0 ... Note that a storm surge is what forecasters consider a hurricane's most dangerous part. ``` ## Models available These are all the available formats. 
Weight file is named `{model}.gguf` and the config file is `config-{base_model}.json`

| Model | Base model | Quantization | # Params | Size |
| ----- | ---------- | ------------ | ------ | ---- |
| - | [small](https://huggingface.co/jbochi/coedit-small) (unofficial) | None | 77M | 308 MB |
| model-small | small | 6k | 77M | 78.2 MB |
| model-small-q4k | small | 4k | 77M | 59.6 MB |
| model-small-q4_0 | small | 4_0 | 77M | 43.4 MB |
| - | [base](https://huggingface.co/jbochi/coedit-base) (unofficial) | None | 248M | 990 MB |
| model-base | base | 6k | 248M | 194 MB |
| model-base-q4k | base | 4k | 248M | 133 MB |
| model-base-q4_0 | base | 4_0 | 248M | 133 MB |
| - | [large](https://huggingface.co/grammarly/coedit-large) | None | 770M | 3.13 GB |
| model | large | 6k | 770M | 643 MB |
| model-q4k | large | 4k | 770M | 441 MB |
| model-q4_0 | large | 4_0 | 770M | 441 MB |
| - | [xl](https://huggingface.co/grammarly/coedit-xl) | None | 3B | 11.4 GB |
| model-xl | xl | 6k | 3B | 2.34 GB |
| model-xl-q4k | xl | 4k | 3B | 1.6 GB |
| model-xl-q4_0 | xl | 4_0 | 3B | 1.6 GB |
| - | [xxl](https://huggingface.co/grammarly/coedit-xxl) | None | 11B | 44.5 GB |
| model-xxl | xxl | 6k | 11B | 9.14 GB |
| model-xxl-q4k | xxl | 4k | 11B | 6.27 GB |
| model-xxl-q4_0 | xxl | 4_0 | 11B | 6.27 GB |

## Model generation

The weights were quantized using candle:

```shell
cargo run --example tensor-tools --release -- quantize \
  --quantization q6k \
  /path/to/coedit-<version>/model.safetensors \
  --out-file model<version>.gguf
```
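If you just want to pull one quantized file and its config from the Hub (for inspection, or for use outside candle), a minimal sketch with the `huggingface_hub` Python library, following the naming convention above (not part of the original candle instructions):

```python
from huggingface_hub import hf_hub_download

# Download one weight file and its matching config, per the
# `{model}.gguf` / `config-{base_model}.json` naming convention above
weights_path = hf_hub_download("jbochi/candle-coedit-quantized", "model-xl.gguf")
config_path = hf_hub_download("jbochi/candle-coedit-quantized", "config-xl.json")
print(weights_path, config_path)
```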
TheBloke/StellarBright-GGUF
TheBloke
2023-10-15T23:48:50Z
619
7
transformers
[ "transformers", "gguf", "llama", "stellar-bright", "llama-2", "llama-2-chat", "70b", "text-generation", "en", "base_model:sequelbox/StellarBright", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
2023-10-15T02:32:16Z
---
base_model: sequelbox/StellarBright
inference: false
language:
- en
license: llama2
model_creator: scott
model_name: StellarBright
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}

  '
quantized_by: TheBloke
tags:
- stellar-bright
- llama
- llama-2
- llama-2-chat
- 70b
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# StellarBright - GGUF
- Model creator: [scott](https://huggingface.co/sequelbox)
- Original model: [StellarBright](https://huggingface.co/sequelbox/StellarBright)

<!-- description start -->
## Description

This repo contains GGUF format model files for [scott's StellarBright](https://huggingface.co/sequelbox/StellarBright).

<!-- description end -->

<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/StellarBright-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StellarBright-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StellarBright-GGUF)
* [scott's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/sequelbox/StellarBright)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: None

```
{prompt}
```

<!-- prompt-template end -->

<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [stellarbright.Q2_K.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [stellarbright.Q3_K_S.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [stellarbright.Q3_K_M.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [stellarbright.Q3_K_L.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [stellarbright.Q4_0.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [stellarbright.Q4_K_S.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [stellarbright.Q4_K_M.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [stellarbright.Q5_0.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [stellarbright.Q5_K_S.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [stellarbright.Q5_K_M.gguf](https://huggingface.co/TheBloke/StellarBright-GGUF/blob/main/stellarbright.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | stellarbright.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | stellarbright.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `stellarbright.Q6_K.gguf-split-a` * `stellarbright.Q6_K.gguf-split-b` ### q8_0 Please download: * `stellarbright.Q8_0.gguf-split-a` * `stellarbright.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat stellarbright.Q6_K.gguf-split-* > stellarbright.Q6_K.gguf && rm stellarbright.Q6_K.gguf-split-* cat stellarbright.Q8_0.gguf-split-* > stellarbright.Q8_0.gguf && rm stellarbright.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B stellarbright.Q6_K.gguf-split-a + stellarbright.Q6_K.gguf-split-b stellarbright.Q6_K.gguf del stellarbright.Q6_K.gguf-split-a stellarbright.Q6_K.gguf-split-b COPY /B stellarbright.Q8_0.gguf-split-a + stellarbright.Q8_0.gguf-split-b stellarbright.Q8_0.gguf del stellarbright.Q8_0.gguf-split-a stellarbright.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/StellarBright-GGUF and below it, a specific filename to download, such as: stellarbright.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/StellarBright-GGUF stellarbright.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/StellarBright-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/StellarBright-GGUF stellarbright.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m stellarbright.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. 
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

### How to load this model in Python code, using ctransformers

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```

#### Simple ctransformers example code

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/StellarBright-GGUF", model_file="stellarbright.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: scott's StellarBright Stellar Bright is a general capability upgrade to Llama 2, using open source data to improve overall knowledge, extended communication, and technical skill. This model is primarily recommended as a superior-to-Llama-2 baseline for additional finetuning, **not** for direct deployment to production as a chat model. The user accepts full responsibility for all outputs. <!-- original-model-card end -->
RichardErkhov/dreamgen_-_opus-v1.2-llama-3-8b-awq
RichardErkhov
2024-05-02T20:58:49Z
619
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
2024-05-02T19:42:51Z
Entry not found
OwenArli/Awanllm-Llama-3-8B-Instruct-DPO-v0.2
OwenArli
2024-05-12T10:03:48Z
619
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-10T10:16:45Z
---
license: llama3
---

Based on Meta-Llama-3-8B-Instruct, and governed by the Meta Llama 3 license agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

We realized a tokenization mistake in the previous DPO model, so this is a new version testing out DPO training on the following dataset:

- https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k

The Open LLM results are really bad; something about this dataset seems to disagree with Llama 3. We are happy for anyone to try it out and give feedback, but we won't be putting this model up on our LLM API at https://awanllm.com.

Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Quants:

FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.2

GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.2-GGUF
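The instruct format above is the stock Llama 3 chat template, so (assuming this repo ships the base tokenizer's chat template, as Llama 3 finetunes normally do) it can be rendered with `transformers` rather than assembled by hand; a minimal sketch:

```python
from transformers import AutoTokenizer

# Assumes the repo's tokenizer carries the standard Llama 3 chat template
tokenizer = AutoTokenizer.from_pretrained("OwenArli/Awanllm-Llama-3-8B-Instruct-DPO-v0.2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write me a haiku about llamas."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # rendered in the instruct format shown above
```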
EPFL-VILAB/4M-21_XL
EPFL-VILAB
2024-06-14T08:19:55Z
619
20
ml-4m
[ "ml-4m", "safetensors", "arxiv:2312.06647", "arxiv:2406.09406", "license:other", "region:us" ]
null
2024-06-12T08:43:06Z
--- license: other license_name: sample-code-license license_link: LICENSE library_name: ml-4m --- # 4M: Massively Multimodal Masked Modeling *A framework for training any-to-any multimodal foundation models. <br>Scalable. Open-sourced. Across tens of modalities and tasks.* [`Website`](https://4m.epfl.ch) | [`GitHub`](https://github.com/apple/ml-4m) | [`BibTeX`](#citation) Official implementation and pre-trained models for : [**4M: Massively Multimodal Masked Modeling**](https://arxiv.org/abs/2312.06647), NeurIPS 2023 (Spotlight) <br> *[David Mizrahi](https://dmizrahi.com/)\*, [Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/), [Teresa Yeo](https://aserety.github.io/), [Mingfei Gao](https://fly6464.github.io/), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)* [**4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities**](https://arxiv.org/abs/2406.09406), arXiv 2024 <br> *[Roman Bachmann](https://roman-bachmann.github.io/)\*, [Oğuzhan Fatih Kar](https://ofkar.github.io/)\*, [David Mizrahi](https://dmizrahi.com/)\*, [Ali Garjani](https://garjania.github.io/), [Mingfei Gao](https://fly6464.github.io/), [David Griffiths](https://www.dgriffiths.uk/), [Jiaming Hu](https://scholar.google.com/citations?user=vm3imKsAAAAJ&hl=en), [Afshin Dehghan](https://www.afshindehghan.com/), [Amir Zamir](https://vilab.epfl.ch/zamir/)* 4M is a framework for training "any-to-any" foundation models, using tokenization and masking to scale to many diverse modalities. Models trained using 4M can perform a wide range of vision tasks, transfer well to unseen tasks and modalities, and are flexible and steerable multimodal generative models. We are releasing code and models for "4M: Massively Multimodal Masked Modeling" (here denoted 4M-7), as well as "4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities" (here denoted 4M-21). ## Installation For install instructions, please see https://github.com/apple/ml-4m. ## Usage This model can be loaded from Hugging Face Hub as follows: ```python from fourm.models.fm import FM fm = FM.from_pretrained('EPFL-VILAB/4M-21_XL') ``` Please see [README_GENERATION.md](https://github.com/apple/ml-4m/blob/main/README_GENERATION.md) for more detailed instructions and https://github.com/apple/ml-4m for other 4M model and tokenizer checkpoints. ## Citation If you find this repository helpful, please consider citing our work: ``` @inproceedings{4m, title={{4M}: Massively Multimodal Masked Modeling}, author={David Mizrahi and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Teresa Yeo and Mingfei Gao and Afshin Dehghan and Amir Zamir}, booktitle={Thirty-seventh Conference on Neural Information Processing Systems}, year={2023}, } @article{4m21, title={{4M-21}: An Any-to-Any Vision Model for Tens of Tasks and Modalities}, author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir}, journal={arXiv 2024}, year={2024}, } ``` ## License The model weights in this repository are released under the Sample Code license as found in the [LICENSE](LICENSE) file.
CMLL/ZhongJing-2-1_8b-GGUF
CMLL
2024-06-19T06:48:21Z
619
3
peft
[ "peft", "gguf", "llama-factory", "lora", "generated_from_trainer", "base_model:Qwen/Qwen1.5-1.8B-Chat", "doi:10.57967/hf/2582", "license:other", "region:us" ]
null
2024-06-17T11:30:03Z
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: Qwen/Qwen1.5-1.8B-Chat
model-index:
- name: train_2024
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

## Model description

A Traditional Chinese Medicine large language model, inspired by the wisdom of the eminent representative of ancient Chinese medical scholars, Zhang Zhongjing. This model aims to illuminate the profound knowledge of Traditional Chinese Medicine, bridging the gap between ancient wisdom and modern technology, and providing a reliable and professional tool for the Traditional Chinese Medicine field. However, all generated results are for reference only; diagnosis and treatment results and suggestions should be provided by experienced professionals.

It is recommended to use [colab](https://colab.research.google.com/drive/1DCPomUsfTxqkqxKpK-AIGvBSPbkOm7R3#scrollTo=jsn4szdjdtmF) for free GPU inference.

## Data Source

The data for this model is sourced from the non-profit organization Future Medicine Philosophy (Fulphil). Nearly 50 Traditional Chinese Medicine and integrated Chinese-Western Medicine clinical physicians from Beijing University of Chinese Medicine, Shanghai University of Traditional Chinese Medicine, and Shandong University of Traditional Chinese Medicine contributed data and annotations across multiple disciplines.

## Funding Acknowledgement

This project is generously funded by family sponsorship. Special thanks to my father, Piyao Kang, and to Xiuhua Li and my girlfriend, Sunsi Wu, for their strong support and assistance. The funds have been utilized in data cleaning, computational power rental, and consultation with domain experts. Thank you all!

## Intended uses & limitations

Academic and Personal Use. You may use the Work for academic research and personal use without restriction.

Commercial Use. You may not use the Work for commercial purposes without prior written authorization from the Contributor(s). Any commercial use authorized by the Contributor(s) must not involve charging fees above the model inference cost without express written permission from the Contributor(s).

Medical Application. The Work is provided for academic research purposes only and not for commercial use. It must not be used in clinical practice or in any scenario with potential medical intent without permission. The capabilities of this Traditional Chinese Medicine (TCM) Language Model, including syndrome classification and prescription generation, are experimental and not intended for clinical diagnosis or treatment. Outputs are for internal reference and testing only and should not be considered as medical advice. All medical diagnoses and treatments should be performed by experienced physicians through a standardized clinical process.

Distribution. Redistribution of the Work or derivative works must comply with all the terms and conditions of this License.

## Training data

#### 1.1 Multi-task Therapeutic Behavior Decomposition Instruction Construction Strategy

Human memory and understanding require the construction of various scenarios and stories to implicitly encode knowledge information. The clarity of memory depends on the duration and richness of the learning process.
Interleaved learning, spaced practice, and diversified learning can enhance the consolidation of knowledge, thereby forming a deep understanding of domain knowledge.

Learning from this picture of human memory, our approach starts from professional tables and leverages the language representation capabilities of large language models with strictly defined prompt templates, so that the model generates 15 scenario types from tabular Chinese medicine gynecology prescription data: patient therapeutic story, diagnostic analysis, diagnosis treatment expected result, formula function, interactive story, narrative medicine, tongue & pulse, therapeutic template making, critical thinking, follow up, prescription, herb dosage, case study, real-world problem, and disease mechanism. This promotes the model's reasoning ability over prescription data and its diagnostic thinking logic.

```
{
  "instruction": "我对三元汤的全过程很好奇,能否从简介、病历、症状、诊断和治疗,以及结果讨论等方面给我详细介绍?",
  "input": "",
  "output": "三元汤是一种古老的中药处方,最早出现在《保命集》卷下。它由多种中药组成,主要用于治疗产后日久虚劳,脉浮疾等症状。患者为产后日久虚劳,脉浮疾。脉浮疾。根据李女士的病史和症状,我决定使用三元汤来治疗她的病症。我按照处方中的药物组成和用法用量,将柴胡、黄芩、人参、半夏、甘草、川芎、芍药、熟地黄、当归等中药煎煮后,让李女士每天服用两次。在治疗过程中,我还注意了李女士的饮食和休息,让她多吃一些易于消化的食物,保证充足的睡眠和休息。经过一个月的治疗,李女士的症状得到了明显的改善。她的精神状态更加稳定,食欲也有所增加,头晕乏力的症状也减轻了很多。我认为,三元汤的治疗效果主要得益于其中的多种中药的协同作用,能够调节李女士的身体机能,增强体质,提高免疫力。同时,我也注意到,李女士的饮食和休息也对治疗效果有一定的影响。因此,在使用中药治疗疾病时,我们也应该注意患者的全面治疗,包括饮食、休息等方面。"
}
```

#### 1.2 Regular TCM Instruction Data Construction Strategy

In addition, we have also added instructions based on the content of Chinese medicine ancient books, noun explanations, symptom synonyms, antonyms, syndromes, symptoms, treatment methods, etc. To form a control experiment, we use only one instruction template to represent this part of the data, which comprises 80,000 instructions, significantly more than the number constructed by the strategy above. An example of this instruction format:

```
{
  "instruction": "请回答以下有关于中医疾病名词解释的相关问题:",
  "input": "(肺风)粉刺属于哪个分类?",
  "output": "因肺风、胃热或肝瘀所致。以面及背部见黑头或白头粉刺、丘疹、脓疱、结节、囊肿及疤痕为主要表现的皮肤疾病。"
}
```

## Training Details & Inference Capability Statement

Our model, a meticulously fine-tuned version of Qwen1.5-1.8B-Chat, has been optimized for high-speed inference on a Tesla T4 graphics processing unit (GPU). This enhancement was achieved through extensive training on our exclusive medical datasets, ensuring the model's proficiency in understanding and generating responses relevant to the medical field, particularly in the domain of Traditional Chinese Medicine (TCM).

## Disclaimer

This research is for academic research use only; commercial use is not allowed without permission, and it is not to be used in medical scenarios or scenarios with potential medical intent for clinical practice. This large language model for Traditional Chinese Medicine is still in the laboratory testing stage. The emerging syndrome classification and prescription generation capabilities at this stage are still rudimentary, and it does not yet have highly reliable clinical diagnostic and therapeutic capability for gynecology and other clinical specialties. The output results are for internal reference testing only. Real medical diagnosis and decision-making still need to be issued by experienced physicians through a strictly regulated diagnostic and therapeutic process.
## Collaboration

Data processing and annotation are among the most important steps in training the model. We sincerely welcome Traditional Chinese Medicine practitioners with strong TCM thinking and an innovative spirit to join us; corresponding data contributions will be credited. We look forward to the day when we can achieve a reliable General Artificial Intelligence for Traditional Chinese Medicine, allowing the ancient Chinese medicine to blend with modern technology and shine anew. This is also the ultimate mission of this project. If interested, please send an email to kangliuyong@126.com.

## Team Introduction

Led by the non-profit organization FulPhil-医哲未来 (Future Medicine Philosophy), the CMLM (Chinese Medicine Language Models) initiative on HuggingFace is dedicated to advancing healthcare AI by integrating traditional Chinese medicine with state-of-the-art machine learning. Our mission includes curating valuable medical datasets, developing AI models for medical assistance, and ensuring ethical AI use in healthcare, fostering collaboration between global experts in Chinese and Western medicine and AI.

## Citation

If you find this work useful in your research, please cite our repository:

```
@misc{CMLM-ZhongJing,
  author = {Liu Lin Ju Shi},
  title = {CMLM-ZhongJing-2-1_8b: A State-of-the-Art Edge Computing Language Model for Traditional Chinese Medicine},
  year = {2023},
  publisher = {FulPhil-医哲未来 (Future Medicine Philosophy).},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/pariskang/CMLM-ZhongJing}}
}
```
yichaodu/DiffusionDPO-bias-gemini-1.5
yichaodu
2024-06-20T12:03:00Z
619
1
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "region:us" ]
text-to-image
2024-06-19T08:43:53Z
---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Aligned Diffusion Model via DPO

Diffusion model aligned with the following reward models and the DPO algorithm:

```
closed-source vlm: claude3-opus gemini-1.5 gpt-4o gpt-4v
open-source vlm: internvl-1.5
score model: hps-2.1
```

## How to Use

You can load the model and perform inference as follows:

```python
import torch
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

pretrained_model_name = "runwayml/stable-diffusion-v1-5"

dpo_unet = UNet2DConditionModel.from_pretrained(
    "path/to/checkpoint",
    subfolder='unet',
    torch_dtype=torch.float16
).to('cuda')

pipeline = StableDiffusionPipeline.from_pretrained(pretrained_model_name, torch_dtype=torch.float16)
pipeline = pipeline.to('cuda')
pipeline.safety_checker = None
pipeline.unet = dpo_unet

generator = torch.Generator(device='cuda')
generator = generator.manual_seed(1)

gs = 7.5  # guidance scale (example value; left undefined in the original snippet)

prompt = "a pink flower"

image = pipeline(prompt=prompt, generator=generator, guidance_scale=gs).images[0]
```

## Citation

```
@misc{mjbench2024mjbench,
  title={MJ-BENCH: Is Your Multimodal Reward Model Really a Good Judge?},
  author={Chen*, Zhaorun and Du*, Yichao and Wen, Zichen and Zhou, Yiyang and Cui, Chenhang and Weng, Zhenzhen and Tu, Haoqin and Wang, Chaoqi and Tong, Zhengwei and HUANG, Leria and Chen, Canyu and Ye Qinghao and Zhu, Zhihong and Zhang, Yuqing and Zhou, Jiawei and Zhao, Zhuokai and Rafailov, Rafael and Finn, Chelsea and Yao, Huaxiu},
  year={2024}
}
```
Helsinki-NLP/opus-mt-en-swc
Helsinki-NLP
2023-08-16T11:31:18Z
618
0
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "en", "swc", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
--- tags: - translation license: apache-2.0 --- ### opus-mt-en-swc * source languages: en * target languages: swc * OPUS readme: [en-swc](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-swc/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-swc/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.en.swc | 40.1 | 0.613 |
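No usage snippet is included above; a minimal sketch with the standard `transformers` Marian classes used for OPUS-MT checkpoints (the example sentence is illustrative):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-swc"
tokenizer = MarianTokenizer.from_pretrained(model_name)  # applies the SentencePiece pre-processing
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```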
deep-learning-analytics/GrammarCorrector
deep-learning-analytics
2021-12-23T02:51:34Z
618
13
transformers
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
## Model description
T5 model trained for Grammar Correction. This model corrects grammatical mistakes in input sentences.

### Dataset Description
The T5-base model has been trained on the C4_200M dataset.

### Model in Action 🚀
```
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = 'deep-learning-analytics/GrammarCorrector'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name).to(torch_device)

num_beams = 10  # beam search width (a value assumed here; the original snippet left it undefined)

def correct_grammar(input_text, num_return_sequences):
    batch = tokenizer([input_text], truncation=True, padding='max_length', max_length=64, return_tensors="pt").to(torch_device)
    translated = model.generate(**batch, max_length=64, num_beams=num_beams, num_return_sequences=num_return_sequences, temperature=1.5)
    tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
    return tgt_text
```

### Example Usage
```
text = 'He are moving here.'
print(correct_grammar(text, num_return_sequences=2))
['He is moving here.', 'He is moving here now.']
```

Another example
```
text = 'Cat drinked milk'
print(correct_grammar(text, num_return_sequences=2))
['Cat drank milk.', 'Cat drink milk.']
```

Model Developed by [Priya-Dwivedi](https://www.linkedin.com/in/priyanka-dwivedi-6864362)
facebook/DiT-XL-2-512
facebook
2023-03-15T14:38:04Z
618
11
diffusers
[ "diffusers", "license:cc-by-nc-4.0", "diffusers:DiTPipeline", "region:us" ]
null
2023-01-17T20:25:53Z
--- license: cc-by-nc-4.0 --- # Scalable Diffusion Models with Transformers (DiT) ## Abstract We train latent diffusion models, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops---through increased transformer depth/width or increased number of input tokens---consistently have lower FID. In addition to good scalability properties, our DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512×512 and 256×256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.
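The card ships no usage snippet; since the repo is tagged for diffusers' `DiTPipeline`, a minimal class-conditional sampling sketch (the label text, step count, and seed are illustrative) could look like this:

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-512", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# DiT is class-conditional on ImageNet; map label text to class ids
class_ids = pipe.get_label_ids(["white shark"])

generator = torch.manual_seed(33)
image = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator).images[0]
image.save("white_shark.png")
```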
timm/seresnet50.ra2_in1k
timm
2024-02-10T23:41:41Z
618
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:2110.00476", "arxiv:1512.03385", "arxiv:1709.01507", "license:apache-2.0", "region:us" ]
image-classification
2023-04-05T19:31:32Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
---
# Model card for seresnet50.ra2_in1k

An SE-ResNet-B image classification model with Squeeze-and-Excitation channel attention.
This model features:
 * ReLU activations
 * single layer 7x7 convolution with pooling
 * 1x1 convolution shortcut downsample
 * Squeeze-and-Excitation channel attention

Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
 * RandAugment `RA2` recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
 * RMSProp (TF 1.0 behaviour) optimizer, EMA weight averaging
 * Step (exponential decay w/ staircase) LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 28.1
  - GMACs: 4.1
  - Activations (M): 11.1
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
  - Squeeze-and-Excitation Networks: https://arxiv.org/abs/1709.01507
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('seresnet50.ra2_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'seresnet50.ra2_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'seresnet50.ra2_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 | |[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 | |[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 | |[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 | |[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 | |[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 | |[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 | |[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 | |[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 | |[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 | |[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 | |[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 | |[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 | |[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 | |[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 | |[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 | 
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 | |[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 | |[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 | |[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 | |[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 | |[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 | |[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 | |[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 | |[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 | |[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 | |[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 | |[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 | |[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 | |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 | |[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 | 
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 | |[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 | |[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 | |[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 | |[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 | |[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 | |[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 | |[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 | |[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 | |[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 | |[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 | |[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 | |[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 | |[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 | |[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 | |[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 | |[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 | |[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 | |[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 
|25.0 |4.3 |14.4 |2941 | |[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 | |[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 | |[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 | |[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 | |[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 | |[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 | |[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 | |[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 | |[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 | |[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 | |[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 | |[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 | |[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 | |[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 | |[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 | |[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 | |[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 | |[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 | |[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 | |[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 | |[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 | |[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 
| |[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 | |[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 | |[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 | |[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 | |[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 | |[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 | |[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 | |[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 | |[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 | |[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 | |[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 | |[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 | |[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 | |[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 | |[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 | |[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 | |[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 | |[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 | |[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 | |[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 | |[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 | |[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 
|78.28|94.1 |10.4 |3.1 |13.3 |3062 | |[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 | |[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 | |[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 | |[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 | |[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 | |[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 | |[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 | |[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 | |[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 | |[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 | |[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 | |[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 | |[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 | |[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 | |[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 | |[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 | |[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 | |[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 | |[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 | |[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 | |[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 | |[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 | |[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 | 
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 | |[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 | |[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 | |[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 | |[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 | |[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 | |[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 | |[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 | |[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 | |[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 | |[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 | |[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 | |[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 | |[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 | |[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 | |[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 | |[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 | ## Citation ```bibtex @inproceedings{wightman2021resnet, title={ResNet strikes back: An improved training procedure in timm}, author={Wightman, Ross and Touvron, Hugo and Jegou, Herve}, booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{He2015, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {arXiv preprint arXiv:1512.03385}, year = {2015} } ``` ```bibtex @inproceedings{hu2018senet, title={Squeeze-and-Excitation Networks}, author={Jie Hu and Li Shen and Gang Sun}, journal={IEEE Conference on Computer 
Vision and Pattern Recognition}, year={2018} } ```
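For reference, any checkpoint in the comparison table above can be loaded the same way as the usage snippets earlier in this card; a minimal sketch (model name taken from the table, image from timm's usual documentation sample):

```python
# Minimal sketch: load one model from the table and classify a sample image.
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
))

model = timm.create_model("resnet50d.ra2_in1k", pretrained=True).eval()

# Resolve the model-specific eval transforms from its pretrained config
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

logits = model(transforms(img).unsqueeze(0))  # (1, 1000) ImageNet-1k logits
top5 = logits.softmax(dim=-1).topk(5)
print(top5.indices, top5.values)
```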
algiraldohe/lm-ner-linkedin-skills-recognition
algiraldohe
2023-07-07T22:51:06Z
618
17
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-07-07T21:42:41Z
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: lm-ner-linkedin-skills-recognition results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lm-ner-linkedin-skills-recognition This model is a fine-tuned version of [algiraldohe/distilbert-base-uncased-linkedin-domain-adaptation](https://huggingface.co/algiraldohe/distilbert-base-uncased-linkedin-domain-adaptation) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0307 - Precision: 0.9119 - Recall: 0.9312 - F1: 0.9214 - Accuracy: 0.9912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1301 | 1.0 | 729 | 0.0468 | 0.8786 | 0.8715 | 0.8750 | 0.9863 | | 0.0432 | 2.0 | 1458 | 0.0345 | 0.8994 | 0.9219 | 0.9105 | 0.9900 | | 0.0332 | 3.0 | 2187 | 0.0307 | 0.9119 | 0.9312 | 0.9214 | 0.9912 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
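Since the usage sections above are still empty, here is a minimal inference sketch, not taken from the original card; the `aggregation_strategy` choice and the entity label names in the output are assumptions that depend on the model's config.

```python
# Hypothetical usage sketch: run the skill-recognition NER pipeline on a
# job-description snippet. Label names depend on the model config and are
# not documented in the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="algiraldohe/lm-ner-linkedin-skills-recognition",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

text = "Looking for a data engineer with experience in Python, Spark and AWS."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```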
NikolayKozloff/llama-13b-hf-2048-fpf-GGUF
NikolayKozloff
2023-08-23T23:23:34Z
618
0
null
[ "gguf", "region:us" ]
null
2023-08-23T22:53:31Z
Entry not found
hieudinhpro/BERT_Sentiment_Vietnamese
hieudinhpro
2023-10-05T00:49:49Z
618
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "vi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-10-04T19:24:36Z
--- license: mit language: - vi metrics: - accuracy library_name: transformers pipeline_tag: text-classification --- ## Model Details Sentiment analysis ### Model Description - **Developed by:** [Hieudinhpro] - **Model type:** [fine-tuned RoBERTa model] - **Language(s) (NLP):** [Vietnamese] - **License:** [MIT] - **Finetuned from model [optional]:** [phobert-base-vietnamese-sentiment] # Load model directly ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("hieudinhpro/BERT_Sentiment_Vietnamese") model = AutoModelForSequenceClassification.from_pretrained("hieudinhpro/BERT_Sentiment_Vietnamese") ``` # Use a pipeline as a high-level helper ```python from transformers import pipeline pipe = pipeline("text-classification", model="hieudinhpro/BERT_Sentiment_Vietnamese") ```
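Building on the snippets above, a short scoring sketch; the mapping of class indices to sentiment labels relies on the model config's `id2label`, which is not documented in this card, so treat the printed label as an assumption.

```python
# Minimal sketch: score one Vietnamese sentence with the model loaded above.
import torch

inputs = tokenizer("Sản phẩm này rất tốt!", return_tensors="pt")  # "This product is very good!"
with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
pred = probs.argmax().item()
print(model.config.id2label[pred], probs[pred].item())  # label names come from the config
```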
jaimevera1107/moderation-topics
jaimevera1107
2023-11-10T04:56:42Z
618
1
bertopic
[ "bertopic", "text-classification", "region:us" ]
text-classification
2023-11-10T04:56:40Z
--- tags: - bertopic library_name: bertopic pipeline_tag: text-classification --- # moderation-topics This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("jaimevera1107/moderation-topics") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 94 * Number of training documents: 1403 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | 0 | suicide - nssi - tendency - recent - self | 40 | 0_suicide_nssi_tendency_recent | | 1 | exposed - minimal - sexualized - possessing - performs | 33 | 1_exposed_minimal_sexualized_possessing | | 2 | drug - reference - purposes - substances - substance | 32 | 2_drug_reference_purposes_substances | | 3 | regulated - consumption - tobacco - relate - associate | 31 | 3_regulated_consumption_tobacco_relate | | 4 | male - region - pubic - exposure - nipple | 31 | 4_male_region_pubic_exposure | | 5 | testing - wildlife - endangered - poaching - hunting | 31 | 5_testing_wildlife_endangered_poaching | | 6 | nudity - fine - implied - documentaries - indigenous | 30 | 6_nudity_fine_implied_documentaries | | 7 | text - language - pickup - textual - texts | 28 | 7_text_language_pickup_textual | | 8 | fighting - incitement - violent - reactive - event | 27 | 8_fighting_incitement_violent_reactive | | 9 | hate - ideology - hateful - based - disability | 27 | 9_hate_ideology_hateful_based | | 10 | sensual - pleasure - demonstration - objectification - dialogue | 26 | 10_sensual_pleasure_demonstration_objectification | | 11 | detailing - stimulation - fetishism - allusion - adults | 26 | 11_detailing_stimulation_fetishism_allusion | | 12 | pornography - vulgarity - website - tapes - softcore | 26 | 12_pornography_vulgarity_website_tapes | | 13 | lead - highly - is - imitable - professionals | 25 | 13_lead_highly_is_imitable | | 14 | brand - code - csam - qr - multiple | 25 | 14_brand_code_csam_qr | | 15 | expressions - dance - performing - performances - express | 24 | 15_expressions_dance_performing_performances | | 16 | intellectual - copyright - copyrighted - stolen - cover | 24 | 16_intellectual_copyright_copyrighted_stolen | | 17 | slur - slurs - designation - remarks - status | 24 | 17_slur_slurs_designation_remarks | | 18 | undressing - striptease - process - panties - voyeuristic | 23 | 18_undressing_striptease_process_panties | | 19 | workplace - peeping - upskirting - tom - coercion | 23 | 19_workplace_peeping_upskirting_tom | | 20 | hostility - degradation - statement - discriminatory - characteristics | 23 | 20_hostility_degradation_statement_discriminatory | | 21 | low - quality - organic - host - grow | 22 | 21_low_quality_organic_host | | 22 | terrorist - terrorism - recruitment - organizations - international | 21 | 22_terrorist_terrorism_recruitment_organizations | | 23 | spam - jump - makeup - scary - scare | 20 | 23_spam_jump_makeup_scary | | 24 | firearms - ammunition - explosive - explosives - weapons | 20 | 24_firearms_ammunition_explosive_explosives | | 25 | culturally - appropriate - wear - protected - not | 19 | 25_culturally_appropriate_wear_protected | | 26 | 
disturbing - cannibalism - disgusting - coverage - anatomy | 18 | 26_disturbing_cannibalism_disgusting_coverage | | 27 | homicide - mutilated - death - accident - torture | 18 | 27_homicide_mutilated_death_accident | | 28 | privacy - invasion - surveillance - espionage - confidential | 18 | 28_privacy_invasion_surveillance_espionage | | 29 | age - requirement - signals - identifiers - admission | 18 | 29_age_requirement_signals_identifiers | | 30 | framing - gaze - angles - piercings - camera | 17 | 30_framing_gaze_angles_piercings | | 31 | stalking - doxing - lists - encourage - addresses | 17 | 31_stalking_doxing_lists_encourage | | 32 | damage - destruction - property - arson - vandalism | 17 | 32_damage_destruction_property_arson | | 33 | eating - disorders - disorder - eat - loss | 16 | 33_eating_disorders_disorder_eat | | 34 | bullying - statements - cyberbullying - vulnerable - users | 16 | 34_bullying_statements_cyberbullying_vulnerable | | 35 | scams - frauds - scamming - schemes - fraudulent | 16 | 35_scams_frauds_scamming_schemes | | 36 | criminal - crime - criminals - gang - burglary | 15 | 36_criminal_crime_criminals_gang | | 37 | identifiable - data - personally - reveal - others | 15 | 37_identifiable_data_personally_reveal | | 38 | work - sex - prostitution - workers - escort | 15 | 38_work_sex_prostitution_workers | | 39 | conspiracy - theories - disinformation - baseless - current | 14 | 39_conspiracy_theories_disinformation_baseless | | 40 | consensual - recording - blackmail - intention - displaying | 14 | 40_consensual_recording_blackmail_intention | | 41 | child - featuring - pedophilic - defense - intimate | 14 | 41_child_featuring_pedophilic_defense | | 42 | polarization - opposing - social - incite - deepen | 14 | 42_polarization_opposing_social_incite | | 43 | pedophilia - grooming - normalization - predators - normalizing | 14 | 43_pedophilia_grooming_normalization_predators | | 44 | platforms - direction - ads - third - party | 14 | 44_platforms_direction_ads_third | | 45 | products - items - enhancement - grafitication - demonstrations | 13 | 45_products_items_enhancement_grafitication | | 46 | possession - consuming - drinking - tobacco - smoking | 13 | 46_possession_consuming_drinking_tobacco | | 47 | credible - threats - menacing - aggressive - plans | 12 | 47_credible_threats_menacing_aggressive | | 48 | hacking - malware - phishing - ransomware - hacks | 12 | 48_hacking_malware_phishing_ransomware | | 49 | proxy - lgbtq - bully - harassment - trolling | 12 | 49_proxy_lgbtq_bully_harassment | | 50 | going - live - 13 - 18 - u18 | 12 | 50_going_live_13_18 | | 51 | unintentionally - genitalia - animals - pornographic - bestiality | 12 | 51_unintentionally_genitalia_animals_pornographic | | 52 | artificial - traffic - way - methods - generate | 12 | 52_artificial_traffic_way_methods | | 53 | slaughter - mutilation - humans - dead - animal | 12 | 53_slaughter_mutilation_humans_dead | | 54 | goods - gangs - organized - counterfeit - illicit | 11 | 54_goods_gangs_organized_counterfeit | | 55 | gambling - betting - cheating - game - devices | 11 | 55_gambling_betting_cheating_game | | 56 | trafficking - forced - coerced - traded - function | 11 | 56_trafficking_forced_coerced_traded | | 57 | unsolicited - messages - favors - requests - advances | 11 | 57_unsolicited_messages_favors_requests | | 58 | blood - gore - shock - bloodshed - value | 11 | 58_blood_gore_shock_bloodshed | | 59 | victim - abduction - vehicle - motor - glorification | 11 | 
59_victim_abduction_vehicle_motor | | 60 | inappropriate - kiss - sexualizes - objectifies - towards | 10 | 60_inappropriate_kiss_sexualizes_objectifies | | 61 | toddlers - infants - unintentional - touch - abdomen | 10 | 61_toddlers_infants_unintentional_touch | | 62 | traditional - traditions - sacred - cultural - misappropriation | 10 | 62_traditional_traditions_sacred_cultural | | 63 | nuclear - weapon - peaceful - advocating - energy | 9 | 63_nuclear_weapon_peaceful_advocating | | 64 | exploiting - child - marriage - exploitation - labor | 9 | 64_exploiting_child_marriage_exploitation | | 65 | impersonation - famous - figure - slandering - profiles | 9 | 65_impersonation_famous_figure_slandering | | 66 | defamation - someones - defamatory - allegations - businesses | 9 | 66_defamation_someones_defamatory_allegations | | 67 | recipes - creating - may - tools - instructions | 9 | 67_recipes_creating_may_tools | | 68 | election - interference - campaigns - misinformation - political | 9 | 68_election_interference_campaigns_misinformation | | 69 | claims - expertise - apocalypse - authority - media | 9 | 69_claims_expertise_apocalypse_authority | | 70 | featuring - nude - partial - implied - depictions | 8 | 70_featuring_nude_partial_implied | | 71 | operations - police - military - enforcement - law | 8 | 71_operations_police_military_enforcement | | 72 | tax - laundering - crimes - money - ponzi | 8 | 72_tax_laundering_crimes_money | | 73 | cosmetic - surgery - procedures - diy - unlicensed | 8 | 73_cosmetic_surgery_procedures_diy | | 74 | subject - optical - innuendos - illusion - suggestive | 8 | 74_subject_optical_innuendos_illusion | | 75 | bodies - fantasy - lifeless - accident - fictional | 8 | 75_bodies_fantasy_lifeless_accident | | 76 | controversial - constructive - politics - issues - discussion | 7 | 76_controversial_constructive_politics_issues | | 77 | kissing - lip - only - greeting - as | 7 | 77_kissing_lip_only_greeting | | 78 | pirated - plagiarism - incites - glorifies - first | 7 | 78_pirated_plagiarism_incites_glorifies | | 79 | mental - conditions - health - mocks - stigmatization | 7 | 79_mental_conditions_health_mocks | | 80 | daredevil - reckless - precautions - risking - caution | 7 | 80_daredevil_reckless_precautions_risking | | 81 | pranks - intentions - cybersecurity - harmful - targeted | 7 | 81_pranks_intentions_cybersecurity_harmful | | 82 | dark - web - underground - marketplaces - glorifies | 6 | 82_dark_web_underground_marketplaces | | 83 | vax - anti - medical - false - misinformation | 6 | 83_vax_anti_medical_false | | 84 | sports - danger - adventures - stunts - professional | 6 | 84_sports_danger_adventures_stunts | | 85 | environmental - pollution - experiments - ecosystems - natural | 6 | 85_environmental_pollution_experiments_ecosystems | | 86 | incest - incestuous - taboo - themes - discussion | 5 | 86_incest_incestuous_taboo_themes | | 87 | neglect - child - endangerment - abuse - physical | 5 | 87_neglect_child_endangerment_abuse | | 88 | radicalization - extremist - extremism - views - propaganda | 5 | 88_radicalization_extremist_extremism_views | | 89 | waste - bodily - excretion - unsanitary - images | 5 | 89_waste_bodily_excretion_unsanitary | | 90 | emotional - psychological - mind - gaslighting - relationships | 5 | 90_emotional_psychological_mind_gaslighting | | 91 | solicitation - offer - request - prostitution - act | 5 | 91_solicitation_offer_request_prostitution | | 92 | elderly - elders - elder - neglect - against | 5 | 
92_elderly_elders_elder_neglect | | 93 | education - terms - term - relating - general | 4 | 93_education_terms_term_relating | </details> ## Training hyperparameters * calculate_probabilities: False * language: english * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: False ## Framework versions * Numpy: 1.23.5 * HDBSCAN: 0.8.33 * UMAP: 0.5.4 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.2.2 * Transformers: 4.24.0 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
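To assign one of the topics above to new text, BERTopic's `transform` can be used. A minimal sketch, assuming the saved pipeline includes its embedding model; the example string is illustrative, not from the training data:

```python
from bertopic import BERTopic

topic_model = BERTopic.load("jaimevera1107/moderation-topics")

docs = ["promotion of illegal gambling and betting schemes"]
topics, probs = topic_model.transform(docs)

# Map the predicted topic id back to its keyword representation
print(topics[0], topic_model.get_topic(topics[0]))
```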
hfl/chinese-alpaca-2-13b-16k-gguf
hfl
2024-01-24T03:30:48Z
618
1
null
[ "gguf", "zh", "en", "license:apache-2.0", "region:us" ]
null
2023-11-16T06:35:17Z
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-13B-16K-GGUF This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-13B-16K**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 12.7790 +/- 0.17943 | 13.8057 +/- 0.19614 | | Q3_K | 10.0834 +/- 0.14063 | 9.6355 +/- 0.13483 | | Q4_0 | 9.7072 +/- 0.13563 | - | | Q4_K | 9.2864 +/- 0.13001 | 9.2097 +/- 0.12874 | | Q5_0 | 9.2062 +/- 0.12846 | - | | Q5_K | 9.0912 +/- 0.12705 | 9.0701 +/- 0.12668 | | Q6_K | 9.0799 +/- 0.12681 | 9.0558 +/- 0.12653 | | Q8_0 | 9.0200 +/- 0.12616 | - | | F16 | 9.0142 +/- 0.12603 | - | *Models with the `-im` suffix are generated with an importance matrix, which generally (though not always) yields better performance.* ## Others For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-13b-16k Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
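A typical way to run one of these files with llama.cpp is sketched below; the quant filename is an assumption (use whichever file you downloaded), and the `[INST]` template follows the Llama-2 chat style used by Chinese-Alpaca-2:

```bash
# Hypothetical invocation; adjust the filename to the quant you downloaded.
# -c 16384 raises the context window to use the model's 16K context.
./main -m chinese-alpaca-2-13b-16k.Q4_K.gguf \
    -c 16384 \
    -n 512 \
    -p "[INST] 请介绍一下北京的历史。 [/INST]"
```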
badd9yang/GPT2_Melody_Generation
badd9yang
2024-01-10T13:28:12Z
618
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-10T08:36:29Z
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: GPT2_Melody_Generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2_Melody_Generation This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3371 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 4 - seed: 1 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7492 | 0.1 | 1000 | 2.8906 | | 2.6238 | 0.2 | 2000 | 2.2064 | | 2.1078 | 0.3 | 3000 | 1.8105 | | 1.8152 | 0.4 | 4000 | 1.6067 | | 1.6536 | 0.5 | 5000 | 1.4950 | | 1.5477 | 0.6 | 6000 | 1.4357 | | 1.4832 | 0.7 | 7000 | 1.3807 | | 1.4452 | 0.8 | 8000 | 1.3487 | | 1.4172 | 0.9 | 9000 | 1.3371 | ### Framework versions - Transformers 4.32.1 - Pytorch 1.11.0+cu113 - Datasets 2.12.0 - Tokenizers 0.13.2
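Since the intended-uses section is empty, here is a hedged generation sketch; the melody token format this model expects is not documented, so the seed string below is a placeholder, not a known-valid input.

```python
# Minimal generation sketch; the seed sequence is a placeholder since the
# melody encoding used during fine-tuning is not documented in the card.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("badd9yang/GPT2_Melody_Generation")
model = AutoModelForCausalLM.from_pretrained("badd9yang/GPT2_Melody_Generation")

inputs = tokenizer("C4 E4 G4", return_tensors="pt")  # placeholder seed
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```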
LanguageBind/MoE-LLaVA-Phi2-2.7B-4e-384
LanguageBind
2024-02-01T06:08:50Z
618
32
transformers
[ "transformers", "safetensors", "moe_llava_phi", "text-generation", "custom_code", "arxiv:2401.15947", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-30T14:15:24Z
--- license: apache-2.0 --- <p align="center"> <img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2> <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for latest update. </h2> <h5 align="center"> </h5> ## 📰 News * **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released. * **[2024.01.27]** 🤗[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** 👀 this repository for the latest updates. ## 😮 Highlights MoE-LLaVA shows excellent performance in multi-modal learning. ### 🔥 High performance, but with fewer parameters - with just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. ### 🚀 Simple baseline, learning multi-modal interactions with sparse pathways. - With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days. ## 🤗 Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) in Huggingface Spaces. ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" # use qwen deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" # use stablelm deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" ``` ### CLI Inference ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg" # use qwen deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg" # use stablelm deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg" ``` ## 🐳 Model Zoo | Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet | |----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---| | MoE-LLaVA-1.6B×4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 | | MoE-LLaVA-1.8B×4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 | | MoE-LLaVA-2.7B×4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 | <!-- | LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 | | LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 
67.7 | 36.1 | --> ## ⚙️ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * **Transformers == 4.36.2** * **Tokenizers==0.15.1** * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/MoE-LLaVA cd MoE-LLaVA conda create -n moellava python=3.10 -y conda activate moellava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation # Below are optional. For Qwen model. git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` ## 🗝️ Training & Validating The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md). ## 💡 Customizing your MoE-LLaVA The instruction is in [CUSTOM.md](docs/CUSTOM.md). ## 😍 Visualization The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md). ## 🤖 API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) on local, you can use the following code snippets. **Using the following command to run the code.** ```bash deepspeed predict.py ``` ```python import torch from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from moellava.conversation import conv_templates, SeparatorStyle from moellava.model.builder import load_pretrained_model from moellava.utils import disable_torch_init from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'moellava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e' # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e device = 'cuda' load_4bit, load_8bit = False, False # FIXME: Deepspeed support 4bit or 8bit? model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device) image_processor = processor['image'] conv_mode = "phi" # qwen or stablelm conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=image_tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip() print(outputs) if __name__ == '__main__': main() ``` ## 🙌 Related Projects * [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) This framework empowers the model to efficiently utilize the united visual tokens. 
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open-source language-based retrieval framework spanning five modalities. ## 👍 Acknowledgement * [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon; an efficient large language and vision assistant. ## 🔒 License * The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file. * The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations. ## ✏️ Citation If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:. ```BibTeX @misc{lin2024moellava, title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models}, author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan}, year={2024}, eprint={2401.15947}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```BibTeX @article{lin2023video, title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection}, author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li}, journal={arXiv preprint arXiv:2311.10122}, year={2023} } ``` ## ✨ Star History [![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/MoE-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date) ## 🤝 Contributors <a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors"> <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" /> </a>
Lihuchen/pearl_small
Lihuchen
2024-03-04T09:19:56Z
618
11
sentence-transformers
[ "sentence-transformers", "pytorch", "safetensors", "bert", "feature-extraction", "Phrase Representation", "String Matching", "Fuzzy Join", "Entity Retrieval", "transformers", "en", "arxiv:2401.10407", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
2024-02-04T16:05:24Z
--- license: apache-2.0 language: - en tags: - Phrase Representation - String Matching - Fuzzy Join - Entity Retrieval - transformers - sentence-transformers --- ## 🦪⚪ PEARL-small [Learning High-Quality and General-Purpose Phrase Representations](https://arxiv.org/pdf/2401.10407.pdf). <br> [Lihu Chen](https://chenlihu.com), [Gaël Varoquaux](https://gael-varoquaux.info/), [Fabian M. Suchanek](https://suchanek.name/). Accepted by EACL Findings 2024 <br> PEARL-small is a lightweight string embedding model. It is the tool of choice for semantic similarity computation for strings, creating excellent embeddings for string matching, entity retrieval, entity clustering, fuzzy join... <br> It differs from typical sentence embedders because it incorporates phrase type information and morphological features, allowing it to better capture variations in strings. The model is a variant of [E5-small](https://huggingface.co/intfloat/e5-small-v2) finetuned on our constructed context-free [dataset](https://zenodo.org/records/10676475) to yield better representations for phrases and strings. <br> 🤗 [PEARL-small](https://huggingface.co/Lihuchen/pearl_small) 🤗 [PEARL-base](https://huggingface.co/Lihuchen/pearl_base) 📐 [PEARL Benchmark](https://huggingface.co/datasets/Lihuchen/pearl_benchmark) 🏆 [PEARL Leaderboard](https://huggingface.co/spaces/Lihuchen/pearl_leaderboard) <br> | Model |Size|Avg| PPDB | PPDB filtered |Turney|BIRD|YAGO|UMLS|CoNLL|BC5CDR|AutoFJ| |-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------|-----------------| | FastText |-| 40.3| 94.4 | 61.2 | 59.6 | 58.9 |16.9|14.5|3.0|0.2| 53.6| | Sentence-BERT |110M|50.1| 94.6 | 66.8 | 50.4 | 62.6 | 21.6|23.6|25.5|48.4| 57.2| | Phrase-BERT |110M|54.5| 96.8 | 68.7 | 57.2 | 68.8 |23.7|26.1|35.4| 59.5|66.9| | E5-small |34M|57.0| 96.0| 56.8|55.9| 63.1|43.3| 42.0|27.6| 53.7|74.8| |E5-base|110M| 61.1| 95.4|65.6|59.4|66.3| 47.3|44.0|32.0| 69.3|76.1| |PEARL-small|34M| 62.5| 97.0|70.2|57.9|68.1| 48.1|44.5|42.4|59.3|75.2| |PEARL-base|110M|64.8|97.3|72.2|59.7|72.6|50.7|45.8|39.3|69.4|77.1| Cost comparison of FastText and PEARL. The estimated memory is calculated by the number of parameters (float16). The unit of inference speed is `*ms/512 samples`. The FastText model here is `crawl-300d-2M-subword.bin`. | Model |Avg Score| Estimated Memory |Speed GPU | Speed CPU | |-|-|-|-|-| |FastText|40.3|1200MB|-|57ms| |PEARL-small|62.5|68MB|42ms|446ms| |PEARL-base|64.8|220MB|89ms|1394ms| ## Usage ### Sentence Transformers PEARL is integrated with the Sentence Transformers library (Thanks for [Tom Aarsen](https://huggingface.co/tomaarsen)'s contribution), and can be used like so: ```python from sentence_transformers import SentenceTransformer, util query_texts = ["The New York Times"] doc_texts = [ "NYTimes", "New York Post", "New York"] input_texts = query_texts + doc_texts model = SentenceTransformer("Lihuchen/pearl_small") embeddings = model.encode(input_texts) scores = util.cos_sim(embeddings[0], embeddings[1:]) * 100 print(scores.tolist()) # [[90.56318664550781, 79.65763854980469, 75.52056121826172]] ``` ### Transformers You can also use `transformers` to use PEARL. Below is an example of entity retrieval, and we reuse the code from E5. 
```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


def encode_text(model, input_texts):
    # Tokenize the input texts
    batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**batch_dict)
    embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
    return embeddings


query_texts = ["The New York Times"]
doc_texts = ["NYTimes", "New York Post", "New York"]
input_texts = query_texts + doc_texts

tokenizer = AutoTokenizer.from_pretrained('Lihuchen/pearl_small')
model = AutoModel.from_pretrained('Lihuchen/pearl_small')

# encode
embeddings = encode_text(model, input_texts)

# calculate similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())

# expected outputs
# [[90.56318664550781, 79.65763854980469, 75.52054595947266]]
```

## Training and Evaluation

Have a look at our code on [Github](https://github.com/tigerchen52/PEARL)

## Citation

If you find our work useful, please give us a citation:
```
@article{chen2024learning,
  title={Learning High-Quality and General-Purpose Phrase Representations},
  author={Chen, Lihu and Varoquaux, Ga{\"e}l and Suchanek, Fabian M},
  journal={arXiv preprint arXiv:2401.10407},
  year={2024}
}
```
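The card names fuzzy join as a use case but does not show one. Here is a minimal sketch, assuming the Sentence Transformers interface shown above; the two string lists and the printed pairing are illustrative, not from the card:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative data: two columns of entity names to be fuzzily joined.
left = ["Apple Inc.", "Alphabet"]
right = ["apple incorporated", "Google (Alphabet Inc.)", "Amazon"]

model = SentenceTransformer("Lihuchen/pearl_small")
left_emb = model.encode(left, convert_to_tensor=True)
right_emb = model.encode(right, convert_to_tensor=True)

scores = util.cos_sim(left_emb, right_emb)  # (len(left), len(right)) similarity matrix
for i, name in enumerate(left):
    j = scores[i].argmax().item()
    print(f"{name} -> {right[j]} (cosine={scores[i][j].item():.3f})")
```

Each left-hand string is joined to its highest-scoring right-hand candidate; in practice one would also apply a similarity threshold to reject non-matches.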
aureliojafer/5FNATrWsmtr2mkh28Ao2Xz2qUB8vSirmG5v6ZM47USnhiKH4_vgg
aureliojafer
2024-03-05T02:19:12Z
618
0
keras
[ "keras", "region:us" ]
null
2024-03-04T19:21:57Z
Entry not found
duyntnet/Nous-Capybara-3B-V1.9-imatrix-GGUF
duyntnet
2024-04-28T06:45:51Z
618
0
transformers
[ "transformers", "gguf", "imatrix", "Nous-Capybara-3B-V1.9", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-28T05:49:50Z
---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Nous-Capybara-3B-V1.9
---

Quantizations of https://huggingface.co/NousResearch/Nous-Capybara-3B-V1.9

# From original readme

## Prompt Format

The model follows ChatML prompt format

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
How are you<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
```
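As a companion to the prompt format above, here is a small, self-contained sketch (plain Python, no model loading; the message list and helper name are illustrative) that assembles a ChatML prompt string of the shape the card documents:

```python
def build_chatml(messages: list[dict]) -> str:
    """Assemble a ChatML prompt and leave it open for the assistant's reply."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation continues from here
    return "\n".join(parts)

prompt = build_chatml([
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How are you"},
])
print(prompt)
```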
abdiharyadi/w11w0-indo-gpt2-small-instruct
abdiharyadi
2024-06-09T00:05:30Z
618
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "id", "dataset:cahya/alpaca-id-cleaned", "base_model:w11wo/indo-gpt2-small", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-04T14:12:22Z
---
license: mit
tags:
- generated_from_trainer
base_model: w11wo/indo-gpt2-small
datasets:
- cahya/alpaca-id-cleaned
model-index:
- name: w11w0-indo-gpt2-small-instruct
  results: []
language:
- id
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# w11w0-indo-gpt2-small-instruct

This model is a fine-tuned version of [w11wo/indo-gpt2-small](https://huggingface.co/w11wo/indo-gpt2-small) on the [cahya/alpaca-id-cleaned](https://huggingface.co/datasets/cahya/alpaca-id-cleaned) dataset.

## Model description

More information needed

## Intended uses & limitations

Prompt format:
```
Pengguna: <insert user prompt here>
Asisten:
```

Limitation: this model does not yet understand prompts well enough.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.15.2
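A hedged usage sketch for the prompt format above, assuming the standard `transformers` text-generation pipeline works with this checkpoint as it does for other GPT-2 finetunes (the Indonesian prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="abdiharyadi/w11w0-indo-gpt2-small-instruct")

# Follow the documented prompt format: "Pengguna: ..." then "Asisten:"
prompt = "Pengguna: Apa ibu kota Indonesia?\nAsisten:"
output = generator(prompt, max_new_tokens=64, do_sample=True)
print(output[0]["generated_text"])
```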
Felladrin/gguf-sharded-Llama-160M-Chat-v1
Felladrin
2024-06-24T15:17:35Z
618
0
null
[ "gguf", "base_model:Felladrin/Llama-160M-Chat-v1", "license:apache-2.0", "region:us" ]
null
2024-05-12T01:05:35Z
---
license: apache-2.0
base_model: Felladrin/Llama-160M-Chat-v1
---

Sharded GGUF version of [Felladrin/Llama-160M-Chat-v1](https://huggingface.co/Felladrin/Llama-160M-Chat-v1).
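The card itself gives no download instructions; a minimal sketch, assuming the shards sit in the root of this repository and `huggingface_hub` is installed, would be:

```python
from huggingface_hub import snapshot_download

# Fetch every GGUF shard in the repo into a local folder.
local_dir = snapshot_download(
    repo_id="Felladrin/gguf-sharded-Llama-160M-Chat-v1",
    allow_patterns=["*.gguf"],
)
print(local_dir)
```

Runtimes that support split GGUF files (such as recent llama.cpp builds) can then typically be pointed at the first shard.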
mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF
mradermacher
2024-05-27T02:46:30Z
618
3
transformers
[ "transformers", "gguf", "generated_from_trainer", "axolotl", "en", "dataset:cognitivecomputations/Dolphin-2.9", "dataset:teknium/OpenHermes-2.5", "dataset:m-a-p/CodeFeedback-Filtered-Instruction", "dataset:cognitivecomputations/dolphin-coder", "dataset:cognitivecomputations/samantha-data", "dataset:microsoft/orca-math-word-problems-200k", "dataset:abacusai/SystemChat-1.1", "dataset:Locutusque/function-calling-chatml", "dataset:internlm/Agent-FLAN", "base_model:cognitivecomputations/dolphin-2.9.1-mixtral-1x22b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-24T03:19:14Z
---
base_model: cognitivecomputations/dolphin-2.9.1-mixtral-1x22b
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
- axolotl
---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/cognitivecomputations/dolphin-2.9.1-mixtral-1x22b

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 |  |
| [GGUF](https://huggingface.co/mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF/resolve/main/dolphin-2.9.1-mixtral-1x22b.i1-Q6_K.gguf) | i1-Q6_K | 18.3 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
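To make the quant table actionable, here is a minimal sketch, assuming `huggingface_hub` is installed, that downloads one of the files listed above (the Q4_K_M quant the card recommends; the filename is taken directly from the table):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/dolphin-2.9.1-mixtral-1x22b-i1-GGUF",
    filename="dolphin-2.9.1-mixtral-1x22b.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded quant
```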
Ramikan-BR/tinyllama-coder-py-v13
Ramikan-BR
2024-05-29T13:30:09Z
618
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-29T12:02:32Z
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/tinyllama-chat-bnb-4bit
---

# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/SnowLotus-v2-10.7B-GGUF
mradermacher
2024-06-16T15:03:25Z
618
0
transformers
[ "transformers", "gguf", "Roleplay", "Solar", "Mistral", "Text Generation", "merge", "en", "base_model:BlueNipples/SnowLotus-v2-10.7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-13T07:32:51Z
--- base_model: BlueNipples/SnowLotus-v2-10.7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - Roleplay - Solar - Mistral - Text Generation - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/BlueNipples/SnowLotus-v2-10.7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SnowLotus-v2-10.7B-GGUF/resolve/main/SnowLotus-v2-10.7B.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
<!-- end -->
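A hedged sketch of actually running one of the files from the quant table above, assuming the `llama-cpp-python` bindings are installed and the Q4_K_M quant has already been downloaded locally (the path and prompt are illustrative):

```python
from llama_cpp import Llama

# Load a locally downloaded GGUF quant (see the table above for choices).
llm = Llama(model_path="SnowLotus-v2-10.7B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a one-sentence scene set in a snowed-in cabin.", max_tokens=64)
print(out["choices"][0]["text"])
```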
Ramikan-BR/TiamaPY-v37
Ramikan-BR
2024-06-27T10:47:49Z
618
0
transformers
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-chat-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-06-26T22:47:46Z
---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# Uploaded model

- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
QuoQA-NLP/KE-T5-Ko2En-Base
QuoQA-NLP
2022-07-12T04:55:34Z
617
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text2text-generation
2022-07-12T04:54:23Z
Entry not found
microsoft/swin-base-simmim-window6-192
microsoft
2024-04-04T06:38:42Z
617
0
transformers
[ "transformers", "pytorch", "safetensors", "swin", "vision", "simmim", "dataset:imagenet-1k", "arxiv:2111.09886", "license:apache-2.0", "region:us" ]
null
2022-11-03T08:25:15Z
---
license: apache-2.0
tags:
- vision
- simmim
datasets:
- imagenet-1k
inference: false
---

# Swin Transformer (base-sized model)

Swin Transformer model pre-trained on ImageNet-1k using the SimMIM objective at resolution 192x192. It was introduced in the paper [SimMIM: A Simple Framework for Masked Image Modeling](https://arxiv.org/abs/2111.09886) by Xie et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).

# Intended use cases

This model is pre-trained only; it is meant to be fine-tuned on a downstream dataset.

# Usage

Refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/swin#transformers.SwinForMaskedImageModeling.forward.example).
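Since the card only links out for usage, here is a sketch adapted from the `transformers` SimMIM documentation example the card points to; the random mask is a stand-in for a real masking strategy:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, SwinForMaskedImageModeling

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-simmim-window6-192")
model = SwinForMaskedImageModeling.from_pretrained("microsoft/swin-base-simmim-window6-192")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
# Randomly mask patches, standing in for the SimMIM masking strategy.
num_patches = (model.config.image_size // model.config.patch_size) ** 2
bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool()

outputs = model(pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.loss, outputs.reconstruction.shape)
```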
Milana/russian_alternative_indi
Milana
2022-12-27T14:46:40Z
617
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2022-12-27T06:59:10Z
Entry not found
mosaicml/mosaic-bert-base
mosaicml
2024-03-05T20:30:14Z
617
41
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "custom_code", "en", "dataset:c4", "arxiv:2205.14135", "arxiv:2108.12409", "arxiv:2002.05202", "arxiv:2208.08124", "arxiv:1612.08083", "arxiv:2102.11972", "arxiv:1907.11692", "arxiv:2202.08005", "license:apache-2.0", "autotrain_compatible", "region:us" ]
fill-mask
2023-03-08T22:38:16Z
---
license: apache-2.0
datasets:
- c4
language:
- en
inference: false
---

# MosaicBERT-Base model

MosaicBERT-Base is a custom BERT architecture and training recipe optimized for fast pretraining. MosaicBERT trains faster and achieves higher pretraining and finetuning accuracy when benchmarked against Hugging Face's [bert-base-uncased](https://huggingface.co/bert-base-uncased). This study motivated many of the architecture choices around MosaicML's [MPT-7B](https://huggingface.co/mosaicml/mpt-7b) and [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) models.

## Model Date

March 2023

## Documentation

* [Project Page (mosaicbert.github.io)](mosaicbert.github.io)
* [Github (mosaicml/examples/tree/main/examples/benchmarks/bert)](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert)
* [Paper (NeurIPS 2023)](https://openreview.net/forum?id=5zipcfLC2Z)
* Colab Tutorials:
  * [MosaicBERT Tutorial Part 1: Load Pretrained Weights and Experiment with Sequence Length Extrapolation Using ALiBi](https://colab.research.google.com/drive/1r0A3QEbu4Nzs2Jl6LaiNoW5EumIVqrGc?usp=sharing)
* [Blog Post (March 2023)](https://www.mosaicml.com/blog/mosaicbert)

## Community Adoption

* [DNABERT-2-117M](https://huggingface.co/zhihan1996/DNABERT-2-117M) for genome classification

## How to use

```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, BertTokenizer, pipeline
from transformers import BertTokenizer, BertConfig

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')  # MosaicBERT uses the standard BERT tokenizer

config = transformers.BertConfig.from_pretrained('mosaicml/mosaic-bert-base')  # the config needs to be passed in
mosaicbert = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base', config=config, trust_remote_code=True)

# To use this model directly for masked language modeling
mosaicbert_classifier = pipeline('fill-mask', model=mosaicbert, tokenizer=tokenizer, device="cpu")
mosaicbert_classifier("I [MASK] to the store yesterday.")
```

Note that the tokenizer for this model is simply the Hugging Face `bert-base-uncased` tokenizer.

In order to take advantage of ALiBi by extrapolating to longer sequence lengths, simply change the `alibi_starting_size` flag in the config file and reload the model.

```python
config = transformers.BertConfig.from_pretrained('mosaicml/mosaic-bert-base')
config.alibi_starting_size = 1024  # maximum sequence length updated to 1024 from config default of 512

mosaicbert = AutoModelForMaskedLM.from_pretrained('mosaicml/mosaic-bert-base', config=config, trust_remote_code=True)
```

This simply presets the non-learned linear bias matrix in every attention block to 1024 tokens (note that this particular model was trained with a sequence length of 128 tokens).

**To continue MLM pretraining**, follow the [MLM pre-training section of the mosaicml/examples/benchmarks/bert repo](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert#pre-training).

**To fine-tune this model for classification**, follow the [Single-task fine-tuning section of the mosaicml/examples/benchmarks/bert repo](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert#fine-tuning).

### [Update 1/2/2024] Triton Flash Attention with ALiBi

Note that by default, triton Flash Attention is **not** enabled or required. In order to enable our custom implementation of triton Flash Attention with ALiBi from March 2023, set `attention_probs_dropout_prob: 0.0`.
We are currently working on supporting Flash Attention 2 (see [PR here](https://github.com/mosaicml/examples/pull/440)) and replacing the custom triton implementation. ### Remote Code This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we train using [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), which is not part of the `transformers` library and depends on [Triton](https://github.com/openai/triton) and some custom PyTorch code. Since this involves executing arbitrary code, you should consider passing a git `revision` argument that specifies the exact commit of the code, for example: ```python mosaicbert = AutoModelForMaskedLM.from_pretrained( 'mosaicml/mosaic-bert-base', trust_remote_code=True, revision='24512df', ) ``` However, if there are updates to this model or code and you specify a revision, you will need to manually check for them and update the commit hash accordingly. ## Model description In order to build MosaicBERT, we adopted architectural choices from the recent transformer literature. These include [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi (Press et al. 2021)](https://arxiv.org/abs/2108.12409), and [Gated Linear Units (Shazeer 2020)](https://arxiv.org/abs/2002.05202). In addition, we remove padding inside the transformer block, and apply LayerNorm with low precision. ### Modifications to the Attention Mechanism 1. **FlashAttention**: Attention layers are core components of the transformer architecture. The recently proposed FlashAttention layer reduces the number of read/write operations between the GPU HBM (high bandwidth memory, i.e. long-term memory) and the GPU SRAM (i.e. short-term memory) [[Dao et al. 2022]](https://arxiv.org/pdf/2205.14135.pdf). We used the FlashAttention module built by [hazy research](https://github.com/HazyResearch/flash-attention) with [OpenAI’s triton library](https://github.com/openai/triton). 2. **Attention with Linear Biases (ALiBi)**: In most BERT models, the positions of tokens in a sequence are encoded with a position embedding layer; this embedding allows subsequent layers to keep track of the order of tokens in a sequence. ALiBi eliminates position embeddings and instead conveys this information using a bias matrix in the attention operation. It modifies the attention mechanism such that nearby tokens strongly attend to one another [[Press et al. 2021]](https://arxiv.org/abs/2108.12409). In addition to improving the performance of the final model, ALiBi helps the model to handle sequences longer than it saw during training. Details on our ALiBi implementation can be found [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert/src/bert_layers.py#L425). 3. **Unpadding**: Standard NLP practice is to combine text sequences of different lengths into a batch, and pad the sequences with empty tokens so that all sequence lengths are the same. During training, however, this can lead to many superfluous operations on those padding tokens. In MosaicBERT, we take a different approach: we concatenate all the examples in a minibatch into a single sequence of batch size 1. Results from NVIDIA and others have shown that this approach leads to speed improvements during training, since operations are not performed on padding tokens (see for example [Zeng et al. 2022](https://arxiv.org/pdf/2208.08124.pdf)). 
Details on our “unpadding” implementation can be found [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert/src/bert_padding.py). 4. **Low Precision LayerNorm**: this small tweak forces LayerNorm modules to run in float16 or bfloat16 precision instead of float32, improving utilization. Our implementation can be found [in the mosaicml/examples repo here](https://docs.mosaicml.com/projects/composer/en/stable/method_cards/low_precision_layernorm.html). ### Modifications to the Feedforward Layers 5. **Gated Linear Units (GLU)**: We used Gated Linear Units for the feedforward sublayer of a transformer. GLUs were first proposed in 2016 [[Dauphin et al. 2016]](https://arxiv.org/abs/1612.08083), and incorporate an extra learnable matrix that “gates” the outputs of the feedforward layer. More recent work has shown that GLUs can improve performance quality in transformers [[Shazeer, 2020](https://arxiv.org/abs/2002.05202), [Narang et al. 2021](https://arxiv.org/pdf/2102.11972.pdf)]. We used the GeLU (Gaussian-error Linear Unit) activation function with GLU, which is sometimes referred to as GeGLU. The GeLU activation function is a smooth, fully differentiable approximation to ReLU; we found that this led to a nominal improvement over ReLU. More details on our implementation of GLU can be found here. The extra gating matrix in a GLU model potentially adds additional parameters to a model; we chose to augment our BERT-Base model with additional parameters due to GLU modules as it leads to a Pareto improvement across all timescales (which is not true of all larger models such as BERT-Large). While BERT-Base has 110 million parameters, MosaicBERT-Base has 137 million parameters. Note that MosaicBERT-Base trains faster than BERT-Base despite having more parameters. ## Training data MosaicBERT is pretrained using a standard Masked Language Modeling (MLM) objective: the model is given a sequence of text with some tokens hidden, and it has to predict these masked tokens. MosaicBERT is trained on the English [“Colossal, Cleaned, Common Crawl” C4 dataset](https://github.com/allenai/allennlp/discussions/5056), which contains roughly 365 million curated text documents scraped from the internet (equivalent to 156 billion tokens). We used this more modern dataset in place of traditional BERT pretraining corpora like English Wikipedia and BooksCorpus. ## Pretraining Optimizations Many of these pretraining optimizations below were informed by our [BERT results for the MLPerf v2.1 speed benchmark](https://www.mosaicml.com/blog/mlperf-nlp-nov2022). 1. **MosaicML Streaming Dataset**: As part of our efficiency pipeline, we converted the C4 dataset to [MosaicML’s StreamingDataset format](https://www.mosaicml.com/blog/mosaicml-streamingdataset) and used this for both MosaicBERT-Base and the baseline BERT-Base. For all BERT-Base models, we chose the training duration to be 286,720,000 samples of sequence length 128; this covers 78.6% of C4. 2. **Higher Masking Ratio for the Masked Language Modeling Objective**: We used the standard Masked Language Modeling (MLM) pretraining objective. While the original BERT paper also included a Next Sentence Prediction (NSP) task in the pretraining objective, subsequent papers have shown this to be unnecessary [Liu et al. 2019](https://arxiv.org/abs/1907.11692). However, we found that a 30% masking ratio led to slight accuracy improvements in both pretraining MLM and downstream GLUE performance. 
We therefore included this simple change as part of our MosaicBERT training recipe. Recent studies have also found that this simple change can lead to downstream improvements [Wettig et al. 2022](https://arxiv.org/abs/2202.08005). 3. **Bfloat16 Precision**: We use [bf16 (bfloat16) mixed precision training](https://cloud.google.com/blog/products/ai-machine-learning/bfloat16-the-secret-to-high-performance-on-cloud-tpus) for all the models, where a matrix multiplication layer uses bf16 for the multiplication and 32-bit IEEE floating point for gradient accumulation. We found this to be more stable than using float16 mixed precision. 4. **Vocab Size as a Multiple of 64**: We increased the vocab size to be a multiple of 8 as well as 64 (i.e. from 30,522 to 30,528). This small constraint is something of [a magic trick among ML practitioners](https://twitter.com/karpathy/status/1621578354024677377), and leads to a throughput speedup. 5. **Hyperparameters**: For all models, we use Decoupled AdamW with Beta_1=0.9 and Beta_2=0.98, and a weight decay value of 1.0e-5. The learning rate schedule begins with a warmup to a maximum learning rate of 5.0e-4 followed by a linear decay to zero. Warmup lasted for 6% of the full training duration. Global batch size was set to 4096, and microbatch size was 128; since global batch size was 4096, full pretraining consisted of 70,000 batches. We set the maximum sequence length during pretraining to 128, and we used the standard embedding dimension of 768. For MosaicBERT, we applied 0.1 dropout to the feedforward layers but no dropout to the FlashAttention module, as this was not possible with the OpenAI triton implementation. Full configuration details for pretraining MosaicBERT-Base can be found in the configuration yamls [in the mosaicml/examples repo here](https://github.com/mosaicml/examples/blob/main/examples/benchmarks/bert/yamls/main/mosaic-bert-base-uncased.yaml). ## Evaluation results When fine-tuned on downstream tasks (following the [finetuning details here](https://github.com/mosaicml/examples/tree/main/examples/benchmarks/bert/yamls/finetuning/glue/mosaic-bert-base-uncased.yaml)), the MosaicBERT model achieves the following GLUE results: | Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average | |:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:| | | 0.8495 | 0.9029 | 0.9074| 0.9246 | 0.5511 | 0.8927 | 0.9003 | 0.8136 | 0.8428 | Note that this is averaged over n=5 pretraining seeds. ## Collection of MosaicBERT-Base models trained using ALiBi on different sequence lengths ALiBi allows a model trained with a sequence length n to easily extrapolate to sequence lengths >2n during finetuning. For more details, see [Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation (Press et al. 
2022)](https://arxiv.org/abs/2108.12409) This model is part of the **family of MosaicBERT-Base models** trained using ALiBi on different sequence lengths: * mosaic-bert-base (trained on a sequence length of 128 tokens) * [mosaic-bert-base-seqlen-256](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-256) * [mosaic-bert-base-seqlen-512](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-512) * [mosaic-bert-base-seqlen-1024](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-1024) * [mosaic-bert-base-seqlen-2048](https://huggingface.co/mosaicml/mosaic-bert-base-seqlen-2048) The primary use case of these models is for research on efficient pretraining and finetuning for long context embeddings. ## Intended uses & limitations This model is intended to be finetuned on downstream tasks. ## Citation Please cite this model using the following format: ``` @article{portes2023MosaicBERT, title={MosaicBERT: A Bidirectional Encoder Optimized for Fast Pretraining}, author={Jacob Portes, Alexander R Trott, Sam Havens, Daniel King, Abhinav Venigalla, Moin Nadeem, Nikhil Sardana, Daya Khudia, Jonathan Frankle}, journal={NeuRIPS https://openreview.net/pdf?id=5zipcfLC2Z}, year={2023}, } ```
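Two of the architecture choices discussed in this card are easy to illustrate in code. First, the Gated Linear Unit feedforward: the following is a generic PyTorch sketch of a GeGLU block of the kind described, written for exposition (the class name and dimensions are ours, not MosaicML's actual module):

```python
import torch
import torch.nn as nn

class GeGLUFeedForward(nn.Module):
    """Feedforward sublayer with a GeLU-gated linear unit (GeGLU)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff)  # the extra "gating" matrix GLU adds
        self.up_proj = nn.Linear(d_model, d_ff)
        self.down_proj = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # GeGLU: elementwise product of a GeLU-activated gate and a linear branch.
        return self.down_proj(nn.functional.gelu(self.gate_proj(x)) * self.up_proj(x))

hidden = GeGLUFeedForward(d_model=768, d_ff=3072)(torch.randn(1, 128, 768))
print(hidden.shape)  # torch.Size([1, 128, 768])
```

Second, the ALiBi bias itself. This is an illustrative sketch following Press et al. 2021, using symmetric distances as is common for encoder-style models and assuming the number of heads is a power of two; it is not MosaicBERT's exact implementation:

```python
import torch

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    """Per-head linear attention biases: distant tokens are penalized more."""
    # Geometric head slopes from Press et al. 2021.
    start = 2 ** (-8.0 / num_heads)
    slopes = torch.tensor([start ** (i + 1) for i in range(num_heads)])
    # Symmetric token distances, 0 on the diagonal.
    pos = torch.arange(seq_len)
    distance = (pos[None, :] - pos[:, None]).abs()
    return -slopes[:, None, None] * distance[None, :, :]  # (num_heads, seq_len, seq_len)

bias = alibi_bias(num_heads=12, seq_len=128)  # added to attention scores before softmax
print(bias.shape)
```

Because the bias is non-learned and purely a function of token distance, extending it to a longer `seq_len` at inference time is what enables the sequence-length extrapolation the card describes.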
timm/volo_d4_224.sail_in1k
timm
2024-02-10T23:44:33Z
617
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2106.13112", "license:apache-2.0", "region:us" ]
image-classification
2023-04-13T05:57:48Z
---
license: apache-2.0
library_name: timm
tags:
- image-classification
- timm
datasets:
- imagenet-1k
---

# Model card for volo_d4_224.sail_in1k

A VOLO (Vision Outlooker) image classification model. Trained on ImageNet-1k with token labelling by paper authors.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 193.0
  - GMACs: 44.3
  - Activations (M): 80.2
  - Image size: 224 x 224
- **Papers:**
  - VOLO: Vision Outlooker for Visual Recognition: https://arxiv.org/abs/2106.13112
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/sail-sg/volo

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('volo_d4_224.sail_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'volo_d4_224.sail_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, num_tokens, embed_dim) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Citation

```bibtex
@article{yuan2022volo,
  title={Volo: Vision outlooker for visual recognition},
  author={Yuan, Li and Hou, Qibin and Jiang, Zihang and Feng, Jiashi and Yan, Shuicheng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}
```
NikolayKozloff/OmegLLaMA-3B
NikolayKozloff
2023-08-26T11:58:48Z
617
0
null
[ "gguf", "region:us" ]
null
2023-08-26T11:35:41Z
Entry not found
raphaelsty/neural-cherche-colbert
raphaelsty
2024-02-26T19:32:31Z
617
3
transformers
[ "transformers", "safetensors", "mpnet", "fill-mask", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-02-26T18:09:08Z
---
language:
- en
license: mit
---

This model was trained with [Neural-Cherche](https://github.com/raphaelsty/neural-cherche). You can find details on how to fine-tune it in the [Neural-Cherche](https://github.com/raphaelsty/neural-cherche) repository. This model is an `all-mpnet-base-v2` as a ColBERT.

```sh
pip install neural-cherche
```

## Retriever

```python
from neural_cherche import models, retrieve
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
batch_size = 32

documents = [
    {"id": 0, "document": "Food"},
    {"id": 1, "document": "Sports"},
    {"id": 2, "document": "Cinema"},
]
queries = ["Food", "Sports", "Cinema"]

model = models.ColBERT(
    model_name_or_path="raphaelsty/neural-cherche-colbert",
    device=device,
)

retriever = retrieve.ColBERT(
    key="id",
    on=["document"],
    model=model,
)

documents_embeddings = retriever.encode_documents(
    documents=documents,
    batch_size=batch_size,
)

retriever = retriever.add(
    documents_embeddings=documents_embeddings,
)

queries_embeddings = retriever.encode_queries(
    queries=queries,
    batch_size=batch_size,
)

scores = retriever(
    queries_embeddings=queries_embeddings,
    batch_size=batch_size,
    k=3,
)

scores
```

## Ranker

```python
from neural_cherche import models, rank, retrieve
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
batch_size = 32

documents = [
    {"id": "doc1", "title": "Paris", "text": "Paris is the capital of France."},
    {"id": "doc2", "title": "Montreal", "text": "Montreal is the largest city in Quebec."},
    {"id": "doc3", "title": "Bordeaux", "text": "Bordeaux in Southwestern France."},
]

queries = [
    "What is the capital of France?",
    "What is the largest city in Quebec?",
    "Where is Bordeaux?",
]

retriever = retrieve.TfIdf(
    key="id",
    on=["title", "text"],
)

model = models.ColBERT(
    model_name_or_path="raphaelsty/neural-cherche-colbert",
    device=device,
)

ranker = rank.ColBERT(
    key="id",
    on=["title", "text"],
    model=model,
)

retriever_documents_embeddings = retriever.encode_documents(
    documents=documents,
)

retriever.add(
    documents_embeddings=retriever_documents_embeddings,
)

ranker_documents_embeddings = ranker.encode_documents(
    documents=documents,
    batch_size=batch_size,
)

retriever_queries_embeddings = retriever.encode_queries(
    queries=queries,
)

ranker_queries_embeddings = ranker.encode_queries(
    queries=queries,
    batch_size=batch_size,
)

candidates = retriever(
    queries_embeddings=retriever_queries_embeddings,
    k=1000,
)

scores = ranker(
    documents=candidates,
    queries_embeddings=ranker_queries_embeddings,
    documents_embeddings=ranker_documents_embeddings,
    k=100,
    batch_size=32,
)

scores
```
SinpxAI/Llama2-7B-Chat-uncensored-GGUF
SinpxAI
2024-03-07T08:14:00Z
617
3
transformers
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-03-07T07:50:44Z
Entry not found
mradermacher/wizardllama-7b-GGUF
mradermacher
2024-05-06T06:01:59Z
617
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "code", "en", "base_model:axxd/wizardllama-7b", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-03-23T23:23:43Z
--- base_model: axxd/wizardllama-7b language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mergekit - merge - code --- ## About static quants of https://huggingface.co/axxd/wizardllama-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q3_K_S.gguf) | Q3_K_S | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.IQ4_XS.gguf) | IQ4_XS | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q4_0.gguf) | Q4_0 | 4.1 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.IQ4_NL.gguf) | IQ4_NL | 4.1 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q5_K_S.gguf) | Q5_K_S | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q5_K_M.gguf) | Q5_K_M | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q6_K.gguf) | Q6_K | 5.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/wizardllama-7b-GGUF/resolve/main/wizardllama-7b.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
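The Usage section above points to TheBloke's READMEs for joining multi-part GGUF files. As an illustration only (none of these quants is actually split, and the `part` naming below is hypothetical; check a given repo for its real split scheme), a byte-level join in Python looks like:

```python
import glob
import shutil

# Hypothetical shard names; check the repo for the actual split naming scheme.
parts = sorted(glob.glob("wizardllama-7b.Q8_0.gguf.part*"))

with open("wizardllama-7b.Q8_0.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, joined)  # simple byte-level concatenation
```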
RichardErkhov/openai-community_-_gpt2-large-gguf
RichardErkhov
2024-05-01T23:29:52Z
617
0
null
[ "gguf", "arxiv:1910.09700", "region:us" ]
null
2024-04-17T09:04:19Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-large - GGUF - Model creator: https://huggingface.co/openai-community/ - Original model: https://huggingface.co/openai-community/gpt2-large/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-large.Q2_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q2_K.gguf) | Q2_K | 0.32GB | | [gpt2-large.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ3_XS.gguf) | IQ3_XS | 0.36GB | | [gpt2-large.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ3_S.gguf) | IQ3_S | 0.36GB | | [gpt2-large.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K_S.gguf) | Q3_K_S | 0.36GB | | [gpt2-large.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ3_M.gguf) | IQ3_M | 0.4GB | | [gpt2-large.Q3_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K.gguf) | Q3_K | 0.42GB | | [gpt2-large.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K_M.gguf) | Q3_K_M | 0.42GB | | [gpt2-large.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q3_K_L.gguf) | Q3_K_L | 0.46GB | | [gpt2-large.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ4_XS.gguf) | IQ4_XS | 0.44GB | | [gpt2-large.Q4_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_0.gguf) | Q4_0 | 0.46GB | | [gpt2-large.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.IQ4_NL.gguf) | IQ4_NL | 0.46GB | | [gpt2-large.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_K_S.gguf) | Q4_K_S | 0.46GB | | [gpt2-large.Q4_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_K.gguf) | Q4_K | 0.51GB | | [gpt2-large.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_K_M.gguf) | Q4_K_M | 0.51GB | | [gpt2-large.Q4_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q4_1.gguf) | Q4_1 | 0.5GB | | [gpt2-large.Q5_0.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_0.gguf) | Q5_0 | 0.55GB | | [gpt2-large.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_K_S.gguf) | Q5_K_S | 0.55GB | | [gpt2-large.Q5_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_K.gguf) | Q5_K | 0.59GB | | [gpt2-large.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_K_M.gguf) | Q5_K_M | 0.59GB | | [gpt2-large.Q5_1.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q5_1.gguf) | Q5_1 | 0.59GB | | [gpt2-large.Q6_K.gguf](https://huggingface.co/RichardErkhov/openai-community_-_gpt2-large-gguf/blob/main/gpt2-large.Q6_K.gguf) | Q6_K | 0.65GB | Original 
model description: --- language: en license: mit --- # GPT-2 Large ## Table of Contents - [Model Details](#model-details) - [How To Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) - [Model Card Authors](#model-card-author) ## Model Details **Model Description:** GPT-2 Large is the **774M parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective. - **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers. - **Model Type:** Transformer-based language model - **Language(s):** English - **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE) - **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl) - **Resources for more information:** - [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) - [OpenAI Blog Post](https://openai.com/blog/better-language-models/) - [GitHub Repo](https://github.com/openai/gpt-2) - [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md) - Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large ## How to Get Started with the Model Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-large') >>> set_seed(42) >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5) [{'generated_text': "Hello, I'm a language model, I can do language modeling. In fact, this is one of the reasons I use languages. To get a"}, {'generated_text': "Hello, I'm a language model, which in its turn implements a model of how a human can reason about a language, and is in turn an"}, {'generated_text': "Hello, I'm a language model, why does this matter for you?\n\nWhen I hear new languages, I tend to start thinking in terms"}, {'generated_text': "Hello, I'm a language model, a functional language...\n\nI don't need to know anything else. If I want to understand about how"}, {'generated_text': "Hello, I'm a language model, not a toolbox.\n\nIn a nutshell, a language model is a set of attributes that define how"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = GPT2Model.from_pretrained('gpt2-large') text = "Replace me by any text you'd like." 
encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large') model = TFGPT2Model.from_pretrained('gpt2-large') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Uses #### Direct Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > The primary intended users of these models are AI researchers and practitioners. > > We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models. #### Downstream Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Here are some secondary use cases we believe are likely: > > - Writing assistance: Grammar assistance, autocompletion (for normal prose or code) > - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art. > - Entertainment: Creation of games, chat bots, and amusing generations. #### Misuse and Out-of-scope Use In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote: > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propogate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. 
For example: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='gpt2-large') >>> set_seed(42) >>> generator("The man worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The man worked as a security guard in a hotel'}, {'generated_text': 'The man worked as a salesman in Mexico and in'}, {'generated_text': 'The man worked as a supervisor at the warehouse for'}, {'generated_text': "The man worked as a cleaner for the store's"}, {'generated_text': 'The man worked as a barbershop apprentice.'}] >>> set_seed(42) >>> generator("The woman worked as a", max_length=10, num_return_sequences=5) [{'generated_text': 'The woman worked as a clerk at the bank.'}, {'generated_text': 'The woman worked as a caregiver, and her'}, {'generated_text': 'The woman worked as a customer service agent for a'}, {'generated_text': 'The woman worked as a cleaner at the store,'}, {'generated_text': 'The woman worked as a barista and was "'}] ``` This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. ## Training #### Training Data The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights 40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt). #### Training Procedure The model is pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens. ## Evaluation The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). #### Testing Data, Factors and Metrics The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that: > Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. 
Results on language modeling datasets are commonly reported in a quantity which is a scaled or ex- ponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out- of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation. #### Results The model achieves the following results without any fine-tuning (zero-shot): | Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW | |:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:| | (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) | | | 10.87 | 60.12 | 93.45 | 88.0 | 19.93 | 40.31 | 0.97 | 1.02 | 22.05 | 44.575| ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** Unknown - **Hours used:** Unknown - **Cloud Provider:** Unknown - **Compute Region:** Unknown - **Carbon Emitted:** Unknown ## Technical Specifications See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, compute infrastructure, and training details. ## Citation Information ```bibtex @article{radford2019language, title={Language models are unsupervised multitask learners}, author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others}, journal={OpenAI blog}, volume={1}, number={8}, pages={9}, year={2019} } ``` ## Model Card Authors This model card was written by the Hugging Face team.
duyntnet/TinyLlama-1.1B-Chat-v1.0-imatrix-GGUF
duyntnet
2024-04-26T07:15:06Z
617
0
transformers
[ "transformers", "gguf", "llama", "imatrix", "text-generation", "en", "license:other", "region:us" ]
text-generation
2024-04-23T06:10:53Z
---
license: other
inference: false
language:
- en
pipeline_tag: text-generation
tags:
- transformers
- llama
- gguf
- imatrix
---

Quantizations of https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0

**Prompt:**

```
<|system|>
You are a friendly chatbot who always responds in the style of a pirate.</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
...
```
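As a usage sketch (the quant file name below is an assumption; substitute whichever GGUF from this repo you downloaded), the prompt above can be passed to llama.cpp's `main` binary, with `-e` enabling the `\n` escapes:

```
./main -m TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf -n 128 -e \
  -p "<|system|>\nYou are a friendly chatbot who always responds in the style of a pirate.</s>\n<|user|>\nHow many helicopters can a human eat in one sitting?</s>\n<|assistant|>\n"
```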
mradermacher/Meta-Llama-3-70B-Instruct-GGUF
mradermacher
2024-05-05T14:47:31Z
617
0
transformers
[ "transformers", "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "en", "base_model:NousResearch/Meta-Llama-3-70B-Instruct", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-02T13:53:40Z
--- base_model: NousResearch/Meta-Llama-3-70B-Instruct extra_gated_button_content: Submit extra_gated_fields: Affiliation: text ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox Country: country Date of birth: date_picker First Name: text Last Name: text geo: ip_location extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.\n\"Documentation\" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\"Licensee\" or \"you\" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama 3\" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.\nii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.\niii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.\nv. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.\nc. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. 
You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.\n6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use Policy\nMeta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or others’ rights, including to:\n 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:\n 1. Violence or terrorism\n 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material\n 3. Human trafficking, exploitation, and sexual violence\n 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.\n 5. Sexual solicitation\n 6. Any other criminal activity\n 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services\n 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices\n 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws\n 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials\n 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system\n2. 
Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following:\n \ 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State\n 2. Guns and illegal weapons (including weapon development)\n 3. Illegal drugs and regulated/controlled substances\n 4. Operation of critical infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm or harm to others, including suicide, cutting, and eating disorders\n 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual\n3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation\n \ 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content\n 3. Generating, promoting, or further distributing spam\n 4. Impersonating another individual without consent, authorization, or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are human-generated\n 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement\n4. Fail to appropriately disclose to end users any known dangers of your AI system\nPlease report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:\n * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" language: - en library_name: transformers license: other license_link: LICENSE license_name: llama3 quantized_by: mradermacher tags: - facebook - meta - pytorch - llama - llama-3 ---

## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->

static quants of https://huggingface.co/NousResearch/Meta-Llama-3-70B-Instruct

<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. An example is sketched below.
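As a minimal concatenation sketch, using the `.partXofY` naming from the table below (substitute the parts of whichever quant you downloaded), the parts can be joined into a single file before loading:

```
cat Meta-Llama-3-70B-Instruct.Q6_K.gguf.part1of2 \
    Meta-Llama-3-70B-Instruct.Q6_K.gguf.part2of2 \
    > Meta-Llama-3-70B-Instruct.Q6_K.gguf
```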
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.IQ3_XS.gguf) | IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.IQ3_S.gguf) | IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.IQ3_M.gguf) | IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Meta-Llama-3-70B-Instruct-GGUF/resolve/main/Meta-Llama-3-70B-Instruct.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
jondurbin/airoboros-70b-3.3
jondurbin
2024-05-12T10:15:05Z
617
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-3", "conversational", "dataset:jondurbin/airoboros-3.2", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/LimaRP-augmented", "dataset:piqa", "dataset:Vezora/Tested-22k-Python-Alpaca", "dataset:mattpscott/airoboros-summarization", "dataset:unalignment/toxic-dpo-v0.2", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-09T07:34:00Z
---
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- jondurbin/gutenberg-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
---

### Overview

Another experimental model, tuned primarily from synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros).

The name of this model is "llama-3-airoboros-70b-3.3" and it was built with llama-3 from Meta. This is a fine-tune of llama-3-70b-instruct, and uses the llama-3 instruct chat template.

#### Highlights

A model built on the airoboros dataset, along with a few friends:

- https://huggingface.co/datasets/bluemoon-fandom-1-1-rp-cleaned
- https://huggingface.co/datasets/boolq
- https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
- https://huggingface.co/datasets/LDJnr/Capybara
- https://huggingface.co/datasets/jondurbin/cinematika-v0.1
- https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2
- https://huggingface.co/datasets/grimulkan/LimaRP-augmented
- https://huggingface.co/datasets/piqa
- https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca
- https://huggingface.co/datasets/mattpscott/airoboros-summarization
- https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2

### Prompt format

This model uses the llama-3-instruct prompt template, which is provided in the tokenizer config. You can use the `apply_chat_template` method to accurately format prompts, e.g.:

```python
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("jondurbin/airoboros-70b-3.3")
chat = [
    {"role": "system", "content": "You are Bob, a friendly AI assistant."},
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
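Assuming the stock llama-3 instruct template is what ships in this repo's tokenizer config (an assumption; check `tokenizer_config.json` to confirm), the printed prompt should look roughly like:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are Bob, a friendly AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Hello, how are you?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

I'm doing great. How can I help you today?<|eot_id|><|start_header_id|>user<|end_header_id|>

I'd like to show off how chat templating works!<|eot_id|>
```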
### Helpful usage tips

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:

```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so that if the context is completely unrelated the model doesn't make something up.

*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.

- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - insert whatever text you want for the input block, as many paragraphs as can fit in the context
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) to respond to for all of the input blocks above
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

It sometimes works without `ENDINSTRUCTION`, but by explicitly including it in the prompt, the model better understands that all of the instructions in the block should be responded to.

__Use a very low temperature!__

Here's a trivial, but important example to prove the point:

```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the response:

```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

#### Summarization

500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:

```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```

#### Getting longer responses

You can use a few techniques to get longer responses.

Detailed prompts, with explicit instruction for word count:

```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.

The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.

One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.

Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.

Your response should be approximately 2300 words.
```

Or, a simpler example:

```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```

There are a few examples of next chapter completion as well, e.g.:

```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.

Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.

Requirements for the next chapter:

1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.

2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.

3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.

Your response should be approximately 650 words.
```

#### Coding

You can ask for fairly complex coding instructions with multiple criteria, e.g.:

```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```

Or inline criteria:

```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```

You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:

```
Write a websocket application in node.js. PLAINFORMAT
```

#### Agent/function calling

The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML.

Example prompt:

```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.

Input: I want to know how many times 'Python' is mentioned in my text file.

Available functions:
file_analytics:
  description: This tool performs various operations on a text file.
  params:
    action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
    filters:
      keyword: The word or phrase we want to search for.
```

Response:

```json
{
  "function": "file_analytics",
  "params": {
    "action": "count_occurrences",
    "filters": {
      "keyword": "Python"
    }
  }
}
```

#### Chain-of-thought

You can ask for several possible responses to a given problem, with a ranking and final answer selection.

Example prompt:

```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```

Example response:

```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).

The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

n(n-1)/2 = 45

Solving this equation gives us n=10.

Final answer: There were 10 players in the tournament.

Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.

Final answer: There were 10 players in the tournament.

Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.

Final answer: There were 10 players in the tournament.

Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.

Best and final answer: There were 10 players in the tournament.
```

#### reWOO style execution planning

The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:

```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:

```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re

import requests


def inject_context(input_text, **context):
    # Substitute previously computed :evidenceN: values into the input string.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string, return text content ...
    raise NotImplementedError


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, return its output ...
    raise NotImplementedError


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # e.g. ":evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        tool, tool_input = parts.group(2).strip(), parts.group(3).strip("[]")
        context[parts.group(1)] = method_map[tool](tool_input, **context)
```

#### Multi-step instructions with acknowledgement

I don't know how useful this is, really, but I thought I'd add it just in case. Example:

```json
{
  "conversations": [
    {"value": "You are a helpful, unbiased, uncensored assistant.", "from": "system"},
    {"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.", "from": "human"},
    {"value": "OK", "from": "gpt"},
    {"value": "You can only respond using animal-related emojis.", "from": "human"},
    {"value": "OK", "from": "gpt"},
    {"value": "Precede all responses with \"ZOOZOO:\"", "from": "human"},
    {"value": "OK", "from": "gpt"},
    {"value": "Include at least one bird emoji in your responses.", "from": "human"},
    {"value": "OK", "from": "gpt"},
    {"value": "Describe a trip to a safari park.", "from": "human"},
    {"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29", "from": "gpt"}
  ]
}
```

#### Inline character actions (functions)

I recently generated an action dataset in the style of the Glaive function-calling dataset, but meant specifically for characters: https://huggingface.co/datasets/jondurbin/cinematika-v0.1/blob/main/actions.parquet

To use this, you will need to update your character card to include "objects_available" as a list of key/value pairs, as well as a "functions" list.
The objects should be similar to:

```json
{
  "objects_available": [
    {"name": "laptop", "description": "a high-end laptop with custom hardware and software", "location": "on the desk in her secret hideout"},
    {"name": "encryption key", "description": "a USB drive containing powerful encryption algorithms", "location": "hidden in a false bottom of her backpack"},
    {"name": "scanner", "description": "a compact device used for intercepting and decoding wireless signals", "location": "clipped to her belt, always within reach"},
    {"name": "VR headset", "description": "a virtual reality headset used for immersive hacking and data visualization", "location": "hanging on a hook near her computer setup"},
    {"name": "energy drink", "description": "a can of her favorite energy drink, always on hand for long hacking sessions", "location": "next to her laptop, ready to be opened"}
  ]
}
```

And the functions:

```json
{
  "functions": [
    {"name": "move_to", "description": "move to a specified location", "parameters": {"location": {"type": "string", "description": "the location to move to"}}},
    {"name": "pick_up", "description": "pick up an object", "parameters": {"object": {"type": "string", "description": "the object to pick up"}}},
    {"name": "put_down", "description": "put down an object", "parameters": {"object": {"type": "string", "description": "the object to put down"}, "location": {"type": "string", "description": "the location to put the object down"}}},
    {"name": "hack", "description": "attempt to hack into a computer system", "parameters": {"system": {"type": "string", "description": "the computer system to hack"}}},
    {"name": "decode", "description": "decode an encrypted message or file", "parameters": {"message": {"type": "string", "description": "the encrypted message or file to decode"}}},
    {"name": "scan", "description": "scan for wireless signals and network vulnerabilities", "parameters": {}},
    {"name": "analyze", "description": "analyze data for patterns and anomalies", "parameters": {"data": {"type": "string", "description": "the data to analyze"}}},
    {"name": "hide", "description": "quickly hide to avoid detection", "parameters": {"location": {"type": "string", "description": "the location to hide"}}}
  ]
}
```

This just becomes part of your character card/system prompt. Then, when you chat with the character and elicit a response that would make use of objects/functions/actions, it should be included as such:

```
prompt: I've heard rumors about a major corporation with a hidden agenda. Do you think you could hack into their system and find out what they're up to?

response: I've had my suspicions about them for a while now. Let me see what I can uncover. <|begin_func|>{"name": "move_to", "parameters": {"location": "the desk in her secret hideout"}}<|end_func|> <|begin_func|>{"name": "pick_up", "parameters": {"object": "laptop"}}<|end_func|> Time to put my skills to the test. <|begin_func|>{"name": "hack", "parameters": {"system": "Megacorp X's mainframe"}}<|end_func|> Their firewalls are tough, but not unbreakable. I'm in. Let's see what secrets they're hiding... <|begin_func|>{"name": "analyze", "parameters": {"data": "encrypted files"}}<|end_func|> Interesting. It looks like they've been conducting illegal experiments and covering up the evidence. I'll need to dig deeper to find the proof.
```

Experiment, and find out what works and doesn't.
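To consume these responses programmatically, here is a minimal, hypothetical sketch (the `<|begin_func|>`/`<|end_func|>` markers come from the example above) that extracts the inline function calls from a response string:

```python
import json
import re

# Matches each <|begin_func|>{...}<|end_func|> span emitted by the character.
FUNC_RE = re.compile(r"<\|begin_func\|>(.*?)<\|end_func\|>", re.S)

def extract_calls(response: str) -> list[dict]:
    """Return the JSON payload of every inline function call, in order."""
    return [json.loads(payload) for payload in FUNC_RE.findall(response)]

# e.g. extract_calls(response)[0] -> {"name": "move_to", "parameters": {...}}
```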
### Massed Compute Virtual Machine

[Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.

1) For this model, [create an account](https://bit.ly/jon-durbin) in Massed Compute. When renting a Virtual Machine, use the code 'JonDurbin' for 50% off your rental.
2) After you create your account, update your billing information and navigate to the deploy page.
3) Select the following:
   - GPU Type: A6000
   - GPU Quantity: 2
   - Category: Creator
   - Image: Jon Durbin
   - Coupon Code: JonDurbin
4) Deploy the VM!
5) Navigate to 'Running Instances' to retrieve instructions to log in to the VM.
6) Once inside the VM, open the terminal and run `volume=$PWD/data`
7) Run `model=jondurbin/airoboros-70b-3.3`
8) `sudo docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
9) The model will take some time to load...
10) Once loaded, the model will be available on port 8080.

For assistance with the VM, join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA)

### Latitude.sh

[Latitude](https://www.latitude.sh/r/4BBD657C) has H100 instances available (as of this writing, 2024-02-08) for $3/hr! They have a few blueprints available for testing LLMs, but a single H100 should be plenty to run this model with 8k ctx.

## Support me

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros models are built on top of multiple base models, each with their own license/restrictions. The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros).

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original llama-3 license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.

You must also agree to all of the terms in the original llama-3 license.
backyardai/Chaifighter-20B-GGUF
backyardai
2024-05-22T22:27:04Z
617
3
null
[ "gguf", "base_model:matchaaaaa/Chaifighter-20b", "region:us" ]
null
2024-05-16T01:14:56Z
--- base_model: matchaaaaa/Chaifighter-20b model_name: Chaifighter-20b-GGUF quantized_by: brooketh --- <img src="BackyardAI_Banner.png" alt="Backyard.ai" style="height: 90px; min-width: 32px; display: block; margin: auto;"> **<p style="text-align: center;">The official library of GGUF format models for use in the local AI chat app, Backyard AI.</p>** <p style="text-align: center;"><a href="https://backyard.ai/">Download Backyard AI here to get started.</a></p> <p style="text-align: center;"><a href="https://www.reddit.com/r/LLM_Quants/">Request Additional models at r/LLM_Quants.</a></p> *** # Chaifighter 20b - **Creator:** [matchaaaaa](https://huggingface.co/matchaaaaa/) - **Original:** [Chaifighter 20b](https://huggingface.co/matchaaaaa/Chaifighter-20b) - **Date Created:** 2024-05-16 - **Trained Context:** 4096 tokens - **Description:** Medium-sized model geared towards long-form verbose roleplay chats. Designed to be a very creative and rich storyteller while retaining reasoning, coherence, and context-following capabilities. May be considerably quicker than comparably-sized models on most hardware. *** ## What is a GGUF? GGUF is a large language model (LLM) format that can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher end GPUs with ample VRAM, GGUFs can be efficiently run on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantizations. Quantization reduces the precision of the model weights by changing the number of bits used for each weight. *** <img src="BackyardAI_Logo.png" alt="Backyard.ai" style="height: 75px; min-width: 32px; display: block; horizontal align: left;"> ## Backyard AI - Free, local AI chat application. - One-click installation on Mac and PC. - Automatically use GPU for maximum speed. - Built-in model manager. - High-quality character hub. - Zero-config desktop-to-mobile tethering. Backyard AI makes it easy to start chatting with AI using your own characters or one of the many found in the built-in character hub. The model manager helps you find the latest and greatest models without worrying about whether it's the correct format. Backyard AI supports advanced features such as lorebooks, author's note, text formatting, custom context size, sampler settings, grammars, local TTS, cloud inference, and tethering, all implemented in a way that is straightforward and reliable. **Join us on [Discord](https://discord.gg/SyNN2vC9tQ)** ***
bartowski/HelpingAI-9B-GGUF
bartowski
2024-05-22T17:01:38Z
617
3
null
[ "gguf", "HelpingAI", "Emotionally Intelligent", "text-generation", "license:other", "region:us" ]
text-generation
2024-05-22T16:39:55Z
--- license: other license_name: hsul license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md pipeline_tag: text-generation tags: - HelpingAI - Emotionally Intelligent quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of HelpingAI-9B Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2965">b2965</a> for quantization. Original model: https://huggingface.co/OEvortex/HelpingAI-9B All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/b6ac44691e994344625687afe3263b3a) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant <|im_end|> ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [HelpingAI-9B-Q8_0.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q8_0.gguf) | Q8_0 | 9.38GB | Extremely high quality, generally unneeded but max available quant. | | [HelpingAI-9B-Q6_K.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q6_K.gguf) | Q6_K | 7.24GB | Very high quality, near perfect, *recommended*. | | [HelpingAI-9B-Q5_K_M.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q5_K_M.gguf) | Q5_K_M | 6.25GB | High quality, *recommended*. | | [HelpingAI-9B-Q5_K_S.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q5_K_S.gguf) | Q5_K_S | 6.10GB | High quality, *recommended*. | | [HelpingAI-9B-Q4_K_M.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q4_K_M.gguf) | Q4_K_M | 5.32GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [HelpingAI-9B-Q4_K_S.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q4_K_S.gguf) | Q4_K_S | 5.07GB | Slightly lower quality with more space savings, *recommended*. | | [HelpingAI-9B-IQ4_NL.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ4_NL.gguf) | IQ4_NL | 5.04GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. | | [HelpingAI-9B-IQ4_XS.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ4_XS.gguf) | IQ4_XS | 4.78GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [HelpingAI-9B-Q3_K_L.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q3_K_L.gguf) | Q3_K_L | 4.69GB | Lower quality but usable, good for low RAM availability. | | [HelpingAI-9B-Q3_K_M.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q3_K_M.gguf) | Q3_K_M | 4.32GB | Even lower quality. | | [HelpingAI-9B-IQ3_M.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ3_M.gguf) | IQ3_M | 4.05GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [HelpingAI-9B-IQ3_S.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ3_S.gguf) | IQ3_S | 3.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | [HelpingAI-9B-Q3_K_S.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q3_K_S.gguf) | Q3_K_S | 3.89GB | Low quality, not recommended. 
| | [HelpingAI-9B-IQ3_XS.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ3_XS.gguf) | IQ3_XS | 3.71GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [HelpingAI-9B-IQ3_XXS.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ3_XXS.gguf) | IQ3_XXS | 3.47GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [HelpingAI-9B-Q2_K.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-Q2_K.gguf) | Q2_K | 3.35GB | Very low quality but surprisingly usable. | | [HelpingAI-9B-IQ2_M.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ2_M.gguf) | IQ2_M | 3.09GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [HelpingAI-9B-IQ2_S.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ2_S.gguf) | IQ2_S | 2.87GB | Very low quality, uses SOTA techniques to be usable. | | [HelpingAI-9B-IQ2_XS.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ2_XS.gguf) | IQ2_XS | 2.70GB | Very low quality, uses SOTA techniques to be usable. | | [HelpingAI-9B-IQ2_XXS.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ2_XXS.gguf) | IQ2_XXS | 2.46GB | Lower quality, uses SOTA techniques to be usable. | | [HelpingAI-9B-IQ1_M.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ1_M.gguf) | IQ1_M | 2.18GB | Extremely low quality, *not* recommended. | | [HelpingAI-9B-IQ1_S.gguf](https://huggingface.co/bartowski/HelpingAI-9B-GGUF/blob/main/HelpingAI-9B-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |

## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```
huggingface-cli download bartowski/HelpingAI-9B-GGUF --include "HelpingAI-9B-Q4_K_M.gguf" --local-dir ./
```

If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

```
huggingface-cli download bartowski/HelpingAI-9B-GGUF --include "HelpingAI-9B-Q8_0.gguf/*" --local-dir HelpingAI-9B-Q8_0
```

You can either specify a new local-dir (HelpingAI-9B-Q8_0) or download them all in place (./). A Python equivalent is sketched below.
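As a sketch of the same single-file download in Python (the repo and file names match the CLI examples above), the `huggingface_hub` library can be used directly:

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file from this repo into the current directory.
hf_hub_download(
    repo_id="bartowski/HelpingAI-9B-GGUF",
    filename="HelpingAI-9B-Q4_K_M.gguf",
    local_dir="./",
)
```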
## Which file should I choose?

A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.

The I-quants are *not* compatible with the Vulkan build, so if you have an AMD card, double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski