Dataset schema:
- modelId: string (length 4–81)
- tags: list
- pipeline_tag: string (17 classes)
- config: dict
- downloads: int64 (0–59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51–438k)
AlekseyKorshuk/comedy-scripts
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
2022-09-13T23:26:30Z
--- license: mit --- ### Tubby Cats on Stable Diffusion This is the `<tubby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<tubby> 0](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/21.jpeg) ![<tubby> 1](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/8.jpeg) ![<tubby> 2](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/18.jpeg) ![<tubby> 3](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/22.jpeg) ![<tubby> 4](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/14.jpeg) ![<tubby> 5](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/10.jpeg) ![<tubby> 6](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/1.jpeg) ![<tubby> 7](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/16.jpeg) ![<tubby> 8](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/15.jpeg) ![<tubby> 9](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/12.jpeg) ![<tubby> 10](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/11.jpeg) ![<tubby> 11](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/9.jpeg) ![<tubby> 12](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/23.jpeg) ![<tubby> 13](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/5.jpeg) ![<tubby> 14](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/0.jpeg) ![<tubby> 15](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/17.jpeg) ![<tubby> 16](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/4.jpeg) ![<tubby> 17](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/13.jpeg) ![<tubby> 18](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/2.jpeg) ![<tubby> 19](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/19.jpeg) ![<tubby> 20](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/3.jpeg) ![<tubby> 21](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/6.jpeg) ![<tubby> 22](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/20.jpeg) ![<tubby> 23](https://huggingface.co/sd-concepts-library/tubby-cats/resolve/main/concept_images/7.jpeg)
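Outside the Colab notebooks, the concept can also be injected into a 🤗 Diffusers pipeline by hand. A minimal sketch — it assumes the repo follows the sd-concepts-library convention of shipping a `learned_embeds.bin` file that maps the placeholder token to its embedding: ```python # Hedged sketch: inject a textual-inversion concept into a Stable Diffusion pipeline. # Assumes the repo ships a learned_embeds.bin of the form {"<tubby>": tensor}. import torch from diffusers import StableDiffusionPipeline from huggingface_hub import hf_hub_download pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") embeds_path = hf_hub_download("sd-concepts-library/tubby-cats", "learned_embeds.bin") token, embedding = next(iter(torch.load(embeds_path, map_location="cpu").items())) pipe.tokenizer.add_tokens(token) # register "<tubby>" pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer)) token_id = pipe.tokenizer.convert_tokens_to_ids(token) pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding image = pipe("a cat portrait in the style of <tubby>").images[0] ```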
AlexMaclean/sentence-compression-roberta
[ "pytorch", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/Infill04") model = AutoModelForCausalLM.from_pretrained("BigSalmon/Infill04") ``` ``` Try it out here: https://huggingface.co/spaces/BigSalmon/TestAnyGPTModel ``` ``` prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]""" input_ids = tokenizer.encode(prompt, return_tensors='pt') outputs = model.generate(input_ids=input_ids, max_length=10 + len(prompt), temperature=1.0, top_k=50, top_p=0.95, do_sample=True, num_return_sequences=5, early_stopping=True) for i in range(5): print(tokenizer.decode(outputs[i])) ``` Most likely outputs (Disclaimer: I highly recommend using this over just generating): ``` import torch device = "cuda" if torch.cuda.is_available() else "cpu" model = model.to(device) prompt = """few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep]""" text = tokenizer.encode(prompt) myinput, past_key_values = torch.tensor([text]).to(device), None logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False) logits = logits[0, -1] probabilities = torch.nn.functional.softmax(logits, dim=-1) best_logits, best_indices = logits.topk(250) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() print(best_words) ``` Infill / Infilling / Masking / Phrase Masking ``` His contention [blank] by the evidence [sep] was refuted [answer] *** Few sights are as [blank] New York City as the colorful, flashing signage of its bodegas [sep] synonymous with [answer] *** When rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer] *** The library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer] *** ``` ``` original: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the [MASK] star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently [MASK] the big screen in Garden State, which he also directed. Farrell is pencilled in to [MASK] of Crockett in a film version of 1980s police [MASK] Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme. infill: Other film stars to have appeared in Scrubs include Heather Graham, while Friends actor Matthew Perry has guest-starred and directed an episode of the show. The film star, who recently played the title role in historical blockbuster Alexander, will make a cameo appearance as an unruly Irishman. Its leading star, Zach Braff, has recently been seen on the big screen in Garden State, which he also directed. Farrell is pencilled in to play the role of Crockett in a film version of 1980s police drama Miami Vice. Farrell's appearance is said to be a result of his friendship with Zach Braff, who stars in the programme. ```
AlexMaclean/sentence-compression
[ "pytorch", "distilbert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.840515873015873 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6818181818181818 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.685459940652819 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8076709282934964 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.94 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6535087719298246 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6388888888888888 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9281301792978756 - name: F1 (macro) type: f1_macro value: 0.9254620165261186 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8856807511737088 - name: F1 (macro) type: f1_macro value: 0.7505936116426153 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7210184182015169 - name: F1 (macro) type: f1_macro value: 0.707381518416115 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9625791194268624 - name: F1 (macro) type: f1_macro value: 0.8830231594217628 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9207145095581323 - name: F1 (macro) type: f1_macro value: 0.9189981669115016 --- # relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6818181818181818 - Accuracy on SAT: 0.685459940652819 - Accuracy on BATS: 0.8076709282934964 - Accuracy on U2: 0.6535087719298246 - Accuracy on U4: 0.6388888888888888 - Accuracy on Google: 0.94 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9281301792978756 - Micro F1 score on CogALexV: 0.8856807511737088 - Micro F1 score on EVALution: 0.7210184182015169 - Micro F1 score on K&H+N: 0.9625791194268624 - Micro F1 score on ROOT09: 0.9207145095581323 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.840515873015873 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and load the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj> - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 25 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
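The relation vectors returned by `get_embedding` can be compared directly, which is how analogy-style scoring is typically done with RelBERT embeddings. A minimal sketch (the word pairs and the cosine-similarity scoring are illustrative, not the library's built-in evaluation code): ```python from relbert import RelBERT from scipy.spatial.distance import cosine model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-conceptnet-validated") # Embed two word pairs; similar relations should yield similar vectors. capital_1 = model.get_embedding(['Tokyo', 'Japan']) capital_2 = model.get_embedding(['Paris', 'France']) unrelated = model.get_embedding(['sunflower', 'bicycle']) print(1 - cosine(capital_1, capital_2)) # higher: both are capital-of pairs print(1 - cosine(capital_1, unrelated)) # lower: different relation ```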
AlexN/xls-r-300m-fr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
data: https://github.com/BigSalmon2/InformalToFormalDataset ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln78Paraphrase") model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln78Paraphrase") ``` ``` Demo: https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy ``` ``` prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:""" input_ids = tokenizer.encode(prompt, return_tensors='pt') outputs = model.generate(input_ids=input_ids, max_length=10 + len(prompt), temperature=1.0, top_k=50, top_p=0.95, do_sample=True, num_return_sequences=5, early_stopping=True) for i in range(5): print(tokenizer.decode(outputs[i])) ``` Most likely outputs (Disclaimer: I highly recommend using this over just generating): ``` import torch device = "cuda" if torch.cuda.is_available() else "cpu" model = model.to(device) prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:""" text = tokenizer.encode(prompt) myinput, past_key_values = torch.tensor([text]).to(device), None logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False) logits = logits[0, -1] probabilities = torch.nn.functional.softmax(logits, dim=-1) best_logits, best_indices = logits.topk(250) best_words = [tokenizer.decode([idx.item()]) for idx in best_indices] text.append(best_indices[0].item()) best_probabilities = probabilities[best_indices].tolist() print(best_words) ``` ``` How To Make Prompt: informal english: i am very ready to do that just that. Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end. Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task. *** informal english: space is huge and needs to be explored. Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless. Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration. *** informal english: corn fields are all across illinois, visible once you leave chicago. Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago. informal english: ``` ``` original: microsoft word's [MASK] pricing invites competition. Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition. *** original: the library’s quiet atmosphere encourages visitors to [blank] in their work. Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work. ``` ``` Essay Intro (Warriors vs. Rockets in Game 7): text: eagerly anticipated by fans, game 7's are the highlight of the post-season. text: ever-building in suspense, game 7's have the crowd captivated. *** Essay Intro (South Korean TV Is Becoming Popular): text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ). text: increasingly held in critical esteem, south korean television continues to impress. 
text: at the forefront of quality content, south korea is quickly achieving celebrity status. *** Essay Intro ( ``` ``` Search: What is the definition of Checks and Balances? https://en.wikipedia.org/wiki/Checks_and_balances Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate. https://www.harvard.edu/glossary/Checks_and_Balances Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power https://www.law.cornell.edu/library/constitution/Checks_and_Balances Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power. *** Search: What is the definition of Separation of Powers? https://en.wikipedia.org/wiki/Separation_of_powers The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power. https://www.yale.edu/tcf/Separation_of_Powers.html Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined. *** Search: What is the definition of Connection of Powers? https://en.wikipedia.org/wiki/Connection_of_powers Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches. https://simple.wikipedia.org/wiki/Connection_of_powers The term Connection of Powers describes a system of government in which there is overlap between different parts of the government. *** Search: What is the definition of ``` ``` Search: What are phrase synonyms for "second-guess"? https://www.powerthesaurus.org/second-guess/synonyms Shortest to Longest: - feel dubious about - raise an eyebrow at - wrinkle their noses at - cast a jaundiced eye at - teeter on the fence about *** Search: What are phrase synonyms for "mean to newbies"? https://www.powerthesaurus.org/mean_to_newbies/synonyms Shortest to Longest: - readiness to balk at rookies - absence of tolerance for novices - hostile attitude toward newcomers *** Search: What are phrase synonyms for "make use of"? https://www.powerthesaurus.org/make_use_of/synonyms Shortest to Longest: - call upon - glean value from - reap benefits from - derive utility from - seize on the merits of - draw on the strength of - tap into the potential of *** Search: What are phrase synonyms for "hurting itself"? https://www.powerthesaurus.org/hurting_itself/synonyms Shortest to Longest: - erring - slighting itself - forfeiting its integrity - doing itself a disservice - evincing a lack of backbone *** Search: What are phrase synonyms for " ``` ``` - nebraska - unicamerical legislature - different from federal house and senate text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate. *** - penny has practically no value - should be taken out of circulation - just as other coins have been in us history - lost use - value not enough - to make environmental consequences worthy text: all but valueless, the penny should be retired. 
as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness. *** - ``` ``` original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick. infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick. *** original: ``` ``` wordy: classical music is becoming less popular more and more. Translate into Concise Text: interest in classic music is fading. *** wordy: ``` ``` sweet: savvy voters ousted him. longer: voters who were informed delivered his defeat. *** sweet: ``` ``` 1: commercial space company spacex plans to launch a whopping 52 flights in 2022. 2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022. 3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights. 4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company. 5: a commercial space company, spacex aims to conduct 52 flights in 2022. *** 1: ``` Keywords to sentences or sentence. ``` ngos are characterized by: □ voluntary citizens' group that is organized on a local, national or international level □ encourage political participation □ often serve humanitarian functions □ work for social, economic, or environmental change *** what are the drawbacks of living near an airbnb? □ noise □ parking □ traffic □ security □ strangers *** ``` ``` original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung. adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung. *** original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark. adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark. *** original: ``` ``` original: had trouble deciding. translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation. *** original: ``` ``` input: not loyal 1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ). *** input: ``` ``` first: ( was complicit in / was involved in ). antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ). *** first: ( have no qualms about / see no issue with ). antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ). *** first: ( do not see eye to eye / disagree often ). antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ). 
*** first: ``` ``` stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground. *** languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo. *** dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia. *** embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons. ``` Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above): ``` his contention [blank] by the evidence [sep] was refuted [answer] *** few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer] *** when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer] *** the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer] *** the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer] *** microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer] *** ``` ``` original: microsoft word's [MASK] pricing invites competition. Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition. *** original: the library’s quiet atmosphere encourages visitors to [blank] in their work. Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work. ``` Backwards ``` Essay Intro (National Parks): text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ). *** Essay Intro (D.C. Statehood): washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ). ``` ``` topic: the Golden State Warriors. characterization 1: the reigning kings of the NBA. characterization 2: possessed of a remarkable cohesion. characterization 3: helmed by superstar Stephen Curry. characterization 4: perched atop the league’s hierarchy. characterization 5: boasting a litany of hall-of-famers. *** topic: emojis. characterization 1: shorthand for a digital generation. characterization 2: more versatile than words. characterization 3: the latest frontier in language. characterization 4: a form of self-expression. characterization 5: quintessentially millennial. characterization 6: reflective of a tech-centric world. *** topic: ``` ``` regular: illinois went against the census' population-loss prediction by getting more residents. 
VBG: defying the census' prediction of population loss, illinois experienced growth. *** regular: microsoft word’s high pricing increases the likelihood of competition. VBG: extortionately priced, microsoft word is inviting competition. *** regular: ``` ``` source: badminton should be more popular in the US. QUERY: Based on the given topic, can you develop a story outline? target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing. *** source: movies in theaters should be free. QUERY: Based on the given topic, can you develop a story outline? target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay. *** source: ``` ``` in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure. *** the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule. *** the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement. *** ``` ``` indeed, many business leaders have { ceded control } to their workers. this change has led to a flowering of innovation. question: what does “this change” mean in the above context? (a) collaborative culture (b) empowerment of employees (c) decentralization of power (d) participatory management *** employees' refusal to assume { extraneous responsibilities } has not affected their performance in their official capacity. this trend, however, has been a particular irritant to management. question: what does “this trend” mean in the above context? (a) demarcating work boundaries (b) resisting management's overreach (c) sticking to one's prescribed duties (d) working within a contract *** in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure. question: what does “this orientation” mean in the above context? (a) visible business practices (b) candor with the public (c) open, honest communication (d) culture of accountability ```
Aliskin/xlm-roberta-base-finetuned-marc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Collage3-HubCity on Stable Diffusion This is the `<C3Hub>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<C3Hub> 0](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/4.jpeg) ![<C3Hub> 1](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/3.jpeg) ![<C3Hub> 2](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/2.jpeg) ![<C3Hub> 3](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/1.jpeg) ![<C3Hub> 4](https://huggingface.co/sd-concepts-library/collage3-hubcity/resolve/main/concept_images/0.jpeg)
Amba/wav2vec2-large-xls-r-300m-turkish-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Goku on Stable Diffusion This is the `<goku>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<goku> 0](https://huggingface.co/sd-concepts-library/goku/resolve/main/concept_images/1.jpeg) ![<goku> 1](https://huggingface.co/sd-concepts-library/goku/resolve/main/concept_images/5.jpeg) ![<goku> 2](https://huggingface.co/sd-concepts-library/goku/resolve/main/concept_images/0.jpeg) ![<goku> 3](https://huggingface.co/sd-concepts-library/goku/resolve/main/concept_images/4.jpeg) ![<goku> 4](https://huggingface.co/sd-concepts-library/goku/resolve/main/concept_images/2.jpeg) ![<goku> 5](https://huggingface.co/sd-concepts-library/goku/resolve/main/concept_images/3.jpeg)
Amrrs/indian-foods
[ "pytorch", "tensorboard", "vit", "image-classification", "transformers", "huggingpics", "model-index", "autotrain_compatible" ]
image-classification
{ "architectures": [ "ViTForImageClassification" ], "model_type": "vit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 13 | 3.1802 | | No log | 2.0 | 26 | 3.1813 | | No log | 3.0 | 39 | 3.1822 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.13.0.dev20220912 - Datasets 2.4.0 - Tokenizers 0.11.0
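For readers reconstructing the setup, the hyperparameters above map onto 🤗 Transformers `TrainingArguments` roughly as follows — a sketch only, since the original training script and dataset are not included in this card: ```python from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments) tokenizer = AutoTokenizer.from_pretrained("distilgpt2") model = AutoModelForCausalLM.from_pretrained("distilgpt2") args = TrainingArguments( output_dir="results", learning_rate=2e-5, # Adam betas/epsilon are the listed defaults per_device_train_batch_size=8, per_device_eval_batch_size=8, seed=42, lr_scheduler_type="linear", num_train_epochs=3.0, ) # The datasets are unknown; wire in your own tokenized splits here: # trainer = Trainer(model=model, args=args, # train_dataset=train_ds, eval_dataset=eval_ds) # trainer.train() ```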
Analufm/Ana
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model ncthuan/tmp is restricted and you are not in the authorized list. Visit https://huggingface.co/ncthuan/tmp to ask for access.
AndrewMcDowell/wav2vec2-xls-r-1b-japanese-hiragana-katakana
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ja", "dataset:common_voice", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit --- ### Daycare Attendant Sun FNAF on Stable Diffusion This is the `<biblic-sun-fnaf>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<biblic-sun-fnaf> 0](https://huggingface.co/sd-concepts-library/daycare-attendant-sun-fnaf/resolve/main/concept_images/1.jpeg) ![<biblic-sun-fnaf> 1](https://huggingface.co/sd-concepts-library/daycare-attendant-sun-fnaf/resolve/main/concept_images/0.jpeg) ![<biblic-sun-fnaf> 2](https://huggingface.co/sd-concepts-library/daycare-attendant-sun-fnaf/resolve/main/concept_images/4.jpeg) ![<biblic-sun-fnaf> 3](https://huggingface.co/sd-concepts-library/daycare-attendant-sun-fnaf/resolve/main/concept_images/2.jpeg) ![<biblic-sun-fnaf> 4](https://huggingface.co/sd-concepts-library/daycare-attendant-sun-fnaf/resolve/main/concept_images/3.jpeg)
AnonymousSub/AR_EManuals-RoBERTa
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2022-09-14T13:52:06Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-hoofdthemas results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-hoofdthemas This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1925 - Accuracy: 0.7327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.0712 | 1.0 | 2107 | 0.9968 | 0.7035 | | 0.8567 | 2.0 | 4214 | 0.9297 | 0.7230 | | 0.6865 | 3.0 | 6321 | 0.9586 | 0.7346 | | 0.5546 | 4.0 | 8428 | 0.9767 | 0.7387 | | 0.4429 | 5.0 | 10535 | 1.0708 | 0.7370 | | 0.3395 | 6.0 | 12642 | 1.1925 | 0.7327 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
AnonymousSub/AR_rule_based_twostagetriplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - Summarization - generated_from_trainer datasets: - amazon_reviews_multi metrics: - rouge model-index: - name: t5-finetuned-amazon-english results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: amazon_reviews_multi type: amazon_reviews_multi config: en split: train args: en metrics: - name: Rouge1 type: rouge value: 19.1814 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-finetuned-amazon-english This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 3.1713 - Rouge1: 19.1814 - Rouge2: 9.8673 - Rougel: 18.1982 - Rougelsum: 18.2963 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 3.3583 | 1.0 | 771 | 3.2513 | 16.6865 | 9.0598 | 15.8299 | 15.8472 | | 3.1022 | 2.0 | 1542 | 3.2147 | 16.8499 | 9.4849 | 16.1568 | 16.2437 | | 3.0067 | 3.0 | 2313 | 3.1718 | 16.9516 | 8.762 | 16.104 | 16.2186 | | 2.9482 | 4.0 | 3084 | 3.1854 | 18.9582 | 9.5416 | 18.0846 | 18.2938 | | 2.8934 | 5.0 | 3855 | 3.1669 | 18.857 | 9.934 | 17.9027 | 18.0272 | | 2.8389 | 6.0 | 4626 | 3.1782 | 18.6736 | 9.326 | 17.6943 | 17.8852 | | 2.8174 | 7.0 | 5397 | 3.1709 | 18.4342 | 9.6936 | 17.5714 | 17.6516 | | 2.8 | 8.0 | 6168 | 3.1713 | 19.1814 | 9.8673 | 18.1982 | 18.2963 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
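For inference, a fine-tuned checkpoint like this is typically used through the summarization pipeline. A sketch — the repo id is a hypothetical placeholder, since the card doesn't state the full namespace: ```python from transformers import pipeline # "your-username/t5-finetuned-amazon-english" is a hypothetical repo id. summarizer = pipeline("summarization", model="your-username/t5-finetuned-amazon-english") review = ("I bought this case for my kindle and it fits perfectly. " "The cover feels sturdy and the magnetic clasp works well.") print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"]) ```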
AnonymousSub/AR_specter
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 --- This is a copy of [trinart_stable_diffusion_v2](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) ported for use with the [diffusers](https://github.com/huggingface/diffusers) library. All credit for this model goes to [naclbit](https://huggingface.co/naclbit).
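Once ported, the weights load through the standard diffusers API. A sketch — the repo id below is a placeholder for this port's path, which the card doesn't spell out: ```python import torch from diffusers import StableDiffusionPipeline repo_id = "<this-repo>" # hypothetical placeholder: the id of this diffusers port pipe = StableDiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16).to("cuda") image = pipe("a girl in a sailor uniform, anime style").images[0] image.save("trinart_sample.png") ```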
AnonymousSub/SR_EManuals-RoBERTa
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_power: None - ema_max_decay: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/rvidaurre/ddpm-butterflies-128/tensorboard?#scalars)
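The `# TODO` snippet in the card can be filled in with the standard unconditional-generation API. A sketch using `DDPMPipeline`, with the repo id taken from the TensorBoard link above: ```python from diffusers import DDPMPipeline # Repo id taken from the TensorBoard link in this card. pipeline = DDPMPipeline.from_pretrained("rvidaurre/ddpm-butterflies-128") image = pipeline().images[0] # one unconditional 128x128 butterfly sample image.save("butterfly.png") ```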
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: sbi-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sbi-model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5290 - F1: 0.8211 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.813 | 1.0 | 40 | 1.5304 | 0.5227 | | 1.2312 | 2.0 | 80 | 0.9138 | 0.7439 | | 0.7428 | 3.0 | 120 | 0.6869 | 0.7518 | | 0.5055 | 4.0 | 160 | 0.5766 | 0.8050 | | 0.3581 | 5.0 | 200 | 0.5454 | 0.8052 | | 0.2664 | 6.0 | 240 | 0.5208 | 0.8200 | | 0.2145 | 7.0 | 280 | 0.5218 | 0.8241 | | 0.1853 | 8.0 | 320 | 0.5290 | 0.8211 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
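For inference, a checkpoint like this is typically served through the text-classification pipeline. A sketch with a hypothetical repo id, since the card omits both the namespace and the label set: ```python from transformers import pipeline # "your-username/sbi-model" is a hypothetical repo id; labels depend on the training data. classifier = pipeline("text-classification", model="your-username/sbi-model") print(classifier("Example sentence to classify.")) ```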
AnonymousSub/bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en thumbnail: http://www.huggingtweets.com/ashoswai/1663179098941/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1493344204449275911/qQgn0Rtu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Ashok Swain</div> <div style="text-align: center; font-size: 14px;">@ashoswai</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Ashok Swain. | Data | Ashok Swain | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 664 | | Short tweets | 157 | | Tweets kept | 2428 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1x78488z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ashoswai's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2x38jczu) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2x38jczu/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ashoswai') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/cline-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: mit tags: - generated_from_trainer model-index: - name: engg48112-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # engg48112-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
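Pending the missing details above, the following is a minimal inference sketch; the repo id below is a hypothetical placeholder, since the card does not state the hosting namespace.

```python
# Minimal text-generation sketch. "your-username/engg48112-ds" is a
# hypothetical placeholder -- substitute the actual repo id of this model.
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/engg48112-ds")
print(generator("Once upon a time", max_length=50, num_return_sequences=1))
```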
AnonymousSub/dummy_2_parent
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-isSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.44 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined alongside this
# model's training code (e.g. in the Hugging Face Deep RL course notebook);
# they are not part of a pip-installable package.
model = load_from_hub(repo_id="michael20at/q-FrozenLake-v1-4x4-isSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
AnonymousSub/hier_triplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined alongside this
# model's training code (e.g. in the Hugging Face Deep RL course notebook);
# they are not part of a pip-installable package.
model = load_from_hub(repo_id="/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
AnonymousSub/roberta-base_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: mit --- ### kaneoya sachiko on Stable Diffusion This is the `<Kaneoya>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<Kaneoya> 0](https://huggingface.co/sd-concepts-library/kaneoya-sachiko/resolve/main/concept_images/0.jpeg) ![<Kaneoya> 1](https://huggingface.co/sd-concepts-library/kaneoya-sachiko/resolve/main/concept_images/3.jpeg) ![<Kaneoya> 2](https://huggingface.co/sd-concepts-library/kaneoya-sachiko/resolve/main/concept_images/1.jpeg) ![<Kaneoya> 3](https://huggingface.co/sd-concepts-library/kaneoya-sachiko/resolve/main/concept_images/2.jpeg) ![<Kaneoya> 4](https://huggingface.co/sd-concepts-library/kaneoya-sachiko/resolve/main/concept_images/4.jpeg)
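For local use outside the notebooks above, the snippet below is a minimal sketch of loading the concept with `diffusers`; `load_textual_inversion` assumes a recent diffusers release, and the base checkpoint is our choice rather than the card's.

```python
# A minimal sketch (assumes a recent diffusers release that provides
# load_textual_inversion); the base checkpoint below is an assumption.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe.load_textual_inversion("sd-concepts-library/kaneoya-sachiko")

# The learned <Kaneoya> token can now be used directly in prompts.
image = pipe("a portrait in the style of <Kaneoya>").images[0]
image.save("kaneoya_style.png")
```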
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- ### Retro-Girl on Stable Diffusion This is the `<retro-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<retro-girl> 0](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/0.jpeg) ![<retro-girl> 1](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/3.jpeg) ![<retro-girl> 2](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/1.jpeg) ![<retro-girl> 3](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/2.jpeg) ![<retro-girl> 4](https://huggingface.co/sd-concepts-library/retro-girl/resolve/main/concept_images/4.jpeg)
AnonymousSub/rule_based_bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit --- ### Buddha statue on Stable Diffusion This is the `<buddha-statue>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<buddha-statue> 0](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/0.jpeg) ![<buddha-statue> 1](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/3.jpeg) ![<buddha-statue> 2](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/1.jpeg) ![<buddha-statue> 3](https://huggingface.co/sd-concepts-library/buddha-statue/resolve/main/concept_images/2.jpeg)
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- ### NOTE: Trained with Waifu Diffusion <https://huggingface.co/hakurei/waifu-diffusion> ### hitokomoru-style Artist: <https://www.pixiv.net/en/users/30837811> This is the `<hitokomoru-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<hitokomoru-style> 0](https://huggingface.co/sd-concepts-library/hitokomoru-style/resolve/main/concept_images/0.jpeg) ![<hitokomoru-style> 1](https://huggingface.co/sd-concepts-library/hitokomoru-style/resolve/main/concept_images/3.jpeg) ![<hitokomoru-style> 2](https://huggingface.co/sd-concepts-library/hitokomoru-style/resolve/main/concept_images/5.jpeg) ![<hitokomoru-style> 3](https://huggingface.co/sd-concepts-library/hitokomoru-style/resolve/main/concept_images/1.jpeg) ![<hitokomoru-style> 4](https://huggingface.co/sd-concepts-library/hitokomoru-style/resolve/main/concept_images/2.jpeg)
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- ### plant style on Stable Diffusion This is the `<plant>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<plant> 0](https://huggingface.co/sd-concepts-library/plant-style/resolve/main/concept_images/0.jpeg) ![<plant> 1](https://huggingface.co/sd-concepts-library/plant-style/resolve/main/concept_images/3.jpeg) ![<plant> 2](https://huggingface.co/sd-concepts-library/plant-style/resolve/main/concept_images/1.jpeg) ![<plant> 3](https://huggingface.co/sd-concepts-library/plant-style/resolve/main/concept_images/2.jpeg)
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: mit widget: - src: >- https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model Card for CLIP ViT-L/14 - LAION-2B # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) 7. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description A CLIP ViT-L/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip). Model training ('babysitting') done by Ross Wightman on the [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) supercomputer. See acknowledgements below. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below. # Training Details ## Training Data This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning applies there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide the dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure The model was trained on 384 A100 GPUs using 200M sample 'virtual' epochs where dataset shards were sampled with replacement. The model was trained with 160 virtual epochs for a total of 32B samples seen. The first 68 epochs were trained with float16 AMP, global batch size 79K (208 per GPU). Training initially ran to epoch 75, where the loss spiked and the run failed with NaN. Romain Beaumont was training H/14 and g/14 models at the same time on the Stability cluster and hit similar instabilities. Collectively, we tried restarts with:

* different dataset shuffle seed
* different LR
* gradient clipping
* modifications to the architecture
* norm modifications (stable norm for final, post-embed norm for text transformer) as per https://github.com/mlfoundations/open_clip/pull/153, thanks to Phil Wang
* extra attention block norms à la Normformer (https://arxiv.org/abs/2110.09456)
* scaled cosine attention à la Swin-V2 (https://arxiv.org/abs/2111.09883)

None of the above ended up working. Most blew up within the same epoch as the original, with the exception of the architecture mods.

* Normformer mods significantly altered the network such that resuming did not quickly converge to previous performance; this was abandoned, but it might be worth trying from the start.
* Scaled cosine attention initially looked promising and lasted until epoch 90, before the loss suddenly increased and appeared to remain 'stuck'.

In the end, restarting at epoch 69 with `float32` precision solved all instabilities and training continued from there with global batch size 86k (224 per GPU). On A100 GPUs, `float32` had a minimal impact on throughput once `tf32` matmuls were enabled in PyTorch, running approximately 10% slower than `float16 AMP`. Romain similarly changed the precision but ended up using `bfloat16 AMP` to resolve the issues.
### Slurm Script

```
#!/bin/bash -x
#SBATCH --nodes=96
#SBATCH --gres=gpu:4
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --wait-all-nodes=1
#SBATCH --job-name=open_clip_laion2b

# load low-level libraries
ml purge
source /conda/bin/activate pytorch-112

export NCCL_ASYNC_ERROR_HANDLING=1
export CUDA_VISIBLE_DEVICES=0,1,2,3
export MASTER_PORT=12802

### get the first node name as master address - customized for vgg slurm
### e.g. master(gnodee[2-5],gnoded1) == gnodee2
echo "NODELIST="${SLURM_NODELIST}
master_addr=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_ADDR=$master_addr"i"
echo "MASTER_ADDR="$MASTER_ADDR

cd /home/me/open_clip
export PYTHONPATH="$PYTHONPATH:$PWD/src"

srun --cpu_bind=none,v --accel-bind=gn python -u src/training/main.py \
    --save-frequency 1 \
    --zeroshot-frequency 1 \
    --train-data="/data/laion2B-en/{00000..23295}.tar" \
    --train-num-samples=200000000 \
    --warmup 10000 \
    --lr "1e-3" \
    --batch-size=224 \
    --epochs=160 \
    --workers=6 \
    --model ViT-L-14 \
    --name "L14-laion2B" \
    --report-to "tensorboard" \
    --seed 0 \
    --precision 'fp32' \
    --ddp-static-graph \
    --local-loss \
    --dataset-resampled \
    --gather-with-grad \
    --grad-checkpointing
```

# Evaluation Evaluation was done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval. **TODO** - more detail ## Results The model achieves 75.3% zero-shot top-1 accuracy on ImageNet-1k. An initial round of benchmarks has been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb **TODO** - create table for just this model's metrics. # Acknowledgements Acknowledging the Gauss Centre for Supercomputing e.V. (http://gauss-centre.eu) for funding this part of the work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC). # Citation **BibTeX:** LAION-5B

```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```

OpenAI CLIP paper

```
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A.
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

OpenCLIP software

```
@software{ilharco_gabriel_2021_5143773,
  author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title = {OpenCLIP},
  month = jul,
  year = 2021,
  note = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version = {0.1},
  doi = {10.5281/zenodo.5143773},
  url = {https://doi.org/10.5281/zenodo.5143773}
}
```

# How to Get Started with the Model Use the code below to get started with the model. **TODO** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
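In the meantime, here is a minimal zero-shot classification sketch via OpenCLIP; the pretrained tag `laion2b_s32b_b82k` is an assumption for this checkpoint and should be checked against `open_clip.list_pretrained()`.

```python
# Minimal zero-shot sketch. The pretrained tag below is an assumption for this
# checkpoint -- verify it with open_clip.list_pretrained() before relying on it.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="laion2b_s32b_b82k")
tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings, then compare with a scaled dot product.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)  # relative probability of each candidate caption
```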
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit widget: - src: >- https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog library_name: open_clip pipeline_tag: zero-shot-image-classification --- # Model Card for CLIP ViT-H/14 - LAION-2B # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) 7. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description A CLIP ViT-H/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip). Model training done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below. # Training Details ## Training Data This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet.
Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning applies there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide the dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure Please see [training notes](https://docs.google.com/document/d/1EFbMLRWSSV0LUf9Du1pWzWqgeiIRPwEWX2s1C6mAk5c) and [wandb logs](https://wandb.ai/rom1504/eval_openclip/reports/H-14--VmlldzoyNDAxODQ3). # Evaluation Evaluation was done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval. **TODO** - more detail ## Results The model achieves 78.0% zero-shot top-1 accuracy on ImageNet-1k. An initial round of benchmarks has been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb **TODO** - create table for just this model's metrics. # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model. # Citation **BibTeX:** LAION-5B

```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and Romain Beaumont and Richard Vencu and Cade W Gordon and Ross Wightman and Mehdi Cherti and Theo Coombes and Aarush Katta and Clayton Mullis and Mitchell Wortsman and Patrick Schramowski and Srivatsa R Kundurthy and Katherine Crowson and Ludwig Schmidt and Robert Kaczmarczyk and Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```

OpenAI CLIP paper

```
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A.
Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

OpenCLIP software

```
@software{ilharco_gabriel_2021_5143773,
  author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title = {OpenCLIP},
  month = jul,
  year = 2021,
  note = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version = {0.1},
  doi = {10.5281/zenodo.5143773},
  url = {https://doi.org/10.5281/zenodo.5143773}
}
```

# How to Get Started with the Model Use the code below to get started with the model. **TODO** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
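The same OpenCLIP pattern applies here; a minimal sketch, assuming the pretrained tag `laion2b_s32b_b79k` for this checkpoint (verify with `open_clip.list_pretrained()`):

```python
# Minimal sketch; the pretrained tag below is an assumption for this checkpoint.
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-H-14")
# Encode images via preprocess + model.encode_image and prompts via
# tokenizer + model.encode_text, then compare L2-normalized embeddings.
```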
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png candidate_labels: playing music, playing sports example_title: Cat & Dog --- # Model Card for CLIP ViT-g/14 - LAION-2B # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Details](#training-details) 4. [Evaluation](#evaluation) 5. [Acknowledgements](#acknowledgements) 6. [Citation](#citation) 7. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description A CLIP ViT-g/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip). Model training done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster. # Uses As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models. The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset. ## Direct Use Zero-shot image classification, image and text retrieval, among others. ## Downstream Use Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others. ## Out-of-Scope Use As per the OpenAI models, **any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently, given the lack of testing norms and checks to ensure its fair use. Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. Further to the above notice, the LAION-5B dataset used in training of these models has additional considerations; see below. # Training Details ## Training Data This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/). **IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from the publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated.
Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance of encountering potentially harmful content when viewing, we cannot entirely exclude the possibility of harmful content still being present in safe mode, so the warning applies there as well. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of the benefits that come along with training large-scale models, as well as of the pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. While we provide the dataset openly, we do not recommend using it for creating ready-to-go industrial products, as the basic research about the general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress. ## Training Procedure Please see [training notes](https://docs.google.com/document/d/1EFbMLRWSSV0LUf9Du1pWzWqgeiIRPwEWX2s1C6mAk5c) and [wandb logs](https://wandb.ai/rom1504/eval_openclip/reports/slow-g-14--VmlldzoyNTMwMjg5). # Evaluation Evaluation was done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark). ## Testing Data, Factors & Metrics ### Testing Data The testing is performed with VTAB+ (a combination of VTAB (https://arxiv.org/abs/1910.04867) with additional robustness datasets) for classification, and COCO and Flickr for retrieval. **TODO** - more detail ## Results The model achieves 76.6% zero-shot top-1 accuracy on ImageNet-1k. An initial round of benchmarks has been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb **TODO** - create table for just this model's metrics. # Acknowledgements Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model. # Citation **BibTeX:** In addition to the forthcoming LAION-5B (https://laion.ai/blog/laion-5b/) paper, please cite: OpenAI CLIP paper

```
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```

OpenCLIP software

```
@software{ilharco_gabriel_2021_5143773,
  author = {Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
  title = {OpenCLIP},
  month = jul,
  year = 2021,
  note = {If you use this software, please cite it as below.},
  publisher = {Zenodo},
  version = {0.1},
  doi = {10.5281/zenodo.5143773},
  url = {https://doi.org/10.5281/zenodo.5143773}
}
```

# How to Get Started with the Model Use the code below to get started with the model. **TODO** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
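As with the other LAION-2B checkpoints, a minimal OpenCLIP sketch applies; the pretrained tag `laion2b_s12b_b42k` is an assumption for this model (verify with `open_clip.list_pretrained()`):

```python
# Minimal sketch; the pretrained tag below is an assumption for this checkpoint.
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-g-14", pretrained="laion2b_s12b_b42k")
tokenizer = open_clip.get_tokenizer("ViT-g-14")
# Encode images via preprocess + model.encode_image and prompts via
# tokenizer + model.encode_text, then compare L2-normalized embeddings.
```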
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: mit --- ### cham on Stable Diffusion This is the `<cham>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<cham> 0](https://huggingface.co/sd-concepts-library/cham/resolve/main/concept_images/0.jpeg) ![<cham> 1](https://huggingface.co/sd-concepts-library/cham/resolve/main/concept_images/1.jpeg) ![<cham> 2](https://huggingface.co/sd-concepts-library/cham/resolve/main/concept_images/2.jpeg)
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: mit --- ### mayor-richard-irvin on Stable Diffusion This is the `<Richard_Irvin>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<Richard_Irvin> 0](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/0.jpeg) ![<Richard_Irvin> 1](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/3.jpeg) ![<Richard_Irvin> 2](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/1.jpeg) ![<Richard_Irvin> 3](https://huggingface.co/sd-concepts-library/mayor-richard-irvin/resolve/main/concept_images/2.jpeg)
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit --- ### uma-meme on Stable Diffusion This is the `<uma-object-full>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<uma-object-full> 0](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_7_.jpg) ![<uma-object-full> 1](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/28.jpg) ![<uma-object-full> 2](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_11_.jpg) ![<uma-object-full> 3](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_12_.jpg) ![<uma-object-full> 4](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_1_.png) ![<uma-object-full> 5](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/22.jpg) ![<uma-object-full> 6](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/10.jpg) ![<uma-object-full> 7](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/KakaoTalk_20220904_015246222.jpg) ![<uma-object-full> 8](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/50.jpg) ![<uma-object-full> 9](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed.png) ![<uma-object-full> 10](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_6_.jpg) ![<uma-object-full> 11](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/21.jpg) ![<uma-object-full> 12](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/FbCVln9WIAA74Z2.png) ![<uma-object-full> 13](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/file.jpg) ![<uma-object-full> 14](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/tt0.png) ![<uma-object-full> 15](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/31.jpg) ![<uma-object-full> 16](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed-1.jpg) ![<uma-object-full> 17](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed.jpg) ![<uma-object-full> 18](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_5_.jpg) ![<uma-object-full> 19](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/3-30-25.png) ![<uma-object-full> 20](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/Fb-Pk97aMAIgbYr.png) ![<uma-object-full> 21](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/2.jpg) ![<uma-object-full> 22](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_2_.png) ![<uma-object-full> 23](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/6.jpg) ![<uma-object-full> 24](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_1_.jpg) 
![<uma-object-full> 25](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/FZoyWUcXwAE3k2K.png) ![<uma-object-full> 26](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_4_.jpg) ![<uma-object-full> 27](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/2022-09-14_13-02-28.png) ![<uma-object-full> 28](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/16.jpg) ![<uma-object-full> 29](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_9_.jpg) ![<uma-object-full> 30](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_10_.jpg) ![<uma-object-full> 31](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/4.jpg) ![<uma-object-full> 32](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_3_.jpg) ![<uma-object-full> 33](https://huggingface.co/sd-concepts-library/uma-meme/resolve/main/concept_images/unnamed_8_.jpg)
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit --- ### thunderdome-cover on Stable Diffusion This is the `<thunderdome-cover>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<thunderdome-cover> 0](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/26.jpeg) ![<thunderdome-cover> 1](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/0.jpeg) ![<thunderdome-cover> 2](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/31.jpeg) ![<thunderdome-cover> 3](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/8.jpeg) ![<thunderdome-cover> 4](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/3.jpeg) ![<thunderdome-cover> 5](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/5.jpeg) ![<thunderdome-cover> 6](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/22.jpeg) ![<thunderdome-cover> 7](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/32.jpeg) ![<thunderdome-cover> 8](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/29.jpeg) ![<thunderdome-cover> 9](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/6.jpeg) ![<thunderdome-cover> 10](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/30.jpeg) ![<thunderdome-cover> 11](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/11.jpeg) ![<thunderdome-cover> 12](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/34.jpeg) ![<thunderdome-cover> 13](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/27.jpeg) ![<thunderdome-cover> 14](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/36.jpeg) ![<thunderdome-cover> 15](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/35.jpeg) ![<thunderdome-cover> 16](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/1.jpeg) ![<thunderdome-cover> 17](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/25.jpeg) ![<thunderdome-cover> 18](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/21.jpeg) ![<thunderdome-cover> 19](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/14.jpeg) ![<thunderdome-cover> 20](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/15.jpeg) ![<thunderdome-cover> 21](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/23.jpeg) ![<thunderdome-cover> 22](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/17.jpeg) ![<thunderdome-cover> 
23](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/16.jpeg) ![<thunderdome-cover> 24](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/10.jpeg) ![<thunderdome-cover> 25](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/2.jpeg) ![<thunderdome-cover> 26](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/28.jpeg) ![<thunderdome-cover> 27](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/12.jpeg) ![<thunderdome-cover> 28](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/19.jpeg) ![<thunderdome-cover> 29](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/4.jpeg) ![<thunderdome-cover> 30](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/7.jpeg) ![<thunderdome-cover> 31](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/24.jpeg) ![<thunderdome-cover> 32](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/33.jpeg) ![<thunderdome-cover> 33](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/9.jpeg) ![<thunderdome-cover> 34](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/20.jpeg) ![<thunderdome-cover> 35](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/18.jpeg) ![<thunderdome-cover> 36](https://huggingface.co/sd-concepts-library/thunderdome-cover/resolve/main/concept_images/13.jpeg)
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.888095238095238 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6283422459893048 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.629080118694362 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7959977765425236 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.92 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5701754385964912 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6134259259259259 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9172819044749133 - name: F1 (macro) type: f1_macro value: 0.9134777544987239 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8516431924882629 - name: F1 (macro) type: f1_macro value: 0.6909836328773065 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6738894907908992 - name: F1 (macro) type: f1_macro value: 0.6623942225782876 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9517284551714544 - name: F1 (macro) type: f1_macro value: 0.8593035416288995 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9000313381385145 - name: F1 (macro) type: f1_macro value: 0.8976663712913519 --- # relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6283422459893048 - Accuracy on SAT: 0.629080118694362 - Accuracy on BATS: 0.7959977765425236 - Accuracy on U2: 0.5701754385964912 - Accuracy on U4: 0.6134259259259259 - Accuracy on Google: 0.92 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9172819044749133 - Micro F1 score on CogALexV: 0.8516431924882629 - Micro F1 score on EVALution: 0.6738894907908992 - Micro F1 score on K&H+N: 0.9517284551714544 - Micro F1 score on ROOT09: 0.9000313381385145 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.888095238095238 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 30 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
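Beyond extracting a single embedding, relation vectors can be compared directly, e.g. by cosine similarity; analogous word pairs should score close to 1. A minimal sketch building on the snippet above (converting with `np.asarray` assumes `get_embedding` returns a list or array-like vector):
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated")
v1 = np.asarray(model.get_embedding(['Tokyo', 'Japan']))   # relation embedding, shape (1024,)
v2 = np.asarray(model.get_embedding(['Paris', 'France']))
# Cosine similarity between the two relations; analogous pairs should score high.
similarity = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(similarity)
```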
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: mit --- ### SEM_Mac2N on Stable Diffusion This is the `<SEM_Mac2N>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<SEM_Mac2N> 0](https://huggingface.co/sd-concepts-library/sem-mac2n/resolve/main/concept_images/0.jpeg) ![<SEM_Mac2N> 1](https://huggingface.co/sd-concepts-library/sem-mac2n/resolve/main/concept_images/1.jpeg) ![<SEM_Mac2N> 2](https://huggingface.co/sd-concepts-library/sem-mac2n/resolve/main/concept_images/2.jpeg) ![<SEM_Mac2N> 3](https://huggingface.co/sd-concepts-library/sem-mac2n/resolve/main/concept_images/3.jpeg) ![<SEM_Mac2N> 4](https://huggingface.co/sd-concepts-library/sem-mac2n/resolve/main/concept_images/4.jpeg) ![<SEM_Mac2N> 5](https://huggingface.co/sd-concepts-library/sem-mac2n/resolve/main/concept_images/5.jpeg)
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit --- ### hoi4 on Stable Diffusion This is the `<hoi4>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<hoi4> 0](https://huggingface.co/sd-concepts-library/hoi4/resolve/main/concept_images/0.jpeg) ![<hoi4> 1](https://huggingface.co/sd-concepts-library/hoi4/resolve/main/concept_images/3.jpeg) ![<hoi4> 2](https://huggingface.co/sd-concepts-library/hoi4/resolve/main/concept_images/1.jpeg) ![<hoi4> 3](https://huggingface.co/sd-concepts-library/hoi4/resolve/main/concept_images/2.jpeg) ![<hoi4> 4](https://huggingface.co/sd-concepts-library/hoi4/resolve/main/concept_images/4.jpeg)
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: mit --- ### sushi-pixel on Stable Diffusion This is the `<sushi-pixel>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<sushi-pixel> 0](https://huggingface.co/sd-concepts-library/sushi-pixel/resolve/main/concept_images/0.jpeg) ![<sushi-pixel> 1](https://huggingface.co/sd-concepts-library/sushi-pixel/resolve/main/concept_images/3.jpeg) ![<sushi-pixel> 2](https://huggingface.co/sd-concepts-library/sushi-pixel/resolve/main/concept_images/1.jpeg) ![<sushi-pixel> 3](https://huggingface.co/sd-concepts-library/sushi-pixel/resolve/main/concept_images/2.jpeg) ![<sushi-pixel> 4](https://huggingface.co/sd-concepts-library/sushi-pixel/resolve/main/concept_images/4.jpeg)
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Dan Mumford on Stable Diffusion This is the `<dan-mumford>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<dan-mumford> 0](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/0.jpeg) ![<dan-mumford> 1](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/8.jpeg) ![<dan-mumford> 2](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/3.jpeg) ![<dan-mumford> 3](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/5.jpeg) ![<dan-mumford> 4](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/6.jpeg) ![<dan-mumford> 5](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/11.jpeg) ![<dan-mumford> 6](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/1.jpeg) ![<dan-mumford> 7](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/14.jpeg) ![<dan-mumford> 8](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/15.jpeg) ![<dan-mumford> 9](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/17.jpeg) ![<dan-mumford> 10](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/16.jpeg) ![<dan-mumford> 11](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/10.jpeg) ![<dan-mumford> 12](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/2.jpeg) ![<dan-mumford> 13](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/12.jpeg) ![<dan-mumford> 14](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/19.jpeg) ![<dan-mumford> 15](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/4.jpeg) ![<dan-mumford> 16](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/7.jpeg) ![<dan-mumford> 17](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/9.jpeg) ![<dan-mumford> 18](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/18.jpeg) ![<dan-mumford> 19](https://huggingface.co/sd-concepts-library/dan-mumford/resolve/main/concept_images/13.jpeg)
gaurishhs/API
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: pixelcopter-simple-50000eps results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 5.10 +/- 4.91 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
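The card ships no code, but the update behind the agent is standard REINFORCE. The following PyTorch sketch illustrates the policy-gradient loss in generic form; it is not this repository's exact implementation, and `log_probs` is assumed to hold the log-probabilities of the actions taken during one episode:
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Policy-gradient loss for one episode.

    log_probs: list of scalar tensors, log pi(a_t | s_t) for each step taken.
    rewards:   list of floats, the reward received at each step.
    """
    returns, g = [], 0.0
    for r in reversed(rewards):            # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()
```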
AriakimTaiyo/DialoGPT-small-Kumiko
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: resnet50-finetuned-memes results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.5741885625965997 - task: type: image-classification name: Image Classification dataset: type: custom name: custom split: test metrics: - type: f1 value: 0.47811617701687364 name: F1 - type: precision value: 0.43689216537139497 name: Precision - type: recall value: 0.5695517774343122 name: Recall --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # resnet50-finetuned-memes This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.0625 - Accuracy: 0.5742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00012 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4795 | 0.99 | 40 | 1.4641 | 0.4382 | | 1.3455 | 1.99 | 80 | 1.3281 | 0.4389 | | 1.262 | 2.99 | 120 | 1.2583 | 0.4583 | | 1.1975 | 3.99 | 160 | 1.1978 | 0.4876 | | 1.1358 | 4.99 | 200 | 1.1614 | 0.5139 | | 1.1273 | 5.99 | 240 | 1.1316 | 0.5379 | | 1.0379 | 6.99 | 280 | 1.1024 | 0.5464 | | 1.041 | 7.99 | 320 | 1.0927 | 0.5580 | | 0.9952 | 8.99 | 360 | 1.0790 | 0.5541 | | 1.0146 | 9.99 | 400 | 1.0625 | 0.5742 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
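For quick inference, a sketch with the `transformers` image-classification pipeline; the repo id below is a placeholder, since the card does not state this checkpoint's Hub path:
```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this checkpoint.
classifier = pipeline("image-classification", model="<user>/resnet50-finetuned-memes")
print(classifier("meme.jpg"))  # top predicted meme classes with scores
```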
asaakyan/mbart-poetic-all
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: unknown --- the wolf has a brown top hat in china
ArnaudPannatier/MLPMixer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### MTG card on Stable Diffusion This is the `<mtg-card>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<mtg-card> 0](https://huggingface.co/sd-concepts-library/mtg-card/resolve/main/concept_images/0.jpeg) ![<mtg-card> 1](https://huggingface.co/sd-concepts-library/mtg-card/resolve/main/concept_images/3.jpeg) ![<mtg-card> 2](https://huggingface.co/sd-concepts-library/mtg-card/resolve/main/concept_images/1.jpeg) ![<mtg-card> 3](https://huggingface.co/sd-concepts-library/mtg-card/resolve/main/concept_images/2.jpeg)
Arnold/common_voiceha
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-15T15:30:08Z
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - news_commentary metrics: - bleu model-index: - name: pt-opus-news results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: news_commentary type: news_commentary config: en-pt split: train args: en-pt metrics: - name: Bleu type: bleu value: 37.5501808262607 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pt-opus-news This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on the news_commentary dataset. It achieves the following results on the evaluation set: - Loss: 1.0975 - Bleu: 37.5502 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
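For quick inference, a sketch with the translation pipeline; the repo id is a placeholder, and whether the fine-tuned checkpoint still needs the Marian target-language token (e.g. `>>por<<`) used by the base en-mul model is an assumption:
```python
from transformers import pipeline

# Placeholder repo id -- substitute this checkpoint's actual Hub path.
translator = pipeline("translation", model="<user>/pt-opus-news")
# The base en-mul Marian model selects the target language with a token such as
# >>por<<; whether this fine-tuned checkpoint still requires it is an assumption.
print(translator(">>por<< The markets rallied after the announcement."))
```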
Arnold/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/pranshuj73/1663257057221/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1523333450291630080/Eh3DlhQT_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Pranshu Jha ⚡</div> <div style="text-align: center; font-size: 14px;">@pranshuj73</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Pranshu Jha ⚡. | Data | Pranshu Jha ⚡ | | --- | --- | | Tweets downloaded | 1828 | | Retweets | 249 | | Short tweets | 136 | | Tweets kept | 1443 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k1j04sq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pranshuj73's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/29xrmfw8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/29xrmfw8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/pranshuj73') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ashok/my-new-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-15T17:33:34Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1739 - F1: 0.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3 | 1.0 | 835 | 0.1894 | 0.8104 | | 0.1564 | 2.0 | 1670 | 0.1751 | 0.8423 | | 0.1032 | 3.0 | 2505 | 0.1739 | 0.8525 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.11.0 - Datasets 2.0.0 - Tokenizers 0.11.0
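For quick inference, a sketch with the token-classification pipeline; the repo id is a placeholder for this checkpoint's actual Hub path:
```python
from transformers import pipeline

# Placeholder repo id -- substitute this checkpoint's actual Hub path.
ner = pipeline("token-classification",
               model="<user>/xlm-roberta-base-finetuned-panx-all",
               aggregation_strategy="simple")
print(ner("Angela Merkel besuchte Paris im Mai."))  # grouped entity spans with labels and scores
```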
Atlasky/Turkish-Negator
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-cased-rte results: - task: name: Text Classification type: text-classification dataset: name: GLUE RTE type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.6534296028880866 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-rte This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset. It achieves the following results on the evaluation set: - Loss: 0.9753 - Accuracy: 0.6534 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4837 | 3.21 | 500 | 0.9753 | 0.6534 | | 0.0827 | 6.41 | 1000 | 1.6693 | 0.6715 | | 0.0253 | 9.62 | 1500 | 1.7777 | 0.6643 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.7.1 - Datasets 1.18.3 - Tokenizers 0.11.6
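For quick inference on a premise/hypothesis pair, a sketch with the text-classification pipeline; the repo id is a placeholder, and passing the pair as a `text`/`text_pair` dict assumes pipeline support for sentence-pair inputs:
```python
from transformers import pipeline

# Placeholder repo id -- substitute this checkpoint's actual Hub path.
clf = pipeline("text-classification", model="<user>/bert-base-cased-rte")
# Premise/hypothesis passed as a text/text_pair dict (assumed supported for pair models).
print(clf({"text": "A man is playing a guitar.", "text_pair": "A person is making music."}))
```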
Atlasky/turkish-negator-nn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - bert - adapter-transformers datasets: - glue language: - en --- # Adapter `WillHeld/pfadapter-bert-base-uncased-rte` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("WillHeld/pfadapter-bert-base-uncased-rte", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
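Continuing from the loading snippet above, a minimal inference sketch; the sentence pair is illustrative and the label order of the prediction head is an assumption:
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Illustrative premise/hypothesis pair in RTE's input format.
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the loading snippet above
print(logits.softmax(dim=-1))  # probabilities over the two RTE labels (label order is an assumption)
```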
Augustvember/WokkaBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: multilingual_t5_model_for_law_simplification results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual_t5_model_for_law_simplification This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan - Rouge1: 0.2857 - Rouge2: 0.0 - Rougel: 0.2857 - Rougelsum: 0.2857 - Gen Len: 7.9033 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 157 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | No log | 2.0 | 314 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | No log | 3.0 | 471 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 4.0 | 628 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 5.0 | 785 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 6.0 | 942 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 7.0 | 1099 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 8.0 | 1256 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 9.0 | 1413 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | | 0.0 | 10.0 | 1570 | nan | 0.2857 | 0.0 | 0.2857 | 0.2857 | 7.9033 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
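For completeness, a generation sketch with the text2text pipeline; the repo id is a placeholder, and given the NaN training loss reported above, outputs may be degenerate:
```python
from transformers import pipeline

# Placeholder repo id -- substitute this checkpoint's actual Hub path.
simplifier = pipeline("text2text-generation",
                      model="<user>/multilingual_t5_model_for_law_simplification")
print(simplifier("The lessee shall indemnify the lessor against all claims arising "
                 "from the use of the premises.", max_length=64))
```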
Augustvember/WokkaBot2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: train args: default metrics: - name: Rouge1 type: rouge value: 28.2804 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4789 - Rouge1: 28.2804 - Rouge2: 7.7039 - Rougel: 22.2002 - Rougelsum: 22.2019 - Gen Len: 18.8238 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.711 | 1.0 | 12753 | 2.4789 | 28.2804 | 7.7039 | 22.2002 | 22.2019 | 18.8238 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
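For quick inference, a summarization-pipeline sketch; the repo id is a placeholder for this checkpoint's actual Hub path:
```python
from transformers import pipeline

# Placeholder repo id -- substitute this checkpoint's actual Hub path.
summarizer = pipeline("summarization", model="<user>/t5-small-finetuned-xsum")
article = "The full text of a news article goes here ..."
print(summarizer(article, max_length=40, min_length=10))  # one-sentence XSum-style summary
```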
Augustvember/WokkaBot7
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: apache-2.0 tags: - ace --- # ACE Example
Augustvember/WokkaBot8
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### Kawaii Colors on Stable Diffusion This is the `<kawaii-colors-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<kawaii-colors-style> 0](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/0.jpeg) ![<kawaii-colors-style> 1](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/3.jpeg) ![<kawaii-colors-style> 2](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/1.jpeg) ![<kawaii-colors-style> 3](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/2.jpeg) ![<kawaii-colors-style> 4](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/4.jpeg)
Augustvember/wokka2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - fastai - image-classification --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
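To try the model, `huggingface_hub` can reconstruct a fastai learner from the repo. A minimal sketch, with a placeholder repo id and assuming an image-classification learner:
```python
from huggingface_hub import from_pretrained_fastai

# Placeholder repo id -- substitute this repository's actual Hub path.
learner = from_pretrained_fastai("<user>/<this-repo>")
pred, pred_idx, probs = learner.predict("example.jpg")  # assumes an image-classification learner
print(pred, probs[pred_idx])
```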
Augustvember/wokka4
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en thumbnail: http://www.huggingtweets.com/eeriemachine/1665353005078/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1579097527982460934/-x9lVWzx_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Alea 🃏</div> <div style="text-align: center; font-size: 14px;">@eeriemachine</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Alea 🃏. | Data | Alea 🃏 | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 68 | | Short tweets | 181 | | Tweets kept | 2991 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/15ucae0z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @eeriemachine's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1smqz4yt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1smqz4yt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/eeriemachine') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Augustvember/wokka5
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8666666666666667 - name: F1 type: f1 value: 0.8684210526315789 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3132 - Accuracy: 0.8667 - F1: 0.8684 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
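For quick inference, a text-classification-pipeline sketch; the repo id is a placeholder for this checkpoint's actual Hub path:
```python
from transformers import pipeline

# Placeholder repo id -- substitute this checkpoint's actual Hub path.
sentiment = pipeline("text-classification", model="<user>/finetuning-sentiment-model-3000-samples")
print(sentiment("This movie was surprisingly good!"))  # predicted sentiment label with score
```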
Augustvember/your-model-name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - bert - adapter-transformers datasets: - glue language: - en --- # Adapter `SALT-NLP/pfadapter-bert-base-uncased-rte-combined-value` for bert-base-uncased An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("bert-base-uncased") adapter_name = model.load_adapter("SALT-NLP/pfadapter-bert-base-uncased-rte-combined-value", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
Aurora/asdawd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - roberta - adapter-transformers datasets: - glue language: - en --- # Adapter `SALT-NLP/pfadapter-roberta-base-rte-combined-value` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("SALT-NLP/pfadapter-roberta-base-rte-combined-value", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
Ayham/albert_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroiev2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# donut-base-sroiev2

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
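As a sketch of how a Donut checkpoint like this is typically queried (the repo id and the task-prompt token below are assumptions, since the card does not state them):

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Hypothetical repo id -- substitute the actual path of this checkpoint.
ckpt = "your-username/donut-base-sroiev2"
processor = DonutProcessor.from_pretrained(ckpt)
model = VisionEncoderDecoderModel.from_pretrained(ckpt)

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut decoders are conditioned on a task-prompt token; "<s>" is a placeholder guess.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```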
Ayham/bertgpt2_cnn
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
---
tags:
- adapter-transformers
- bert
datasets:
- glue
language:
- en
---

# Adapter `WillHeld/pfadapter-bert-base-uncased-qnli` for bert-base-uncased

An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [glue](https://huggingface.co/datasets/glue/) dataset and includes a prediction head for classification.

This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.

## Usage

First, install `adapter-transformers`:

```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_

Now, the adapter can be loaded and activated like this:

```python
from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("WillHeld/pfadapter-bert-base-uncased-qnli", source="hf", set_active=True)
```

## Architecture & Training

<!-- Add some description here -->

## Evaluation results

<!-- Add some description here -->

## Citation

<!-- Add some description here -->
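Once the adapter is active, inference works as with any sequence-classification model. A minimal continuation of the Usage snippet above (the question/answer pair is made up for illustration):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# QNLI pairs a question with a candidate answer sentence.
inputs = tokenizer(
    "What is the capital of France?",
    "Paris is the capital and most populous city of France.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # `model` with the adapter active, from above
print(logits.softmax(dim=-1))
```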
Ayham/distilbert_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-cased-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: GLUE SST2
      type: glue
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9139908256880734
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-cased-sst2

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2345
- Accuracy: 0.9140

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6253 | 0.12 | 500 | 0.3641 | 0.8567 |
| 0.3189 | 0.24 | 1000 | 0.2656 | 0.8899 |
| 0.2701 | 0.36 | 1500 | 0.3463 | 0.8807 |
| 0.2533 | 0.48 | 2000 | 0.2409 | 0.9071 |
| 0.2436 | 0.59 | 2500 | 0.2345 | 0.9140 |
| 0.2155 | 0.71 | 3000 | 0.2926 | 0.9002 |
| 0.22 | 0.83 | 3500 | 0.2998 | 0.9094 |
| 0.2146 | 0.95 | 4000 | 0.2481 | 0.9140 |
| 0.1737 | 1.07 | 4500 | 0.2802 | 0.9128 |
| 0.1578 | 1.19 | 5000 | 0.3536 | 0.9083 |
| 0.1534 | 1.31 | 5500 | 0.4714 | 0.8830 |
| 0.1641 | 1.43 | 6000 | 0.3235 | 0.9128 |
| 0.1601 | 1.54 | 6500 | 0.3133 | 0.9094 |
| 0.1644 | 1.66 | 7000 | 0.3021 | 0.9071 |
| 0.1578 | 1.78 | 7500 | 0.3552 | 0.9094 |
| 0.1582 | 1.9 | 8000 | 0.2896 | 0.9106 |
| 0.1448 | 2.02 | 8500 | 0.3343 | 0.9232 |
| 0.0989 | 2.14 | 9000 | 0.3882 | 0.9048 |
| 0.1098 | 2.26 | 9500 | 0.3218 | 0.9037 |
| 0.1056 | 2.38 | 10000 | 0.3426 | 0.9140 |
| 0.112 | 2.49 | 10500 | 0.3631 | 0.9025 |
| 0.1066 | 2.61 | 11000 | 0.4084 | 0.9106 |
| 0.126 | 2.73 | 11500 | 0.3191 | 0.9117 |
| 0.12 | 2.85 | 12000 | 0.4091 | 0.9048 |
| 0.1092 | 2.97 | 12500 | 0.3602 | 0.9060 |
| 0.0826 | 3.09 | 13000 | 0.3571 | 0.9163 |
| 0.0603 | 3.21 | 13500 | 0.4021 | 0.9243 |
| 0.0636 | 3.33 | 14000 | 0.3893 | 0.9186 |
| 0.0775 | 3.44 | 14500 | 0.4373 | 0.9151 |
| 0.0842 | 3.56 | 15000 | 0.4100 | 0.9174 |
| 0.0902 | 3.68 | 15500 | 0.3878 | 0.9037 |
| 0.092 | 3.8 | 16000 | 0.3723 | 0.9140 |
| 0.0978 | 3.92 | 16500 | 0.3492 | 0.9163 |
| 0.0682 | 4.04 | 17000 | 0.4597 | 0.9209 |
| 0.0481 | 4.16 | 17500 | 0.4668 | 0.9186 |
| 0.0561 | 4.28 | 18000 | 0.4083 | 0.9209 |
| 0.0571 | 4.39 | 18500 | 0.4040 | 0.9174 |
| 0.0511 | 4.51 | 19000 | 0.4032 | 0.9197 |
| 0.062 | 4.63 | 19500 | 0.4090 | 0.9140 |
| 0.0618 | 4.75 | 20000 | 0.4150 | 0.9106 |
| 0.0599 | 4.87 | 20500 | 0.3623 | 0.9209 |
| 0.0614 | 4.99 | 21000 | 0.4421 | 0.9083 |
| 0.0385 | 5.11 | 21500 | 0.4328 | 0.9197 |
| 0.0331 | 5.23 | 22000 | 0.4569 | 0.9209 |
| 0.0343 | 5.34 | 22500 | 0.5130 | 0.9094 |
| 0.0389 | 5.46 | 23000 | 0.4741 | 0.9232 |
| 0.0413 | 5.58 | 23500 | 0.4654 | 0.9060 |
| 0.0444 | 5.7 | 24000 | 0.4888 | 0.9014 |
| 0.0406 | 5.82 | 24500 | 0.4085 | 0.9220 |
| 0.031 | 5.94 | 25000 | 0.4760 | 0.9197 |
| 0.037 | 6.06 | 25500 | 0.5403 | 0.9094 |
| 0.0239 | 6.18 | 26000 | 0.5945 | 0.9060 |
| 0.0267 | 6.29 | 26500 | 0.4595 | 0.9140 |
| 0.0338 | 6.41 | 27000 | 0.4923 | 0.9106 |
| 0.0293 | 6.53 | 27500 | 0.6128 | 0.8979 |
| 0.0253 | 6.65 | 28000 | 0.5428 | 0.9083 |
| 0.0296 | 6.77 | 28500 | 0.5244 | 0.9002 |
| 0.0279 | 6.89 | 29000 | 0.5732 | 0.9048 |
| 0.0321 | 7.01 | 29500 | 0.5824 | 0.9094 |
| 0.0179 | 7.13 | 30000 | 0.6336 | 0.9094 |
| 0.0177 | 7.24 | 30500 | 0.7145 | 0.9140 |
| 0.0262 | 7.36 | 31000 | 0.5504 | 0.9083 |
| 0.0182 | 7.48 | 31500 | 0.5924 | 0.9071 |
| 0.0187 | 7.6 | 32000 | 0.5613 | 0.9151 |
| 0.012 | 7.72 | 32500 | 0.6129 | 0.9083 |
| 0.021 | 7.84 | 33000 | 0.5698 | 0.9106 |
| 0.024 | 7.96 | 33500 | 0.6231 | 0.9083 |
| 0.0136 | 8.08 | 34000 | 0.7155 | 0.9117 |
| 0.0088 | 8.19 | 34500 | 0.7918 | 0.9060 |
| 0.0129 | 8.31 | 35000 | 0.6727 | 0.9094 |
| 0.0113 | 8.43 | 35500 | 0.6531 | 0.9117 |
| 0.0141 | 8.55 | 36000 | 0.7040 | 0.9037 |
| 0.0111 | 8.67 | 36500 | 0.6551 | 0.9094 |
| 0.0111 | 8.79 | 37000 | 0.6928 | 0.9071 |
| 0.0116 | 8.91 | 37500 | 0.6313 | 0.9094 |
| 0.0107 | 9.03 | 38000 | 0.7104 | 0.9094 |
| 0.006 | 9.14 | 38500 | 0.7446 | 0.9117 |
| 0.0048 | 9.26 | 39000 | 0.7537 | 0.9140 |
| 0.0099 | 9.38 | 39500 | 0.7715 | 0.9140 |
| 0.0067 | 9.5 | 40000 | 0.7633 | 0.9117 |
| 0.0037 | 9.62 | 40500 | 0.7669 | 0.9128 |
| 0.006 | 9.74 | 41000 | 0.7714 | 0.9128 |
| 0.0063 | 9.86 | 41500 | 0.8020 | 0.9106 |
| 0.0107 | 9.98 | 42000 | 0.7985 | 0.9117 |

### Framework versions

- Transformers 4.21.3
- Pytorch 1.7.1
- Datasets 1.18.3
- Tokenizers 0.11.6
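A minimal inference sketch for a checkpoint like this (the repo id is a placeholder; substitute the actual path, and note the printed label names depend on the saved config -- in GLUE SST-2, label 1 is conventionally positive):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="your-username/bert-base-cased-sst2")
print(classifier("A gorgeous, witty, seductive movie."))
# e.g. [{'label': 'LABEL_1', 'score': 0.99}]
```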
Ayham/distilbert_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
---
license: mit
---

### csgo_awp_texture_map on Stable Diffusion

This is the `<csgo_awp_texture>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<csgo_awp_texture> 0](https://huggingface.co/sd-concepts-library/csgo-awp-texture-map/resolve/main/concept_images/0.jpeg)
![<csgo_awp_texture> 1](https://huggingface.co/sd-concepts-library/csgo-awp-texture-map/resolve/main/concept_images/2.jpeg)
![<csgo_awp_texture> 2](https://huggingface.co/sd-concepts-library/csgo-awp-texture-map/resolve/main/concept_images/4.jpeg)
![<csgo_awp_texture> 3](https://huggingface.co/sd-concepts-library/csgo-awp-texture-map/resolve/main/concept_images/1.jpeg)
![<csgo_awp_texture> 4](https://huggingface.co/sd-concepts-library/csgo-awp-texture-map/resolve/main/concept_images/3.jpeg)
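If you prefer plain `diffusers` code to the notebooks above, the usual Textual Inversion loading pattern looks roughly like this. The base model choice and the `learned_embeds.bin` filename follow the sd-concepts convention and are assumptions, not stated in this card:

```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# sd-concepts repos conventionally store the trained embedding as learned_embeds.bin.
embeds_path = hf_hub_download(
    repo_id="sd-concepts-library/csgo-awp-texture-map", filename="learned_embeds.bin"
)
learned_embeds = torch.load(embeds_path, map_location="cpu")
token, embedding = next(iter(learned_embeds.items()))  # token should be "<csgo_awp_texture>"

# Register the placeholder token and inject its trained embedding.
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a rifle skin in the style of {token}").images[0]
image.save("awp_style.png")
```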
Ayham/robertagpt2_xsum2
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
---
license: mit
---

### Hydrasuit on Stable Diffusion

This is the `<hydrasuit>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as an `object`:

![<hydrasuit> 0](https://huggingface.co/sd-concepts-library/hydrasuit/resolve/main/concept_images/0.jpeg)
![<hydrasuit> 1](https://huggingface.co/sd-concepts-library/hydrasuit/resolve/main/concept_images/2.jpeg)
![<hydrasuit> 2](https://huggingface.co/sd-concepts-library/hydrasuit/resolve/main/concept_images/1.jpeg)
![<hydrasuit> 3](https://huggingface.co/sd-concepts-library/hydrasuit/resolve/main/concept_images/3.jpeg)
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6-e18
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2022-09-16T03:10:03Z
---
license: mit
---

### Wayne Reynolds Character on Stable Diffusion

This is the `<warcharport>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<warcharport> 0](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/12.jpeg)
![<warcharport> 1](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/0.jpeg)
![<warcharport> 2](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/13.jpeg)
![<warcharport> 3](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/15.jpeg)
![<warcharport> 4](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/2.jpeg)
![<warcharport> 5](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/11.jpeg)
![<warcharport> 6](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/23.jpeg)
![<warcharport> 7](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/8.jpeg)
![<warcharport> 8](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/21.jpeg)
![<warcharport> 9](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/6.jpeg)
![<warcharport> 10](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/16.jpeg)
![<warcharport> 11](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/18.jpeg)
![<warcharport> 12](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/22.jpeg)
![<warcharport> 13](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/4.jpeg)
![<warcharport> 14](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/1.jpeg)
![<warcharport> 15](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/3.jpeg)
![<warcharport> 16](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/9.jpeg)
![<warcharport> 17](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/14.jpeg)
![<warcharport> 18](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/10.jpeg)
![<warcharport> 19](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/7.jpeg)
![<warcharport> 20](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/5.jpeg)
![<warcharport> 21](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/17.jpeg)
![<warcharport> 22](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/24.jpeg)
![<warcharport> 23](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/19.jpeg)
![<warcharport> 24](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/25.jpeg)
![<warcharport> 25](https://huggingface.co/sd-concepts-library/wayne-reynolds-character/resolve/main/concept_images/20.jpeg)
Azaghast/DistilBART-SCP-ParaSummarization
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-09-16T04:38:59Z
---
license: mit
---

### seraphimmoonshadow-art on Stable Diffusion

This is the `<seraphimmoonshadow-art>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

AHAHAHAHHAHHAHHAHAHAH...............................................................welllllll. My own art, failing me.

<img src="https://cdn.discordapp.com/attachments/1011389373775876116/1020201262244970527/kindaaaaa.png">
Bagus/wav2vec2-xlsr-greek-speech-emotion-recognition
[ "pytorch", "tensorboard", "wav2vec2", "el", "dataset:aesdd", "transformers", "audio", "audio-classification", "speech", "license:apache-2.0" ]
audio-classification
{ "architectures": [ "Wav2Vec2ForSpeechClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
null
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: parrot_paraphraser_on_T5-finetuned-xsum-v0
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# parrot_paraphraser_on_T5-finetuned-xsum-v0

This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4418
- Rouge1: 79.364
- Rouge2: 74.776
- Rougel: 78.997
- Rougelsum: 78.7013
- Gen Len: 18.6789

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 216 | 0.5427 | 79.4586 | 74.9115 | 78.8483 | 78.6557 | 18.6972 |
| No log | 2.0 | 432 | 0.4922 | 79.5229 | 74.8555 | 78.7762 | 78.5797 | 18.6881 |
| 0.5974 | 3.0 | 648 | 0.4628 | 79.4743 | 74.7622 | 78.7621 | 78.5631 | 18.6881 |
| 0.5974 | 4.0 | 864 | 0.4517 | 79.6842 | 75.2876 | 79.3457 | 79.0682 | 18.6881 |
| 0.4292 | 5.0 | 1080 | 0.4451 | 79.6571 | 75.2248 | 79.2939 | 79.0412 | 18.6881 |
| 0.4292 | 6.0 | 1296 | 0.4409 | 79.3363 | 74.6763 | 78.9595 | 78.7335 | 18.6789 |
| 0.3367 | 7.0 | 1512 | 0.4398 | 79.364 | 74.776 | 78.997 | 78.7013 | 18.6789 |
| 0.3367 | 8.0 | 1728 | 0.4407 | 79.364 | 74.776 | 78.997 | 78.7013 | 18.6789 |
| 0.3367 | 9.0 | 1944 | 0.4413 | 79.364 | 74.776 | 78.997 | 78.7013 | 18.6789 |
| 0.3012 | 10.0 | 2160 | 0.4418 | 79.364 | 74.776 | 78.997 | 78.7013 | 18.6789 |

### Framework versions

- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
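A minimal generation sketch for this checkpoint (the repo id is a placeholder, and the `paraphrase:` prefix follows the base Parrot model's convention, which may or may not apply to this fine-tune):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "your-username/parrot_paraphraser_on_T5-finetuned-xsum-v0"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("paraphrase: The meeting was postponed until next week.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20, num_beams=4)  # Gen Len above is ~18.7
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```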
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition
[ "pytorch", "wav2vec2", "audio-classification", "ja", "dataset:jtes", "transformers", "audio", "speech", "speech-emotion-recognition", "has_space" ]
audio-classification
{ "architectures": [ "HubertForSequenceClassification" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2022-09-16T07:34:58Z
---
language: ja
datasets:
- common_voice
metrics:
- wer
- cer
model-index:
- name: wav2vec2-xls-r-300m finetuned on Japanese Hiragana with no word boundaries by Hyungshin Ryu of SLPlab
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice Japanese
      type: common_voice
      args: ja
    metrics:
    - name: Test WER
      type: wer
      value: 90.66
    - name: Test CER
      type: cer
      value: 19.35
---

# Wav2Vec2-XLS-R-300M-Japanese-Hiragana

Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Japanese Hiragana characters using the [Common Voice](https://huggingface.co/datasets/common_voice) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) datasets. The sentence outputs do not contain word boundaries. Audio inputs should be sampled at 16kHz.

## Usage

The model can be used directly as follows:

```python
!pip install mecab-python3
!pip install unidic-lite
!pip install pykakasi

import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset, load_metric
import pykakasi
import MeCab
import re

# load datasets, processor, and model
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
cer = load_metric("cer")

PTM = "slplab/wav2vec2-xls-r-300m-japanese-hiragana"
print("PTM:", PTM)
processor = Wav2Vec2Processor.from_pretrained(PTM)
model = Wav2Vec2ForCTC.from_pretrained(PTM)
device = "cuda"
model.to(device)

# preprocess datasets
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
chars_to_ignore_regex = "[、,。]"

def speech_file_to_array_fn_hiragana_nospace(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).strip()
    batch["sentence"] = ''.join([d['hira'] for d in kakasi.convert(batch["sentence"])])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    resampler = torchaudio.transforms.Resample(sampling_rate, 16000)
    batch["speech"] = resampler(speech_array).squeeze()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn_hiragana_nospace)

# evaluate
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to(device)).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

for i in range(10):
    print("="*20)
    print("Prd:", result[i]["pred_strings"])
    print("Ref:", result[i]["sentence"])

print("WER: {:.2f}%".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:.2f}%".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

| Original Text | Prediction |
| ------------- | ------------- |
| この料理は家庭で作れます。 | このりょうりはかていでつくれます |
| 日本人は、決して、ユーモアと無縁な人種ではなかった。 | にっぽんじんはけしてゆうもあどむえんなじんしゅではなかった |
| 木村さんに電話を貸してもらいました。 | きむらさんにでんわおかしてもらいました |

## Test Results

**WER:** 90.66%, **CER:** 19.35%

## Training

Trained on JSUT and train+valid set of Common Voice Japanese. Tested on test set of Common Voice Japanese.
Bakkes/BakkesModWiki
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# ddpm-geeve-128

## Model description

This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training data

[TODO: describe the data used to train the model]

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_power: None
- ema_max_decay: None
- mixed_precision: fp16

### Training results

📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-128/tensorboard?#scalars)
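The TODO above can be filled with the standard unconditional-sampling pattern. A sketch assuming a recent `diffusers` version, with the repo id taken from the TensorBoard link (the 128x128 output size is inferred from the model name):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("geevegeorge/ddpm-geeve-128")
image = pipeline().images[0]  # one unconditional sample, presumably 128x128
image.save("sample.png")
```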
Barleysack/AERoberta
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
---
license: mit
tags:
- generated_from_trainer
datasets:
- indonlu
model-index:
- name: Modelroberta
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Modelroberta

This model is a fine-tuned version of [cahya/roberta-base-indonesian-522M](https://huggingface.co/cahya/roberta-base-indonesian-522M) on the indonlu dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
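For reference, a sketch of how the hyperparameters listed above map onto `TrainingArguments` (the `output_dir` is arbitrary):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Modelroberta",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```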
Barytes/hellohf
[ "tf", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-09-16T09:09:15Z
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - metrics:
    - type: mean_reward
      value: 1245.42 +/- 483.73
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
---

# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
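A sketch of what the TODO usage section usually looks like. The repo id and filename below are assumptions -- check the repository's file list -- and an agent trained with `VecNormalize` may also need its saved normalization statistics to reach the reported score:

```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="<user>/a2c-AntBulletEnv-v0",   # placeholder repo id
    filename="a2c-AntBulletEnv-v0.zip",     # placeholder filename
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```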
BatuhanYilmaz/bert-finetuned-nerxD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-16T09:35:53Z
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: parrot_paraphraser_on_T5-finetuned-xsum-v5
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# parrot_paraphraser_on_T5-finetuned-xsum-v5

This model is a fine-tuned version of [prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0345
- Rouge1: 86.5078
- Rouge2: 84.8978
- Rougel: 86.4798
- Rougelsum: 86.4726
- Gen Len: 17.8462

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.0663 | 1.0 | 2002 | 0.0539 | 86.0677 | 84.063 | 86.0423 | 86.0313 | 17.8671 |
| 0.0449 | 2.0 | 4004 | 0.0388 | 86.4564 | 84.7606 | 86.432 | 86.4212 | 17.8501 |
| 0.0269 | 3.0 | 6006 | 0.0347 | 86.4997 | 84.8907 | 86.4814 | 86.4744 | 17.8501 |
| 0.023 | 4.0 | 8008 | 0.0345 | 86.5078 | 84.8978 | 86.4798 | 86.4726 | 17.8462 |

### Framework versions

- Transformers 4.22.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
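For readers reproducing the table, a sketch of how ROUGE numbers like those above are typically computed with the `evaluate` library (the toy strings are made up):

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the cat sat on the mat"]
references = ["the cat is sitting on the mat"]
print(rouge.compute(predictions=predictions, references=references))
# returns rouge1 / rouge2 / rougeL / rougeLsum scores, as reported in the table
```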
BatuhanYilmaz/code-search-net-tokenizer1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model yiwuanwow/autotrain-anli-1480954206 is restricted and you are not in the authorized list. Visit https://huggingface.co/yiwuanwow/autotrain-anli-1480954206 to ask for access.
Baybars/wav2vec2-xls-r-300m-cv8-turkish
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "tr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-09-16T10:55:11Z
--- license: - apache-2.0 - bsd-3-clause tags: - summarization - summary - booksum - long-document - long-form datasets: - kmfoda/booksum metrics: - rouge languages: en widget: - text: large earthquakes along a given fault segment do not occur at random intervals because it takes time to accumulate the strain energy for the rupture. The rates at which tectonic plates move and accumulate strain at their boundaries are approximately uniform. Therefore, in first approximation, one may expect that large ruptures of the same fault segment will occur at approximately constant time intervals. If subsequent main shocks have different amounts of slip across the fault, then the recurrence time may vary, and the basic idea of periodic mainshocks must be modified. For great plate boundary ruptures the length and slip often vary by a factor of 2. Along the southern segment of the San Andreas fault the recurrence interval is 145 years with variations of several decades. The smaller the standard deviation of the average recurrence interval, the more specific could be the long term prediction of a future mainshock. example_title: earthquakes - text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates are fed into a neural network that predicts values in the reconstructed domain. Then, this domain is mapped to the sensor domain where sensor measurements are available as supervision. Class and Section Problems Addressed Generalization (Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid Representations (Section 3) Computation & memory efficiency, representation capacity, editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section 5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section 6) Edit ability, constraints, regularization. Table 2: The five classes of techniques in the neural field toolbox each addresses problems that arise in learning, inference, and control. (Section 3). We can supervise reconstruction via differentiable forward maps that transform Or project our domain (e.g, 3D reconstruction via 2D images; Section 4) With appropriate network architecture choices, we can overcome neural network spectral biases (blurriness) and efficiently compute derivatives and integrals (Section 5). Finally, we can manipulate neural fields to add constraints and regularizations, and to achieve editable representations (Section 6). Collectively, these classes constitute a ''toolbox'' of techniques to help solve problems with neural fields There are three components in a conditional neural field: (1) An encoder or inference function € that outputs the conditioning latent variable 2 given an observation 0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS a latent code Or feature code_ (2) A mapping function 4 between Z and neural field parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the most probable z given the observations O: argmaxz P(2/0). The decoder maximizes the inverse conditional probability to find the most probable 0 given Z: arg- max P(Olz). We discuss different encoding schemes with different optimality guarantees (Section 2.1.1), both global and local conditioning (Section 2.1.2), and different mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate a plausible 3D surface shape given a partial or noisy point cloud. 
We need a suitable prior over the sur- face in its reconstruction domain to generalize to the partial observations. A neural network expresses a prior via the function space of its architecture and parameters 0, and generalization is influenced by the inductive bias of this function space (Section 5).' example_title: scientific paper - text: 'Is a else or outside the cob and tree written being of early client rope and you have is for good reasons. On to the ocean in Orange for time. By''s the aggregate we can bed it yet. Why this please pick up on a sort is do and also M Getoi''s nerocos and do rain become you to let so is his brother is made in use and Mjulia''s''s the lay major is aging Masastup coin present sea only of Oosii rooms set to you We do er do we easy this private oliiishs lonthen might be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics. As you can see, I''m not socially my name is Michael Zelinger. I''m one of the task for this class and you might have already seen me in the first lecture where I made a quick appearance. I''m also going to give the tortillas in the last third of this course. So to give you a little bit about me, I''m a old student here with better Bulman and my research centres on casual inference applied to biomedical disasters, so that could be genomics or that could be hospital data. If any of you is interested in writing a bachelor thesis, a semester paper may be mastathesis about this topic feel for reach out to me. you have my name on models and my email address you can find in the directory I''d Be very happy to talk about it. you do not need to be sure about it, we can just have a chat. So with that said, let''s get on with the lecture. There''s an exciting topic today I''m going to start by sharing some slides with you and later on during the lecture we''ll move to the paper. So bear with me for a few seconds. Well, the projector is starting up. Okay, so let''s get started. Today''s topic is a very important one. It''s about a technique which really forms one of the fundamentals of data science, machine learning, and any sort of modern statistics. It''s called cross validation. I know you really want to understand this topic I Want you to understand this and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding cross validation. So to set the stage for this, I Want to introduce you to the validation problem in computational statistics. So the problem is the following: You trained a model on available data. You fitted your model, but you know the training data you got could always have been different and some data from the environment. Maybe it''s a random process. You do not really know what it is, but you know that somebody else who gets a different batch of data from the same environment they would get slightly different training data and you do not care that your method performs as well. On this training data. you want to to perform well on other data that you have not seen other data from the same environment. So in other words, the validation problem is you want to quantify the performance of your model on data that you have not seen. So how is this even possible? How could you possibly measure the performance on data that you do not know The solution to? This is the following realization is that given that you have a bunch of data, you were in charge. You get to control how much that your model sees. It works in the following way: You can hide data firms model. 
Let''s say you have a training data set which is a bunch of doubtless so X eyes are the features those are typically hide and national vector. It''s got more than one dimension for sure. And the why why eyes. Those are the labels for supervised learning. As you''ve seen before, it''s the same set up as we have in regression. And so you have this training data and now you choose that you only use some of those data to fit your model. You''re not going to use everything, you only use some of it the other part you hide from your model. And then you can use this hidden data to do validation from the point of you of your model. This hidden data is complete by unseen. In other words, we solve our problem of validation.' example_title: transcribed audio - lecture - text: 'Transformer-based models have shown to be very useful for many NLP tasks. However, a major limitation of transformers-based models is its O(n^2)O(n 2) time & memory complexity (where nn is sequence length). Hence, it''s computationally very expensive to apply transformer-based models on long sequences n > 512n>512. Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention try to remedy this problem by approximating the full attention matrix. You can checkout 🤗''s recent blog post in case you are unfamiliar with these models. BigBird (introduced in paper) is one of such recent models to address this issue. BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s attention) and can handle sequences up to a length of 4096 at a much lower computational cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences such as long documents summarization, question-answering with long contexts. BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this post is to give the reader an in-depth understanding of big bird implementation & ease one''s life in using BigBird with 🤗Transformers. But, before going into more depth, it is important to remember that the BigBird''s attention is an approximation of BERT''s full attention and therefore does not strive to be better than BERT''s full attention, but rather to be more efficient. It simply allows to apply transformer-based models to much longer sequences since BERT''s quadratic memory requirement quickly becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention would be preferred over block sparse attention (which we are going to discuss in this post). If you wonder why we need more compute when working with longer sequences, this blog post is just right for you! Some of the main questions one might have when working with standard BERT-like attention include: Do all tokens really have to attend to all other tokens? Why not compute attention only over important tokens? How to decide what tokens are important? How to attend to just a few tokens in a very efficient way? In this blog post, we will try to answer those questions. What tokens should be attended to? We will give a practical example of how attention works by considering the sentence ''BigBird is now available in HuggingFace for extractive question answering''. In BERT-like attention, every word would simply attend to all other tokens. Let''s think about a sensible choice of key tokens that a queried token actually only should attend to by writing some pseudo-code. Will will assume that the token available is queried and build a sensible list of key tokens to attend to. 
>>> # let''s consider following sentence as an example >>> example = [''BigBird'', ''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'', ''question'', ''answering''] >>> # further let''s assume, we''re trying to understand the representation of ''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an empty `set` and fill up the tokens of our interest as we proceed in this section. >>> key_tokens = [] # => currently ''available'' token doesn''t have anything to attend Nearby tokens should be important because, in a sentence (sequence of words), the current word is highly dependent on neighboring past & future tokens. This intuition is the idea behind the concept of sliding attention.' example_title: bigbird blog intro - text: 'To be fair, you have to have a very high IQ to understand Rick and Morty. The humour is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical viewer''s head. There''s also Rick''s nihilistic outlook, which is deftly woven into his characterisation- his personal philosophy draws heavily from Narodnaya Volya literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these jokes, to realise that they''re not just funny- they say something deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots- of course they wouldn''t appreciate, for instance, the humour in Rick''s existential catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s Russian epic Fathers and Sons. I''m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius wit unfolds itself on their television screens. What fools.. how I pity them. 😂 And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it. It''s for the ladies'' eyes only- and even then they have to demonstrate that they''re within 5 IQ points of my own (preferably lower) beforehand. 
Nothin personnel kid 😎' example_title: Richard & Mortimer parameters: max_length: 48 min_length: 2 no_repeat_ngram_size: 3 encoder_no_repeat_ngram_size: 3 early_stopping: true length_penalty: 0.1 num_beams: 2 model-index: - name: pszemraj/pegasus-x-large-book-summary results: - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - type: rouge value: 33.1401 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ1NjY1OGVjYWEwMzBjMzk3ZmMyZDA0ZTcxOTdmZTUxNTc0OGYxYmY3MzJkMzFmYTVjNzU2ZTk4MzE0NWMzMSIsInZlcnNpb24iOjF9.PSHB6DMF6tkwSw5nsFE57a2ApRAy_tkS6ziKA6PSTWddEdaqfca4pfig6_olmRmcS4KxN6HHcsmioHzv4LJQBw - type: rouge value: 9.3095 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzk3MTA3NmY1OGE3MzFjZTJhYWYzNGU4NTUzMTgwM2Y1NWZjMmEyNDNmNmEzYmQzZThjOGExMjc2ZjAyZjMzZCIsInZlcnNpb24iOjF9.tfgp8p-WlkVrfducTSg4zs-byeZMCmdZw1aizPQHXm_qRAwGtKcuVkZcmza5Y3o3VqsAEmGzg5HQD1vnZvWIDA - type: rouge value: 24.8552 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVmMTIwNDQwNTI4MmI2MmY1ODc1Mjk0NGQ5ZWE4ZTYzOGNkMjY2ZmJhMjg2MTZlNTdhYTA2ZDAxNTFjMjA2MSIsInZlcnNpb24iOjF9.9HLgy9842oIDm6ABb3L94R1P4zAqTI0QN8aP62xzIyDxUXTbWw68PEDufYLiBJbTgZ8ElopZ9I7aou2zCgXeAA - type: rouge value: 29.0391 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmNhYWJjYjdjMzMxMmE4ZTE4NGEzMDdmZDZjODI5ZWRjZWJmYTEyZGIzYWQ2NjM3YzQ4MjI4ZTM4MmU5MzRjZSIsInZlcnNpb24iOjF9.d2yoVdmxjVJnsgIYFiLuaBO5Krgw4Axl5yeOSTKrvHygrAxoqT1nl4anzQiyoR3PwYBXwBkwmgpJUfZ7RNXtDQ - type: loss value: 2.288182497024536 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzM5NGIwODMxOTA3MTY3ODc2ZDczYTNmMTMwM2QyZmNlZjFmZDJjMGY3NWNkMDEyYzA4OTA2ZDRiODY3Zjg4OCIsInZlcnNpb24iOjF9.8k9mC050OS7mQSR9oA8liDRDQvEx1VxmTXGLmDYJVYYtTh2HYJFGP8Vy_krocFRIYDxh-IHPEOOSr5NrLMWHBA - type: gen_len value: 45.2173 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWZhNzQ5OTQ5Yjg5YjhlOTZiZmJhZjZiODNmY2E2OTg4YTg4NWVhYzRkNzM2Mzk4NzdlMDgxM2M4NjY2YzhhYSIsInZlcnNpb24iOjF9.tDEEsPUclZDygAdGhNrBGrF24vR8ao08Nw7hmtUt5lmSZZZK_u-8rpz97QgVS6MCJdjFVnbYC4bkFnlQWI_FAA - task: type: summarization name: Summarization dataset: name: launch/gov_report type: launch/gov_report config: plain_text split: test metrics: - type: rouge value: 39.7279 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTAxODk3OTUwMTIzODU3NzU2YzAzZjE2NTM3MzBjNDA0ZWRmZGU3NWUzNTg1YThhNDQ1NjQ5ZmM3OWI2YzBhNSIsInZlcnNpb24iOjF9.vnNKucBNt2-nIyODj9P2HeaWPX5AQR8L-DL8QzrO7kj58-vZnjT6hsAGmepRNzdZ1TLF-3j2J2plcNJ8lUO8Dg - type: rouge value: 10.8944 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjYzMmIxOTJmZjkxOGI5N2U0NTRmMmQwOGJhMzMxYWIzMWMzYzUwMDEyMDdiZDQ2YTUzOWU0OTViMTI2YTAwYiIsInZlcnNpb24iOjF9.De0PaAikWqfWpoIXTCYP-mSFu3PUATLX08Qq74OHXM8784heFVDX1E1sXlh_QbbKJbuMuZtTKM4qr7oLUizOAw - type: rouge value: 19.7018 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzI3MjQzOGQ3MGE3NDNkZTEyMWRkYjUyYTYzNDEwOWVjMGFmNTBiZjE4ZTBhMGYzMmI1Yzk0YjBmYmIzMWMxZSIsInZlcnNpb24iOjF9.FVikJ5Ma0gUgM-tpbomWXnC4jtmvhxqikPqCk84t4IbIdU0CIYGTQEONiz-VqI0fJeNrnTS6lxpBv7XxKoq3BQ - type: rouge value: 36.5634 name: ROUGE-LSUM verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTI2OTVmNDZiZWE5ZjNkODIwZjJiNTU2ZjJjYjczODUwM2JiNDEzYmE3N2U5YWM5NzJjOWEzMmYzZjdlYWJmYyIsInZlcnNpb24iOjF9.poR4zcqRvdaierfWFdTa53Cv6ZbNbnRwyRTi9HukHF5AWAQgc6zpBLkwOYFYoWjuSH83ohWeMM3MoIdw3zypBw - type: loss value: 2.473011016845703 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDFmMjg3NWQ2YTMxMTc1OGZiYWYzNjg5NDY3MWE4MjY5ZDQxZDZhZGI1OTc5MzZkZGEzYmVlNWFiMzZjNDdhNCIsInZlcnNpb24iOjF9.05nKB3SmEfFKSduJqlleF4Fd2_IhwJS8eTOrnzZYCQQfLCfpJAZLhp3eLQCuBY4htd-FNrZftrThL66zVxyrCQ - type: gen_len value: 212.8243 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGNjMTg4ZDZlZjAxZGNhN2M0NWI0ZTA0OWEzNDkzNDAzOTJhODA2MmVkODI4YjYzN2FiOTU1ZDMwM2VlNWMyYyIsInZlcnNpb24iOjF9.WYx6XJFKokY2heoN-jpAMp1Z1gsyJus3zpktQgNd0FOYJxOUqW40A0kkHtd15y4dUhsbccLpuJGY1fNJgHOiDw - task: type: summarization name: Summarization dataset: name: billsum type: billsum config: default split: test metrics: - type: rouge value: 42.1065 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDJhNDM2MWEwMjJlYjRmZTVkYzljODcwMzlmMGUxMDA4ZmRjNjM0NmY3ZWJlMmZjNGI3NDQ3NTQyOTQ3MjBkNSIsInZlcnNpb24iOjF9.l1MiZbXyFyXAcsfFChMrTvSaBhzBR6AuDnBuII8zY3Csz3ShWK0vo09MkQdZ1epe8PKWV9wwUBuJyKk3wL7MDw - type: rouge value: 15.4079 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY3NDBkYTVkNjdhY2I0ZmY0NTA4YzVkMGE5YWE5ODdjOGE1MDhkOTJhOWY3NmI2ZWI1MGU2MGI1NDRlYjI3MSIsInZlcnNpb24iOjF9.VN-5eK2SzFDCJnFTHHu7XCU_lynaxW_JEDc3llmcNo_ffDgRmISHHGaqV7fPFymBBMXpPly7XblO_sukyqj1Cg - type: rouge value: 24.8814 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDYyNGZmNDY3MTY4YzI4ZjZhODE0NGIyN2ZkOGEyYzM3MWZjM2QzZTg5ZjNmZmYzZDE5NzhiZDQ4OGM1YjNiMyIsInZlcnNpb24iOjF9.L73M1M5XdMQkf8zSdfLN0MUrxtO0r6UiLjoOkHfrIGbWNsNJ8tU5lciYFNIhJrICUL8LchCsFqR9LAClKS4bCg - type: rouge value: 36.0375 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTBlMTQ5OTQxNTA3ZmFiMGYyZWQ0MGM0ODY2YWI3MzgyNjkwNzQyM2FmNGRjMzc3MjJmZDZkOWY4M2RhZTg2MSIsInZlcnNpb24iOjF9.IiMSSVahBgH8n34bGCC_DDGpujDXQbIvGhlcpVV2EBVQLLWUqcCy5WwBdbRrxPC-asBRCNERQxj8Uii4FvPsDQ - type: loss value: 1.9130958318710327 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTg2NTMxZDE3MDg3MDFkMTYxNjY1OTc5YjQ4ODcyMGUxMTFiZjJiNDgyYWZhN2NjZmE1MDQ1NTRmZGY0NjQzZSIsInZlcnNpb24iOjF9.kADUBMO8i6-oGDDt1cOiGMrGcMkF_Qc1jSpS2NSFyksDRusQa_YuuShefF4DuHVEr3CS0hNjjRH9_JBeX9ZQDg - type: gen_len value: 179.2184 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjM4NGNiMTY3YzZjMzg4MTRiMDdiZDFiMzA1ZDIyMDM2MDk1OWRhYWQzN2UxZDNlODIxOWVhY2JlYjk4Mjk5YyIsInZlcnNpb24iOjF9.nU8ImMNWgjg9BKjUBJQLFaJOBq3kyIne8ldlpL0OV0e4888wOntIAcJP0dCCYfRSLVmZuXQ1M8cpDuTf50hNCw - task: type: summarization name: Summarization dataset: name: kmfoda/booksum type: kmfoda/booksum config: kmfoda--booksum split: test metrics: - type: rouge value: 35.2154 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWQ5MGMzNDc4MDBiNmRiNDY5ZDM4N2QzYTJlYTNiYTcwNDBlMzdlM2I4N2VmM2ZjMmQ3NGU3OTRlMTMzMTg3NyIsInZlcnNpb24iOjF9.E55gu7HvMwc4HejF3YOD6yqQJj7_6GCoCMWm78sY5_w2glR-oM98tu9IsG27VaPva7UklxsspzT2DIVaVKY0CQ - type: rouge value: 6.8702 name: ROUGE-2 verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjFhN2JlYzlmMGZmYzkwYjBlNjY4YzhlYzNmMTdmZWYyYmU3NWI0ZTRkMTgxNmRiM2EyZWMyMWFjY2JkNzg1MCIsInZlcnNpb24iOjF9.I9BoHbGt8LLNtLAssIXm9tQ4lHqFCMt0zJS_zTezzxGRMS5On71c3jnlzrDtwEm6wjmZEwYIJK8qqJh-Qa5YAA - type: rouge value: 17.6693 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGZlZjcwOTZjMmNjZWFkM2M5Zjg1OTgzMzcxOTM2Y2RkMzY4NGU2NDE2MTVjMjcyMWIwNWI4ODc0YTY3YTA2MSIsInZlcnNpb24iOjF9.Ou1C6U6PrOtXPxlk9PMucdJ_vlnVnSk94QrLJL4b_g2pcY3D80Xrw09iz4BTOPzZ2UTNBLyn8YdLY3m2vHpiAQ - type: rouge value: 32.8365 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmIzMGQ5MzQ1MjI4MTU0ZGZkZTRhODllNWQyOTQ4ZjA5YWE4ZTJjMzQ2ZWQzOGFiMWUzZDMxOTU5NzkxYjliZiIsInZlcnNpb24iOjF9.2mYURQZYo7e3AY0tfkpqFMNhoHvrysvBXza-XYYrX_xLpruMU9Gzrwc3jvpi2wtp4eeyhzIiZJvH0O6la6zxCg - type: loss value: 2.9878039360046387 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGU0ODBmN2I3OGFkNTFiM2I3YWQyNmUzNzUwYzEwNzczZWEwZjIxYTAwZDE2ZTIwMGE3ZGNmMDQzNTFmNjEwYyIsInZlcnNpb24iOjF9.0IKWIImKTXqysQUb2IMPk2eeHlOcBjndiPcU42nfFBMhRTqeXdBqOCP6cidlho7pVN4hsC-77ArJ9pZlbTFuBg - type: gen_len value: 200.6785 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDUzYTE3MmIxZGM3MWI1MjNhMTU3MTdkMjJjNjY5Y2UzYTdjYWRiY2I4MmUxMDY4NTA5NWZjYWU0NzliODdkYiIsInZlcnNpb24iOjF9.BqmCaWzbCMNUied6zNO744Dl-0LC47FCIv-l8kDjkhSkwQcb_hi93VYts5PTsrFY_MmM8j7AsY1PiFr6nNFMBQ - task: type: summarization name: Summarization dataset: name: big_patent type: big_patent config: y split: test metrics: - type: rouge value: 37.376 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWI4ZjMxODcxMThiMzE3NjQ3Zjg0NzhmZjlhY2ZmYjQwMGY5ZjlkZGY1MzZmY2M5YTU4NmY1Y2NhZDA3YWFkOCIsInZlcnNpb24iOjF9.sYh4IynXgOpVetYYSWUp0v5QZWvXC1x7_uJR0LZUxaeYKEc4yfICNmDOPzNzoroaV4ELeOaPjHQpYVm-lpAHBA - type: rouge value: 11.4432 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTZkOGIyYzU3YTQ5ZTFmMDU3MjQ5ZWM2NGQ1MzgwMDYyZDkxN2Q2YjgyZTkzMTEyYjczMGJiYmNkZmU5MTQ3NSIsInZlcnNpb24iOjF9.Qk38acpjPjU64Z1nXEuqMXjKZrGvdC9oY586EjuCPeEAJCSzKimp8FsB-1QrjMH73q6rN2CdumJUxih6HF-KAA - type: rouge value: 22.2754 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzlmOTUxYmEzYzYyYmVjNGZlNzNiZWIwZmQ5OWVlY2U3NTBiZDExYWUwODQ0Y2ZjMmQyMTNmMTlmNjdmZWUwNCIsInZlcnNpb24iOjF9.bUVhxaepySyaityby71j6h4YO_l4x8OSeZoblagwUMYGXRc0Ej286QzEtZFeRGygMJ5sjUN_loWCtOmAnHY2BA - type: rouge value: 32.5087 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDEyNjM5NjAzYTNjN2MwZTY4MWY2Y2U5YWUyM2Y1YjAyNjBhZTM0YTAyZjM5N2M1ZDkxOWUxNzE2OWZkYTBmMSIsInZlcnNpb24iOjF9.QfMHkcoAR3xqzsgL1xjHk3Lui1xhE12pJKvYujQ_h5o6PBXT79dsENsrqDGGBjiKdTKNwWqADgaviy1VrWMDCQ - type: loss value: 2.9867310523986816 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTUzM2Q5MmE5MzU4YmFlMjFiMmUzZGU2NDAzMTQ1Y2NjZDVlYWI3NGE5MjM0NmMxMjdiOWI3MTU0NDk3NmNkZiIsInZlcnNpb24iOjF9.VoQqu6ZU3AR_cji82UkpvbLnTmZ17fZmR2E4DeonjCyTZpyyfvUsQ2nbKDovQf34DBkYXENk42EUsUF1mBZNBg - type: gen_len value: 172.7776 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTEzNTMyMDY1N2Q5ZTMxNjNlMTI0Nzk5ZDc1ZWQ5Y2IwZWM0NWNhNWY2MTk3YTRkYzUwMTI4NjZiOWVhOGQwYSIsInZlcnNpb24iOjF9.-Rek2VFmGqIEgqeFoxU_0aCWdFbGYi9BV5c7x-izm9_4vtZdYQ4ITXm4T8C3UlpOax60veJQt2Uax5vyiFc9Ag --- # pszemraj/pegasus-x-large-book-summary <a 
href="https://colab.research.google.com/gist/pszemraj/6c326c0649233ab017d63adc36958d1a/pegasus-x-large-booksum-demo.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Get SparkNotes-esque summaries of arbitrary text! Due to the model size, it's recommended to try it out in Colab (linked above) as the API textbox may time out. This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on the `kmfoda/booksum` dataset for approx eight epochs. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters #### Epochs 1-4 TODO #### Epochs 5 & 6 The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas - lr_scheduler_type: constant_with_warmup - data type: TF32 - num_epochs: 2 #### Epochs 7 & 8 - epochs 5 & 6 were trained with 12288 tokens input - this fixes that with 2 epochs at 16384 tokens input The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: _ADAN_ using lucidrains' `adan-pytorch` with default betas - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 ### Framework versions - Transformers 4.22.0 - Pytorch 1.11.0a0+17540c5 - Datasets 2.4.0 - Tokenizers 0.12.1
BearThreat/distilbert-base-uncased-finetuned-cola
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-09-16T12:01:39Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: SentimentBert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SentimentBert This model is a fine-tuned version of [cahya/bert-base-indonesian-522M](https://huggingface.co/cahya/bert-base-indonesian-522M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2005 - Accuracy: 0.965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 275 | 0.7807 | 0.715 | | 0.835 | 2.0 | 550 | 1.0588 | 0.635 | | 0.835 | 3.0 | 825 | 0.2764 | 0.94 | | 0.5263 | 4.0 | 1100 | 0.1913 | 0.97 | | 0.5263 | 5.0 | 1375 | 0.2005 | 0.965 | ### Framework versions - Transformers 4.22.0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
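Once published, the classifier can be queried with the text-classification pipeline; a minimal sketch, where the repo id is hypothetical since only the local name `SentimentBert` is given:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
classifier = pipeline("text-classification", model="your-username/SentimentBert")

# Indonesian input, matching the cahya/bert-base-indonesian-522M base model.
print(classifier("Filmnya sangat bagus!"))
```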
Bella4322/Sarah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
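Since PAN-X.de is a named-entity task, the fine-tuned checkpoint can be queried with the token-classification pipeline; a minimal sketch, with a hypothetical repo id:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Mountain View."))
```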
BenGeorge/MyModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8476587301587302 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5962566844919787 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5964391691394659 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7559755419677598 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.87 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5043859649122807 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5902777777777778 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9135151423836071 - name: F1 (macro) type: f1_macro value: 0.9077476621792441 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8568075117370892 - name: F1 (macro) type: f1_macro value: 0.6862949146842514 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6793066088840737 - name: F1 (macro) type: f1_macro value: 0.6733689760415943 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9559713431174793 - name: F1 (macro) type: f1_macro value: 0.8691131481598299 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8934503290504543 - name: F1 (macro) type: f1_macro value: 0.8925413349776822 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5962566844919787 - Accuracy on SAT: 0.5964391691394659 - Accuracy on BATS: 0.7559755419677598 - Accuracy on U2: 0.5043859649122807 - Accuracy on U4: 0.5902777777777778 - Accuracy on Google: 0.87 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9135151423836071 - Micro F1 score on CogALexV: 0.8568075117370892 - Micro F1 score on EVALution: 0.6793066088840737 - Micro F1 score on K&H+N: 0.9559713431174793 - Micro F1 score on ROOT09: 0.8934503290504543 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8476587301587302 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
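The returned vectors can be compared with cosine similarity to score how alike two relations are; a minimal sketch, assuming `get_embedding` returns a flat vector as shown above:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated")

# Embed two word pairs and compare their relation vectors.
a = np.asarray(model.get_embedding(["Tokyo", "Japan"]))
b = np.asarray(model.get_embedding(["Paris", "France"]))
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)  # analogous relations (capital-of here) should score higher
```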
BenWitter/DialoGPT-small-Tyrion
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 225.16 +/- 74.59 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the repo id and filename below are placeholders for wherever this checkpoint is published:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- substitute this model's actual Hub repository.
checkpoint = load_from_hub(repo_id="your-username/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Beri/legal-qa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 271.78 +/- 14.35 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the repo id and filename below are placeholders for wherever this checkpoint is published:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- substitute this model's actual Hub repository.
checkpoint = load_from_hub(repo_id="your-username/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
Bharathdamu/wav2vec2-model-hindi-stt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - scientific names - text generation license: cc-by-sa-4.0 --- # t5-base-sci-names Biodiversity literature is dedicated to the identification, documentation, and categorization of plants, fungi, animals, and other living organisms. Correctly extracting the name of an organism within these documents involves finding the entire scientific name–including the genus, specific epithet, and author name. Extracting these names allows biologists to access documents about a species more comprehensively, and to track an organism’s history of documentation, which includes biological changes and changes in how scientists describe them. **t5-base-sci-names** uses advances in text-to-text generation to generate scientific names and authors from biodiversity literature. This model was trained on hand-labeled biodiversity texts, including labeled information about a mentioned organism's genus (abbreviated and expanded), specific epithet, and author. This model was trained to output 0-N scientific names with specific prefixes (e.g. "genus = " or "epithet = ") and performs best with anywhere from 20-120 words. You can also use the model in this tutorial for [scientific names generation](https://colab.research.google.com/drive/1GEpnCaMJYiPIhuZiDJ1X1pZsGtGSm8Ds?usp=sharing). Thanks to Damon Little and Nelson Salinas at the New York Botanical Gardens for their support. *Note that this model is still a work in progress. Any feedback is welcome.*
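A minimal generation sketch in the spirit of the linked tutorial; the Hub path is hypothetical (only the local name is given above), and the expected output format follows the prefixes described in the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id -- substitute the actual Hub path of t5-base-sci-names.
name = "your-username/t5-base-sci-names"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

passage = ("Quercus alba L., the white oak, is a long-lived hardwood "
           "widespread across eastern North America.")
inputs = tokenizer(passage, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected style of output (per the card): "genus = Quercus epithet = alba author = L."
```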
BigDaddyNe1L/Hhaa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-16T16:02:11Z
--- language: - en thumbnail: "https://s3.amazonaws.com/moonup/production/uploads/1663756797814-62bd5f951e22ec84279820e8.png" tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image datasets: - lambdalabs/pokemon-blip-captions --- __Stable Diffusion fine-tuned on Pokémon by [Lambda Labs](https://lambdalabs.com/).__ Put in a text prompt and generate your own Pokémon character, no "prompt engineering" required! If you want to find out how to train your own Stable Diffusion variants, see this [example](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning) from Lambda Labs. ![image.png](https://s3.amazonaws.com/moonup/production/uploads/1663756797814-62bd5f951e22ec84279820e8.png) > Girl with a pearl earring, Cute Obama creature, Donald Trump, Boris Johnson, Totoro, Hello Kitty ## Usage Make sure you have set up the Stable Diffusion repo and downloaded `ema-only-epoch=000142.ckpt` ```bash python scripts/txt2img.py \ --prompt 'robotic cat with wings' \ --outdir 'outputs/generated_pokemon' \ --H 512 --W 512 \ --n_samples 4 \ --config 'configs/stable-diffusion/pokemon.yaml' \ --ckpt ema-only-epoch=000142.ckpt ``` You can also use the normal stable diffusion inference config. ## Model description Trained on [BLIP captioned Pokémon images](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) using 2xA6000 GPUs on [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud) for around 15,000 steps (about 6 hours, at a cost of about $10). ## Links - [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) - [Captioned Pokémon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) - [Model weights in Diffusers format](https://huggingface.co/lambdalabs/sd-pokemon-diffusers) - [Original model weights](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) - [Training code](https://github.com/justinpinkney/stable-diffusion) Trained by [Justin Pinkney](justinpinkney.com) ([@Buntworthy](https://twitter.com/Buntworthy)) at [Lambda Labs](https://lambdalabs.com/).
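The Diffusers-format weights linked above can also be used directly; a minimal sketch (a CUDA GPU is assumed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

image = pipe("robotic cat with wings").images[0]
image.save("pokemon_sample.png")
```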
BigSalmon/GPT2HardandEasy
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: "en" thumbnail: tags: - Source Separation - Speech Separation - Audio Source Separation - Libri2Mix - SepFormer - Transformer - audio-to-audio - audio-source-separation - speechbrain license: "apache-2.0" datasets: - Libri2Mix metrics: - SI-SNRi - SDRi --- <iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe> <br/><br/> # SepFormer trained on Libri2Mix This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2) model, implemented with SpeechBrain, and pretrained on Libri2Mix dataset. For a better experience we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is 20.6 dB on the test set of Libri2Mix dataset. | Release | Test-Set SI-SNRi | Test-Set SDRi | |:-------------:|:--------------:|:--------------:| | 16-09-22 | 20.6dB | 20.9dB | You can listen to example results obtained on the test set of WSJ0-2/3Mix through [here](https://sourceseparationresearch.com/static/sepformer_example_results/sepformer_results.html). ## Install SpeechBrain First of all, please install SpeechBrain with the following command: ``` pip install speechbrain ``` Please notice that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io). ### Perform source separation on your own audio file ```python from speechbrain.pretrained import SepformerSeparation as separator import torchaudio model = separator.from_hparams(source="speechbrain/sepformer-libri2mix", savedir='pretrained_models/sepformer-libri2mix') # for custom file, change path est_sources = model.separate_file(path='speechbrain/sepformer-wsj02mix/test_mixture.wav') torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000) torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000) ``` The system expects input recordings sampled at 8kHz (single channel). If your signal has a different sample rate, resample it (e.g, using torchaudio or sox) before using the interface. ### Inference on GPU To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method. ### Training The model was trained with SpeechBrain (fc2eabb7). To train it from scratch follows these steps: 1. Clone SpeechBrain: ```bash git clone https://github.com/speechbrain/speechbrain/ ``` 2. Install it: ``` cd speechbrain pip install -r requirements.txt pip install -e . ``` 3. Run Training: ``` cd recipes/Libri2Mix/separation python train.py hparams/sepformer.yaml --data_folder=your_data_folder ``` You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1NPTXw4i9Vmahhr5BSQQa-ZTTm45FwYJA). ### Limitations The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets. 
#### Referencing SpeechBrain ```bibtex @misc{speechbrain, title={{SpeechBrain}: A General-Purpose Speech Toolkit}, author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio}, year={2021}, eprint={2106.04624}, archivePrefix={arXiv}, primaryClass={eess.AS}, note={arXiv:2106.04624} } ``` #### Referencing SepFormer ```bibtex @inproceedings{subakan2021attention, title={Attention is All You Need in Speech Separation}, author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong}, year={2021}, booktitle={ICASSP 2021} } @misc{subakan2022sepformer, author = {Subakan, Cem and Ravanelli, Mirco and Cornell, Samuele and Grondin, Francois and Bronzi, Mirko}, title = {On Using Transformers for Speech-Separation}, year = {2022}, copyright = {arXiv.org perpetual, non-exclusive license} } ``` # **About SpeechBrain** - Website: https://speechbrain.github.io/ - Code: https://github.com/speechbrain/speechbrain/ - HuggingFace: https://huggingface.co/speechbrain/
BigSalmon/InfillFormalLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-09-16T19:24:43Z
--- license: mit --- ### harmless-ai-1 on Stable Diffusion This is the `<bee-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<bee-style> 0](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/(swarm+of+bees),+The+computer+is+the+enemy+of+transhumanity,+detailed,+beautiful+masterpiece,+unreal+engine,+4k-0.024599999999999973.png) ![<bee-style> 1](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/(swarm+of+bees),+The+computer+is+the+enemy+of+transhumanity,+detailed,+beautiful+masterpiece,+unreal+engine,+4k-0.02-3024.png) ![<bee-style> 2](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/beehiveperson.png) ![<bee-style> 3](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/download-5.png) ![<bee-style> 4](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/download-11.png) ![<bee-style> 5](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/abstractbee.png) ![<bee-style> 6](https://huggingface.co/sd-concepts-library/harmless-ai-1/resolve/main/concept_images/abstractbee2.png)
BigSalmon/MrLincoln14
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebook; they are not packaged imports.
model = load_from_hub(repo_id="matemato/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
BigSalmon/T52
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
8
2022-09-16T23:47:28Z
--- license: mit --- ### shvoren-style on Stable Diffusion This is the `<shvoren-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<shvoren-style> 0](https://huggingface.co/sd-concepts-library/shvoren-style/resolve/main/concept_images/0.jpeg) ![<shvoren-style> 1](https://huggingface.co/sd-concepts-library/shvoren-style/resolve/main/concept_images/1.jpeg) ![<shvoren-style> 2](https://huggingface.co/sd-concepts-library/shvoren-style/resolve/main/concept_images/2.jpeg)
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-17T03:43:06Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Abdulmateen/abdul-distillbert-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Abdulmateen/abdul-distillbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8507 - Validation Loss: 2.5825 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8507 | 2.5825 | 0 | ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
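The optimizer described above (AdamWeightDecay with a 1000-step warmup into a polynomial decay) can be reconstructed with the `create_optimizer` helper from `transformers`; `num_train_steps` below is a placeholder:

```python
from transformers import create_optimizer

# num_train_steps is a placeholder -- in practice it is
# (len(train_dataset) // batch_size) * num_epochs.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=1000,
    num_train_steps=1313,
    weight_decay_rate=0.01,
)
```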
BotterHax/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-09-17T07:10:15Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: finetuned_HelsinkiNLP-opus-mt-en-vi_PhoMT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_HelsinkiNLP-opus-mt-en-vi_PhoMT This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-vi](https://huggingface.co/Helsinki-NLP/opus-mt-en-vi) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3069 - Bleu: 42.4251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:------:|:---------------:|:-------:| | 1.4437 | 1.0 | 186125 | 1.3648 | 40.6353 | | 1.3748 | 2.0 | 372250 | 1.3362 | 41.4991 | | 1.3182 | 3.0 | 558375 | 1.3224 | 41.9860 | | 1.2829 | 4.0 | 744500 | 1.3113 | 42.2649 | | 1.2641 | 5.0 | 930625 | 1.3069 | 42.4233 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.10.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
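A minimal inference sketch with the translation pipeline; the repo id is hypothetical, standing in for wherever this fine-tuned checkpoint is published:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
translator = pipeline(
    "translation",
    model="your-username/finetuned_HelsinkiNLP-opus-mt-en-vi_PhoMT",
)

print(translator("Machine translation bridges languages.")[0]["translation_text"])
```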
Branex/gpt-neo-2.7B
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-17T07:16:07Z
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - spearmanr model-index: - name: bert-base-cased-stsb results: - task: name: Text Classification type: text-classification dataset: name: GLUE STSB type: glue args: stsb metrics: - name: Spearmanr type: spearmanr value: 0.8963451800582044 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-stsb This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 0.4322 - Pearson: 0.9007 - Spearmanr: 0.8963 - Combined Score: 0.8985 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 1.6464 | 1.39 | 500 | 0.5662 | 0.8820 | 0.8814 | 0.8817 | | 0.3329 | 2.78 | 1000 | 0.5070 | 0.8913 | 0.8883 | 0.8898 | | 0.173 | 4.17 | 1500 | 0.4465 | 0.8988 | 0.8943 | 0.8966 | | 0.1085 | 5.56 | 2000 | 0.4537 | 0.8958 | 0.8917 | 0.8937 | | 0.0816 | 6.94 | 2500 | 0.4594 | 0.8977 | 0.8933 | 0.8955 | | 0.0621 | 8.33 | 3000 | 0.4450 | 0.8997 | 0.8950 | 0.8974 | | 0.0519 | 9.72 | 3500 | 0.4322 | 0.9007 | 0.8963 | 0.8985 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.7.1 - Datasets 1.18.3 - Tokenizers 0.11.6
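STS-B is a regression task, so the model emits a single similarity logit on a roughly 0-5 scale; a minimal scoring sketch, with a hypothetical repo id:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
name = "your-username/bert-base-cased-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

enc = tokenizer("A man is playing a guitar.", "Someone plays guitar.", return_tensors="pt")
with torch.no_grad():
    score = model(**enc).logits.squeeze().item()  # similarity score, roughly 0-5
print(score)
```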
Brayan/CNN_Brain_Tumor
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: quote-death-faith-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # quote-death-faith-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3069 - Accuracy: 0.87 - F1: 0.8622 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Brendan/cse244b-hw2-roberta
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: train args: plus metrics: - name: Accuracy type: accuracy value: 0.9461290322580646 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2712 - Accuracy: 0.9461 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.2629 | 1.0 | 318 | 1.6048 | 0.7368 | | 1.2437 | 2.0 | 636 | 0.8148 | 0.8565 | | 0.6604 | 3.0 | 954 | 0.4768 | 0.9161 | | 0.4054 | 4.0 | 1272 | 0.3548 | 0.9352 | | 0.2987 | 5.0 | 1590 | 0.3084 | 0.9419 | | 0.2549 | 6.0 | 1908 | 0.2909 | 0.9435 | | 0.232 | 7.0 | 2226 | 0.2804 | 0.9458 | | 0.221 | 8.0 | 2544 | 0.2749 | 0.9458 | | 0.2145 | 9.0 | 2862 | 0.2722 | 0.9468 | | 0.2112 | 10.0 | 3180 | 0.2712 | 0.9461 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.10.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
BrianTin/MTBERT
[ "pytorch", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base960-english-phoneme_v2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base960-english-phoneme_v2 This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4069 - Cer: 0.0900 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.18 | 6.94 | 500 | 0.3118 | 0.0923 | | 0.2622 | 13.88 | 1000 | 0.4387 | 0.1218 | | 0.2145 | 20.83 | 1500 | 0.4441 | 0.1121 | | 0.1429 | 27.77 | 2000 | 0.4001 | 0.1045 | | 0.0927 | 34.72 | 2500 | 0.4692 | 0.1062 | | 0.0598 | 41.66 | 3000 | 0.3960 | 0.0971 | | 0.0356 | 48.61 | 3500 | 0.4069 | 0.0900 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1.post201 - Datasets 2.5.2.dev0 - Tokenizers 0.12.1
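A minimal transcription sketch with the automatic-speech-recognition pipeline; the repo id and audio file are placeholders, and the output should be a phoneme sequence, consistent with the CER metric reported above:

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the actual Hub path of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-base960-english-phoneme_v2",
)

print(asr("speech_sample.wav")["text"])  # placeholder audio file
```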
Brinah/1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: bert-base-cased-mnli results: - task: name: Text Classification type: text-classification dataset: name: GLUE MNLI type: glue args: mnli metrics: - name: Accuracy type: accuracy value: 0.8280105777054516 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-mnli This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4392 - Accuracy: 0.8280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 1.1127 | 0.02 | 500 | 1.0847 | 0.3851 | | 1.0759 | 0.04 | 1000 | 1.0449 | 0.4540 | | 1.006 | 0.06 | 1500 | 0.9296 | 0.5892 | | 0.8848 | 0.08 | 2000 | 0.8357 | 0.6364 | | 0.8063 | 0.1 | 2500 | 0.7549 | 0.6811 | | 0.7527 | 0.12 | 3000 | 0.7219 | 0.6964 | | 0.7259 | 0.14 | 3500 | 0.6928 | 0.7117 | | 0.6822 | 0.16 | 4000 | 0.6547 | 0.7355 | | 0.6826 | 0.18 | 4500 | 0.6424 | 0.7424 | | 0.6625 | 0.2 | 5000 | 0.6370 | 0.7428 | | 0.6676 | 0.22 | 5500 | 0.6116 | 0.7511 | | 0.633 | 0.24 | 6000 | 0.6241 | 0.7526 | | 0.629 | 0.26 | 6500 | 0.5994 | 0.7557 | | 0.6161 | 0.29 | 7000 | 0.6057 | 0.7546 | | 0.6003 | 0.31 | 7500 | 0.5514 | 0.7778 | | 0.5847 | 0.33 | 8000 | 0.5475 | 0.7790 | | 0.5747 | 0.35 | 8500 | 0.5481 | 0.7817 | | 0.5579 | 0.37 | 9000 | 0.5356 | 0.7836 | | 0.5719 | 0.39 | 9500 | 0.5311 | 0.7873 | | 0.5666 | 0.41 | 10000 | 0.5242 | 0.7922 | | 0.5586 | 0.43 | 10500 | 0.5295 | 0.7890 | | 0.5604 | 0.45 | 11000 | 0.5239 | 0.7932 | | 0.5511 | 0.47 | 11500 | 0.5271 | 0.7891 | | 0.5379 | 0.49 | 12000 | 0.5082 | 0.8021 | | 0.537 | 0.51 | 12500 | 0.5098 | 0.7983 | | 0.5583 | 0.53 | 13000 | 0.5046 | 0.8011 | | 0.5368 | 0.55 | 13500 | 0.5286 | 0.7937 | | 0.5445 | 0.57 | 14000 | 0.5116 | 0.8009 | | 0.5228 | 0.59 | 14500 | 0.5473 | 0.7912 | | 0.5378 | 0.61 | 15000 | 0.4972 | 0.8030 | | 0.5281 | 0.63 | 15500 | 0.4951 | 0.8068 | | 0.5304 | 0.65 | 16000 | 0.5083 | 0.7989 | | 0.539 | 0.67 | 16500 | 0.4953 | 0.8052 | | 0.5189 | 0.69 | 17000 | 0.4962 | 0.8064 | | 0.5241 | 0.71 | 17500 | 0.4939 | 0.8101 | | 0.5128 | 0.73 | 18000 | 0.5054 | 0.8073 | | 0.5074 | 0.75 | 18500 | 0.5074 | 0.8015 | | 0.523 | 0.77 | 19000 | 0.4786 | 0.8144 | | 0.5073 | 0.79 | 19500 | 0.4890 | 0.8131 | | 0.4934 | 0.81 | 20000 | 0.4901 | 0.8104 | | 0.5194 | 0.84 | 20500 | 0.4712 | 0.8147 | | 0.5076 | 0.86 | 21000 | 0.4781 | 0.8109 | | 0.5023 | 0.88 | 21500 | 0.4884 | 0.8132 | | 0.5145 | 0.9 | 22000 | 0.4675 | 0.8159 | | 0.5087 | 0.92 | 22500 | 0.4971 | 0.8041 | | 0.499 | 0.94 | 23000 | 0.4767 | 0.8111 | | 0.4964 | 0.96 | 23500 | 0.5074 | 0.8103 | | 0.4992 | 0.98 | 24000 | 0.4786 | 0.8109 | | 0.4936 | 1.0 | 24500 | 0.4812 | 0.8115 | | 0.4122 | 1.02 | 25000 | 0.4860 | 0.8213 
| | 0.4022 | 1.04 | 25500 | 0.4916 | 0.8249 | | 0.4028 | 1.06 | 26000 | 0.4567 | 0.8256 | | 0.4023 | 1.08 | 26500 | 0.5356 | 0.7995 | | 0.3888 | 1.1 | 27000 | 0.5084 | 0.8168 | | 0.4084 | 1.12 | 27500 | 0.4924 | 0.8115 | | 0.4108 | 1.14 | 28000 | 0.4779 | 0.8231 | | 0.4175 | 1.16 | 28500 | 0.4759 | 0.8173 | | 0.405 | 1.18 | 29000 | 0.4848 | 0.8174 | | 0.4115 | 1.2 | 29500 | 0.4742 | 0.8163 | | 0.4082 | 1.22 | 30000 | 0.5293 | 0.8139 | | 0.4087 | 1.24 | 30500 | 0.4740 | 0.8214 | | 0.3931 | 1.26 | 31000 | 0.4909 | 0.8204 | | 0.4067 | 1.28 | 31500 | 0.4602 | 0.8248 | | 0.4063 | 1.3 | 32000 | 0.4568 | 0.8252 | | 0.4006 | 1.32 | 32500 | 0.4851 | 0.8209 | | 0.4099 | 1.34 | 33000 | 0.4543 | 0.8295 | | 0.4135 | 1.36 | 33500 | 0.4682 | 0.8268 | | 0.4077 | 1.39 | 34000 | 0.4709 | 0.8241 | | 0.4004 | 1.41 | 34500 | 0.4937 | 0.8231 | | 0.4134 | 1.43 | 35000 | 0.4714 | 0.8250 | | 0.4105 | 1.45 | 35500 | 0.4666 | 0.8256 | | 0.4056 | 1.47 | 36000 | 0.4786 | 0.8240 | | 0.4036 | 1.49 | 36500 | 0.4652 | 0.8254 | | 0.4043 | 1.51 | 37000 | 0.4618 | 0.8274 | | 0.4048 | 1.53 | 37500 | 0.4713 | 0.8294 | | 0.4036 | 1.55 | 38000 | 0.4734 | 0.8265 | | 0.4219 | 1.57 | 38500 | 0.4432 | 0.8295 | | 0.3965 | 1.59 | 39000 | 0.4812 | 0.8230 | | 0.4074 | 1.61 | 39500 | 0.4736 | 0.8251 | | 0.4028 | 1.63 | 40000 | 0.4583 | 0.8261 | | 0.3933 | 1.65 | 40500 | 0.4725 | 0.8284 | | 0.4082 | 1.67 | 41000 | 0.4524 | 0.8270 | | 0.415 | 1.69 | 41500 | 0.4564 | 0.8289 | | 0.4177 | 1.71 | 42000 | 0.4635 | 0.8265 | | 0.4093 | 1.73 | 42500 | 0.4488 | 0.8335 | | 0.4117 | 1.75 | 43000 | 0.4690 | 0.8291 | | 0.408 | 1.77 | 43500 | 0.4732 | 0.8203 | | 0.4079 | 1.79 | 44000 | 0.4717 | 0.8277 | | 0.4197 | 1.81 | 44500 | 0.4754 | 0.8259 | | 0.4093 | 1.83 | 45000 | 0.4518 | 0.8315 | | 0.4013 | 1.85 | 45500 | 0.4504 | 0.8308 | | 0.3958 | 1.87 | 46000 | 0.4524 | 0.8281 | | 0.4082 | 1.89 | 46500 | 0.4495 | 0.8305 | | 0.4033 | 1.91 | 47000 | 0.4554 | 0.8272 | | 0.4055 | 1.94 | 47500 | 0.4565 | 0.8266 | | 0.398 | 1.96 | 48000 | 0.4699 | 0.8240 | | 0.4044 | 1.98 | 48500 | 0.4459 | 0.8269 | | 0.4023 | 2.0 | 49000 | 0.4623 | 0.8271 | | 0.2913 | 2.02 | 49500 | 0.5247 | 0.8248 | | 0.2915 | 2.04 | 50000 | 0.4998 | 0.8261 | | 0.2675 | 2.06 | 50500 | 0.5703 | 0.8241 | | 0.2708 | 2.08 | 51000 | 0.5629 | 0.8267 | | 0.2802 | 2.1 | 51500 | 0.5333 | 0.8295 | | 0.2847 | 2.12 | 52000 | 0.5118 | 0.8316 | | 0.275 | 2.14 | 52500 | 0.5331 | 0.8279 | | 0.2705 | 2.16 | 53000 | 0.5702 | 0.8186 | | 0.2651 | 2.18 | 53500 | 0.5187 | 0.8301 | | 0.2715 | 2.2 | 54000 | 0.5698 | 0.8230 | | 0.2859 | 2.22 | 54500 | 0.5558 | 0.8287 | | 0.2779 | 2.24 | 55000 | 0.5662 | 0.8303 | | 0.2763 | 2.26 | 55500 | 0.5605 | 0.8246 | | 0.2704 | 2.28 | 56000 | 0.5801 | 0.8310 | | 0.2787 | 2.3 | 56500 | 0.5745 | 0.8224 | | 0.3032 | 2.32 | 57000 | 0.5360 | 0.8252 | | 0.282 | 2.34 | 57500 | 0.4963 | 0.8318 | | 0.2921 | 2.36 | 58000 | 0.5654 | 0.8176 | | 0.2934 | 2.38 | 58500 | 0.5242 | 0.8318 | | 0.2788 | 2.4 | 59000 | 0.5904 | 0.8237 | | 0.2897 | 2.42 | 59500 | 0.5342 | 0.8244 | | 0.2795 | 2.44 | 60000 | 0.5495 | 0.8294 | | 0.3001 | 2.46 | 60500 | 0.5422 | 0.8242 | | 0.2831 | 2.49 | 61000 | 0.5394 | 0.8215 | | 0.2925 | 2.51 | 61500 | 0.5181 | 0.8278 | | 0.2886 | 2.53 | 62000 | 0.5727 | 0.8241 | | 0.2983 | 2.55 | 62500 | 0.4796 | 0.8315 | | 0.2897 | 2.57 | 63000 | 0.5624 | 0.8225 | | 0.3107 | 2.59 | 63500 | 0.5108 | 0.8310 | | 0.2994 | 2.61 | 64000 | 0.5153 | 0.8308 | | 0.3015 | 2.63 | 64500 | 0.5199 | 0.8292 | | 0.2973 | 2.65 | 65000 | 0.5756 | 0.8281 | | 0.2997 | 2.67 | 65500 | 0.5135 | 0.8290 | | 
0.2987 | 2.69 | 66000 | 0.5447 | 0.8272 | | 0.2874 | 2.71 | 66500 | 0.5268 | 0.8281 | | 0.3095 | 2.73 | 67000 | 0.4899 | 0.8288 | | 0.2992 | 2.75 | 67500 | 0.5107 | 0.8293 | | 0.303 | 2.77 | 68000 | 0.5308 | 0.8280 | | 0.3008 | 2.79 | 68500 | 0.5207 | 0.8330 | | 0.3018 | 2.81 | 69000 | 0.5148 | 0.8265 | | 0.3049 | 2.83 | 69500 | 0.5331 | 0.8285 | | 0.3179 | 2.85 | 70000 | 0.4961 | 0.8310 | | 0.3173 | 2.87 | 70500 | 0.5201 | 0.8319 | | 0.2996 | 2.89 | 71000 | 0.5355 | 0.8349 | | 0.2925 | 2.91 | 71500 | 0.5456 | 0.8291 | | 0.3112 | 2.93 | 72000 | 0.4907 | 0.8316 | | 0.306 | 2.95 | 72500 | 0.5189 | 0.8330 | | 0.3043 | 2.97 | 73000 | 0.4903 | 0.8346 | | 0.3133 | 2.99 | 73500 | 0.5277 | 0.8241 | | 0.2258 | 3.01 | 74000 | 0.6482 | 0.8349 | | 0.1931 | 3.04 | 74500 | 0.7001 | 0.8278 | | 0.2004 | 3.06 | 75000 | 0.7123 | 0.8282 | | 0.21 | 3.08 | 75500 | 0.6523 | 0.8230 | | 0.1951 | 3.1 | 76000 | 0.6700 | 0.8328 | | 0.2066 | 3.12 | 76500 | 0.6454 | 0.8245 | | 0.2018 | 3.14 | 77000 | 0.6720 | 0.8329 | | 0.2103 | 3.16 | 77500 | 0.6594 | 0.8272 | | 0.2143 | 3.18 | 78000 | 0.6653 | 0.8287 | | 0.21 | 3.2 | 78500 | 0.6134 | 0.8311 | | 0.208 | 3.22 | 79000 | 0.7034 | 0.8337 | | 0.211 | 3.24 | 79500 | 0.6400 | 0.8284 | | 0.2127 | 3.26 | 80000 | 0.6509 | 0.8342 | | 0.2181 | 3.28 | 80500 | 0.6740 | 0.8310 | | 0.208 | 3.3 | 81000 | 0.6810 | 0.8306 | | 0.215 | 3.32 | 81500 | 0.6483 | 0.8329 | | 0.2112 | 3.34 | 82000 | 0.6621 | 0.8262 | | 0.2241 | 3.36 | 82500 | 0.6127 | 0.8258 | | 0.2128 | 3.38 | 83000 | 0.6406 | 0.8286 | | 0.2375 | 3.4 | 83500 | 0.6295 | 0.8312 | | 0.2147 | 3.42 | 84000 | 0.6305 | 0.8314 | | 0.2342 | 3.44 | 84500 | 0.6310 | 0.8294 | | 0.2286 | 3.46 | 85000 | 0.6614 | 0.8300 | | 0.2108 | 3.48 | 85500 | 0.6604 | 0.8312 | | 0.2288 | 3.5 | 86000 | 0.6229 | 0.8313 | | 0.2249 | 3.52 | 86500 | 0.6229 | 0.8241 | | 0.2191 | 3.54 | 87000 | 0.6363 | 0.8303 | | 0.2229 | 3.57 | 87500 | 0.6613 | 0.8224 | | 0.2252 | 3.59 | 88000 | 0.6487 | 0.8234 | | 0.2248 | 3.61 | 88500 | 0.6279 | 0.8302 | | 0.2227 | 3.63 | 89000 | 0.6609 | 0.8282 | | 0.2175 | 3.65 | 89500 | 0.6393 | 0.8300 | | 0.2126 | 3.67 | 90000 | 0.6208 | 0.8301 | | 0.2222 | 3.69 | 90500 | 0.5796 | 0.8315 | | 0.217 | 3.71 | 91000 | 0.6618 | 0.8327 | | 0.2249 | 3.73 | 91500 | 0.6246 | 0.8326 | | 0.2304 | 3.75 | 92000 | 0.5994 | 0.8328 | | 0.2263 | 3.77 | 92500 | 0.6466 | 0.8280 | | 0.2186 | 3.79 | 93000 | 0.6216 | 0.8265 | | 0.2243 | 3.81 | 93500 | 0.6460 | 0.8220 | | 0.2293 | 3.83 | 94000 | 0.6293 | 0.8242 | | 0.2258 | 3.85 | 94500 | 0.6152 | 0.8303 | | 0.2195 | 3.87 | 95000 | 0.6079 | 0.8275 | | 0.2067 | 3.89 | 95500 | 0.6661 | 0.8235 | | 0.2251 | 3.91 | 96000 | 0.6505 | 0.8268 | | 0.233 | 3.93 | 96500 | 0.6256 | 0.8261 | | 0.2406 | 3.95 | 97000 | 0.6000 | 0.8271 | | 0.2119 | 3.97 | 97500 | 0.6684 | 0.8339 | | 0.2368 | 3.99 | 98000 | 0.6182 | 0.8296 | | 0.1768 | 4.01 | 98500 | 0.7741 | 0.8288 | | 0.1459 | 4.03 | 99000 | 0.7897 | 0.8310 | | 0.1538 | 4.05 | 99500 | 0.8029 | 0.8306 | | 0.1507 | 4.07 | 100000 | 0.8052 | 0.8308 | | 0.1703 | 4.09 | 100500 | 0.8227 | 0.8288 | | 0.1602 | 4.12 | 101000 | 0.8112 | 0.8299 | | 0.1616 | 4.14 | 101500 | 0.7852 | 0.8288 | | 0.1689 | 4.16 | 102000 | 0.7107 | 0.8299 | | 0.1506 | 4.18 | 102500 | 0.8631 | 0.8292 | | 0.1725 | 4.2 | 103000 | 0.7750 | 0.8310 | | 0.1553 | 4.22 | 103500 | 0.8073 | 0.8313 | | 0.169 | 4.24 | 104000 | 0.7881 | 0.8345 | | 0.1712 | 4.26 | 104500 | 0.7733 | 0.8292 | | 0.1711 | 4.28 | 105000 | 0.7989 | 0.8302 | | 0.1599 | 4.3 | 105500 | 0.7935 | 0.8265 | | 0.1798 | 4.32 | 106000 | 0.7818 | 0.8269 | 
| 0.179 | 4.34 | 106500 | 0.7610 | 0.8220 | | 0.1655 | 4.36 | 107000 | 0.7380 | 0.8284 | | 0.1751 | 4.38 | 107500 | 0.7645 | 0.8269 | | 0.1831 | 4.4 | 108000 | 0.7583 | 0.8270 | | 0.1659 | 4.42 | 108500 | 0.7804 | 0.8286 | | 0.1622 | 4.44 | 109000 | 0.8106 | 0.8285 | | 0.1892 | 4.46 | 109500 | 0.7328 | 0.8302 | | 0.179 | 4.48 | 110000 | 0.7301 | 0.8270 | | 0.169 | 4.5 | 110500 | 0.7589 | 0.8309 | | 0.1781 | 4.52 | 111000 | 0.7714 | 0.8307 | | 0.1772 | 4.54 | 111500 | 0.7736 | 0.8262 | | 0.1706 | 4.56 | 112000 | 0.7867 | 0.8305 | | 0.1747 | 4.58 | 112500 | 0.7819 | 0.8249 | | 0.1814 | 4.6 | 113000 | 0.7312 | 0.8279 | | 0.1778 | 4.62 | 113500 | 0.7775 | 0.8262 | | 0.1816 | 4.64 | 114000 | 0.7786 | 0.8323 | | 0.1848 | 4.67 | 114500 | 0.7804 | 0.8238 | | 0.1787 | 4.69 | 115000 | 0.7459 | 0.8303 | | 0.1613 | 4.71 | 115500 | 0.8105 | 0.8287 | | 0.1864 | 4.73 | 116000 | 0.7469 | 0.8287 | | 0.1838 | 4.75 | 116500 | 0.7508 | 0.8294 | | 0.176 | 4.77 | 117000 | 0.7753 | 0.8294 | | 0.1978 | 4.79 | 117500 | 0.7815 | 0.8284 | | 0.1836 | 4.81 | 118000 | 0.7897 | 0.8225 | | 0.1708 | 4.83 | 118500 | 0.8350 | 0.8270 | | 0.1879 | 4.85 | 119000 | 0.7409 | 0.8285 | | 0.1823 | 4.87 | 119500 | 0.7831 | 0.8289 | | 0.19 | 4.89 | 120000 | 0.7433 | 0.8263 | | 0.1845 | 4.91 | 120500 | 0.7195 | 0.8256 | | 0.1829 | 4.93 | 121000 | 0.7550 | 0.8235 | | 0.1729 | 4.95 | 121500 | 0.7562 | 0.8245 | | 0.173 | 4.97 | 122000 | 0.7941 | 0.8276 | | 0.1872 | 4.99 | 122500 | 0.7613 | 0.8304 | | 0.1424 | 5.01 | 123000 | 0.8764 | 0.8258 | | 0.1088 | 5.03 | 123500 | 0.9407 | 0.8303 | | 0.1188 | 5.05 | 124000 | 0.9559 | 0.8290 | | 0.1132 | 5.07 | 124500 | 0.9672 | 0.8213 | | 0.1257 | 5.09 | 125000 | 0.9127 | 0.8291 | | 0.1201 | 5.11 | 125500 | 1.0327 | 0.8226 | | 0.1257 | 5.13 | 126000 | 0.9187 | 0.8272 | | 0.1336 | 5.15 | 126500 | 0.8971 | 0.8291 | | 0.1319 | 5.17 | 127000 | 0.9316 | 0.8256 | | 0.133 | 5.19 | 127500 | 0.9140 | 0.8273 | | 0.1294 | 5.22 | 128000 | 0.8752 | 0.8272 | | 0.1315 | 5.24 | 128500 | 0.9288 | 0.8296 | | 0.1234 | 5.26 | 129000 | 0.9681 | 0.8217 | | 0.1232 | 5.28 | 129500 | 0.9213 | 0.8282 | | 0.1309 | 5.3 | 130000 | 0.9321 | 0.8274 | | 0.147 | 5.32 | 130500 | 0.8256 | 0.8340 | | 0.1295 | 5.34 | 131000 | 0.9193 | 0.8246 | | 0.1239 | 5.36 | 131500 | 0.9145 | 0.8335 | | 0.1454 | 5.38 | 132000 | 0.8601 | 0.8294 | | 0.1345 | 5.4 | 132500 | 0.9294 | 0.8301 | | 0.1446 | 5.42 | 133000 | 0.8903 | 0.8299 | | 0.14 | 5.44 | 133500 | 0.9664 | 0.8188 | | 0.1412 | 5.46 | 134000 | 0.9688 | 0.8225 | | 0.1393 | 5.48 | 134500 | 0.9102 | 0.8256 | | 0.1487 | 5.5 | 135000 | 0.8585 | 0.8321 | | 0.1363 | 5.52 | 135500 | 0.8892 | 0.8273 | | 0.1388 | 5.54 | 136000 | 0.9253 | 0.8263 | | 0.1286 | 5.56 | 136500 | 0.9117 | 0.8272 | | 0.1406 | 5.58 | 137000 | 0.8725 | 0.8257 | | 0.1377 | 5.6 | 137500 | 0.9155 | 0.8294 | | 0.1407 | 5.62 | 138000 | 0.9327 | 0.8228 | | 0.1308 | 5.64 | 138500 | 0.9619 | 0.8266 | | 0.141 | 5.66 | 139000 | 0.9087 | 0.8290 | | 0.1451 | 5.68 | 139500 | 0.9083 | 0.8258 | | 0.1413 | 5.7 | 140000 | 0.9030 | 0.8253 | | 0.1472 | 5.72 | 140500 | 0.8330 | 0.8311 | | 0.1251 | 5.74 | 141000 | 0.9509 | 0.8239 | | 0.1546 | 5.77 | 141500 | 0.8113 | 0.8301 | | 0.1309 | 5.79 | 142000 | 0.9833 | 0.8238 | | 0.1587 | 5.81 | 142500 | 0.8817 | 0.8263 | | 0.1374 | 5.83 | 143000 | 0.8838 | 0.8273 | | 0.1508 | 5.85 | 143500 | 0.8769 | 0.8308 | | 0.1406 | 5.87 | 144000 | 0.8929 | 0.8309 | | 0.1366 | 5.89 | 144500 | 0.9235 | 0.8301 | | 0.1494 | 5.91 | 145000 | 0.8470 | 0.8289 | | 0.1335 | 5.93 | 145500 | 0.9032 | 0.8300 | | 0.1404 | 5.95 | 
146000 | 0.8810 | 0.8306 | | 0.1482 | 5.97 | 146500 | 0.9043 | 0.8278 | | 0.1483 | 5.99 | 147000 | 0.9073 | 0.8283 | | 0.1186 | 6.01 | 147500 | 0.9628 | 0.8271 | | 0.078 | 6.03 | 148000 | 1.0228 | 0.8287 | | 0.0917 | 6.05 | 148500 | 0.9993 | 0.8299 | | 0.0898 | 6.07 | 149000 | 0.9852 | 0.8326 | | 0.0971 | 6.09 | 149500 | 0.9814 | 0.8346 | | 0.0882 | 6.11 | 150000 | 1.0538 | 0.8278 | | 0.0829 | 6.13 | 150500 | 1.0677 | 0.8293 | | 0.1079 | 6.15 | 151000 | 0.9678 | 0.8294 | | 0.0904 | 6.17 | 151500 | 1.0365 | 0.8266 | | 0.1016 | 6.19 | 152000 | 1.0043 | 0.8320 | | 0.098 | 6.21 | 152500 | 0.9861 | 0.8321 | | 0.0959 | 6.23 | 153000 | 0.9589 | 0.8350 | | 0.096 | 6.25 | 153500 | 0.9867 | 0.8328 | | 0.1019 | 6.27 | 154000 | 0.9825 | 0.8338 | | 0.1065 | 6.29 | 154500 | 0.9864 | 0.8317 | | 0.1077 | 6.32 | 155000 | 1.0051 | 0.8289 | | 0.1035 | 6.34 | 155500 | 0.9882 | 0.8334 | | 0.0873 | 6.36 | 156000 | 1.0419 | 0.8278 | | 0.1019 | 6.38 | 156500 | 1.0334 | 0.8258 | | 0.0993 | 6.4 | 157000 | 0.9954 | 0.8330 | | 0.1116 | 6.42 | 157500 | 0.9941 | 0.8301 | | 0.0996 | 6.44 | 158000 | 1.0413 | 0.8262 | | 0.0987 | 6.46 | 158500 | 1.0250 | 0.8273 | | 0.0968 | 6.48 | 159000 | 1.0690 | 0.8308 | | 0.1145 | 6.5 | 159500 | 0.9717 | 0.8303 | | 0.0991 | 6.52 | 160000 | 0.9853 | 0.8314 | | 0.0997 | 6.54 | 160500 | 0.9904 | 0.8289 | | 0.0984 | 6.56 | 161000 | 1.0008 | 0.8334 | | 0.1092 | 6.58 | 161500 | 0.9623 | 0.8309 | | 0.1035 | 6.6 | 162000 | 1.0328 | 0.8244 | | 0.1156 | 6.62 | 162500 | 0.9976 | 0.8264 | | 0.1067 | 6.64 | 163000 | 0.9704 | 0.8322 | | 0.1139 | 6.66 | 163500 | 1.0199 | 0.8295 | | 0.0948 | 6.68 | 164000 | 1.0384 | 0.8324 | | 0.1092 | 6.7 | 164500 | 0.9703 | 0.8302 | | 0.0987 | 6.72 | 165000 | 1.0323 | 0.8268 | | 0.0918 | 6.74 | 165500 | 1.0378 | 0.8333 | | 0.1062 | 6.76 | 166000 | 1.0150 | 0.8318 | | 0.1069 | 6.78 | 166500 | 0.9688 | 0.8300 | | 0.1046 | 6.8 | 167000 | 0.9943 | 0.8246 | | 0.107 | 6.82 | 167500 | 0.9878 | 0.8296 | | 0.0909 | 6.84 | 168000 | 1.0053 | 0.8287 | | 0.1067 | 6.87 | 168500 | 0.9886 | 0.8270 | | 0.1004 | 6.89 | 169000 | 0.9990 | 0.8296 | | 0.0992 | 6.91 | 169500 | 1.0214 | 0.8291 | | 0.1097 | 6.93 | 170000 | 1.0265 | 0.8242 | | 0.1027 | 6.95 | 170500 | 1.0186 | 0.8286 | | 0.1014 | 6.97 | 171000 | 0.9618 | 0.8286 | | 0.0897 | 6.99 | 171500 | 1.0270 | 0.8318 | | 0.087 | 7.01 | 172000 | 1.0727 | 0.8279 | | 0.0598 | 7.03 | 172500 | 1.1137 | 0.8312 | | 0.0641 | 7.05 | 173000 | 1.0740 | 0.8298 | | 0.0614 | 7.07 | 173500 | 1.1526 | 0.8271 | | 0.0589 | 7.09 | 174000 | 1.1481 | 0.8309 | | 0.0624 | 7.11 | 174500 | 1.1475 | 0.8270 | | 0.0679 | 7.13 | 175000 | 1.1185 | 0.8301 | | 0.0618 | 7.15 | 175500 | 1.1509 | 0.8296 | | 0.0765 | 7.17 | 176000 | 1.1362 | 0.8275 | | 0.069 | 7.19 | 176500 | 1.1990 | 0.8298 | | 0.0763 | 7.21 | 177000 | 1.1095 | 0.8292 | | 0.0649 | 7.23 | 177500 | 1.1579 | 0.8266 | | 0.0739 | 7.25 | 178000 | 1.1474 | 0.8299 | | 0.0777 | 7.27 | 178500 | 1.0821 | 0.8333 | | 0.0666 | 7.29 | 179000 | 1.1031 | 0.8311 | | 0.0647 | 7.31 | 179500 | 1.1386 | 0.8292 | | 0.0711 | 7.33 | 180000 | 1.1505 | 0.8271 | | 0.0575 | 7.35 | 180500 | 1.1494 | 0.8263 | | 0.0561 | 7.37 | 181000 | 1.2161 | 0.8270 | | 0.0699 | 7.39 | 181500 | 1.1285 | 0.8310 | | 0.0596 | 7.42 | 182000 | 1.1755 | 0.8291 | | 0.0757 | 7.44 | 182500 | 1.1167 | 0.8330 | | 0.0726 | 7.46 | 183000 | 1.1184 | 0.8298 | | 0.0711 | 7.48 | 183500 | 1.1207 | 0.8310 | | 0.07 | 7.5 | 184000 | 1.1214 | 0.8291 | | 0.0819 | 7.52 | 184500 | 1.0905 | 0.8285 | | 0.0647 | 7.54 | 185000 | 1.0851 | 0.8345 | | 0.0814 | 7.56 | 185500 | 
1.0697 | 0.8337 | | 0.0764 | 7.58 | 186000 | 1.0697 | 0.8322 | | 0.0701 | 7.6 | 186500 | 1.0965 | 0.8279 | | 0.0651 | 7.62 | 187000 | 1.1121 | 0.8321 | | 0.0683 | 7.64 | 187500 | 1.1246 | 0.8321 | | 0.0682 | 7.66 | 188000 | 1.1285 | 0.8290 | | 0.063 | 7.68 | 188500 | 1.1397 | 0.8327 | | 0.0609 | 7.7 | 189000 | 1.1423 | 0.8306 | | 0.0689 | 7.72 | 189500 | 1.1382 | 0.8349 | | 0.0728 | 7.74 | 190000 | 1.1001 | 0.8380 | | 0.0626 | 7.76 | 190500 | 1.1121 | 0.8319 | | 0.0747 | 7.78 | 191000 | 1.0930 | 0.8288 | | 0.0739 | 7.8 | 191500 | 1.0975 | 0.8307 | | 0.0865 | 7.82 | 192000 | 1.0530 | 0.8309 | | 0.0794 | 7.84 | 192500 | 1.0670 | 0.8302 | | 0.0634 | 7.86 | 193000 | 1.0990 | 0.8348 | | 0.0725 | 7.88 | 193500 | 1.1087 | 0.8325 | | 0.0655 | 7.9 | 194000 | 1.0891 | 0.8360 | | 0.0678 | 7.92 | 194500 | 1.1428 | 0.8262 | | 0.0751 | 7.94 | 195000 | 1.1070 | 0.8326 | | 0.0644 | 7.97 | 195500 | 1.1279 | 0.8347 | | 0.0783 | 7.99 | 196000 | 1.0856 | 0.8357 | | 0.0597 | 8.01 | 196500 | 1.1556 | 0.8335 | | 0.0318 | 8.03 | 197000 | 1.2165 | 0.8292 | | 0.0335 | 8.05 | 197500 | 1.2328 | 0.8308 | | 0.0412 | 8.07 | 198000 | 1.2087 | 0.8293 | | 0.0441 | 8.09 | 198500 | 1.2074 | 0.8360 | | 0.048 | 8.11 | 199000 | 1.2072 | 0.8318 | | 0.0402 | 8.13 | 199500 | 1.1964 | 0.8338 | | 0.0553 | 8.15 | 200000 | 1.2450 | 0.8317 | | 0.0391 | 8.17 | 200500 | 1.1994 | 0.8365 | | 0.0428 | 8.19 | 201000 | 1.2593 | 0.8278 | | 0.042 | 8.21 | 201500 | 1.2365 | 0.8280 | | 0.0462 | 8.23 | 202000 | 1.2080 | 0.8294 | | 0.0468 | 8.25 | 202500 | 1.2052 | 0.8309 | | 0.0499 | 8.27 | 203000 | 1.1889 | 0.8314 | | 0.0386 | 8.29 | 203500 | 1.1998 | 0.8344 | | 0.0417 | 8.31 | 204000 | 1.2113 | 0.8306 | | 0.0449 | 8.33 | 204500 | 1.2147 | 0.8308 | | 0.0453 | 8.35 | 205000 | 1.2288 | 0.8298 | | 0.0461 | 8.37 | 205500 | 1.2139 | 0.8298 | | 0.0443 | 8.39 | 206000 | 1.2159 | 0.8305 | | 0.0414 | 8.41 | 206500 | 1.2352 | 0.8314 | | 0.0445 | 8.43 | 207000 | 1.2148 | 0.8317 | | 0.0467 | 8.45 | 207500 | 1.2142 | 0.8317 | | 0.0412 | 8.47 | 208000 | 1.2305 | 0.8326 | | 0.0488 | 8.49 | 208500 | 1.2000 | 0.8307 | | 0.0398 | 8.52 | 209000 | 1.2434 | 0.8308 | | 0.0376 | 8.54 | 209500 | 1.2225 | 0.8347 | | 0.0384 | 8.56 | 210000 | 1.2458 | 0.8322 | | 0.0427 | 8.58 | 210500 | 1.2666 | 0.8299 | | 0.046 | 8.6 | 211000 | 1.2675 | 0.8326 | | 0.0482 | 8.62 | 211500 | 1.2514 | 0.8313 | | 0.0428 | 8.64 | 212000 | 1.2442 | 0.8309 | | 0.0398 | 8.66 | 212500 | 1.2553 | 0.8331 | | 0.0467 | 8.68 | 213000 | 1.2608 | 0.8303 | | 0.0355 | 8.7 | 213500 | 1.2646 | 0.8313 | | 0.039 | 8.72 | 214000 | 1.2498 | 0.8329 | | 0.0395 | 8.74 | 214500 | 1.2579 | 0.8329 | | 0.0405 | 8.76 | 215000 | 1.2702 | 0.8326 | | 0.0401 | 8.78 | 215500 | 1.2618 | 0.8326 | | 0.0483 | 8.8 | 216000 | 1.2525 | 0.8300 | | 0.0421 | 8.82 | 216500 | 1.2417 | 0.8349 | | 0.0379 | 8.84 | 217000 | 1.2829 | 0.8275 | | 0.0377 | 8.86 | 217500 | 1.2609 | 0.8327 | | 0.0437 | 8.88 | 218000 | 1.2576 | 0.8306 | | 0.0473 | 8.9 | 218500 | 1.2613 | 0.8304 | | 0.041 | 8.92 | 219000 | 1.2588 | 0.8312 | | 0.0455 | 8.94 | 219500 | 1.2495 | 0.8303 | | 0.0439 | 8.96 | 220000 | 1.2259 | 0.8328 | | 0.0445 | 8.98 | 220500 | 1.2252 | 0.8303 | | 0.0475 | 9.0 | 221000 | 1.2289 | 0.8304 | | 0.0292 | 9.02 | 221500 | 1.2341 | 0.8332 | | 0.0273 | 9.04 | 222000 | 1.2633 | 0.8332 | | 0.0211 | 9.07 | 222500 | 1.3210 | 0.8291 | | 0.0183 | 9.09 | 223000 | 1.3403 | 0.8299 | | 0.0323 | 9.11 | 223500 | 1.3470 | 0.8290 | | 0.0287 | 9.13 | 224000 | 1.3351 | 0.8318 | | 0.0316 | 9.15 | 224500 | 1.3348 | 0.8301 | | 0.0314 | 9.17 | 225000 | 1.3089 | 
0.8339 | | 0.0227 | 9.19 | 225500 | 1.3239 | 0.8329 | | 0.0322 | 9.21 | 226000 | 1.3147 | 0.8326 | | 0.0266 | 9.23 | 226500 | 1.3301 | 0.8325 | | 0.0296 | 9.25 | 227000 | 1.3318 | 0.8324 | | 0.0267 | 9.27 | 227500 | 1.3228 | 0.8341 | | 0.0258 | 9.29 | 228000 | 1.3154 | 0.8344 | | 0.0275 | 9.31 | 228500 | 1.3212 | 0.8324 | | 0.0242 | 9.33 | 229000 | 1.3314 | 0.8349 | | 0.0193 | 9.35 | 229500 | 1.3317 | 0.8349 | | 0.0241 | 9.37 | 230000 | 1.3180 | 0.8341 | | 0.0255 | 9.39 | 230500 | 1.3172 | 0.8348 | | 0.0193 | 9.41 | 231000 | 1.3233 | 0.8354 | | 0.0235 | 9.43 | 231500 | 1.3447 | 0.8321 | | 0.0241 | 9.45 | 232000 | 1.3474 | 0.8325 | | 0.024 | 9.47 | 232500 | 1.3381 | 0.8333 | | 0.0261 | 9.49 | 233000 | 1.3319 | 0.8333 | | 0.026 | 9.51 | 233500 | 1.3453 | 0.8327 | | 0.0264 | 9.53 | 234000 | 1.3304 | 0.8345 | | 0.0308 | 9.55 | 234500 | 1.3235 | 0.8338 | | 0.0226 | 9.57 | 235000 | 1.3160 | 0.8347 | | 0.0293 | 9.6 | 235500 | 1.3122 | 0.8330 | | 0.0256 | 9.62 | 236000 | 1.3295 | 0.8331 | | 0.0325 | 9.64 | 236500 | 1.3268 | 0.8310 | | 0.0281 | 9.66 | 237000 | 1.3304 | 0.8321 | | 0.0228 | 9.68 | 237500 | 1.3326 | 0.8318 | | 0.03 | 9.7 | 238000 | 1.3234 | 0.8321 | | 0.029 | 9.72 | 238500 | 1.3354 | 0.8324 | | 0.0212 | 9.74 | 239000 | 1.3303 | 0.8336 | | 0.0199 | 9.76 | 239500 | 1.3393 | 0.8330 | | 0.0254 | 9.78 | 240000 | 1.3396 | 0.8327 | | 0.0237 | 9.8 | 240500 | 1.3355 | 0.8336 | | 0.0229 | 9.82 | 241000 | 1.3368 | 0.8342 | | 0.0251 | 9.84 | 241500 | 1.3388 | 0.8329 | | 0.0255 | 9.86 | 242000 | 1.3362 | 0.8337 | | 0.0206 | 9.88 | 242500 | 1.3369 | 0.8341 | | 0.0352 | 9.9 | 243000 | 1.3408 | 0.8330 | | 0.0201 | 9.92 | 243500 | 1.3358 | 0.8330 | | 0.0252 | 9.94 | 244000 | 1.3379 | 0.8332 | | 0.0294 | 9.96 | 244500 | 1.3330 | 0.8337 | | 0.0222 | 9.98 | 245000 | 1.3340 | 0.8336 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.7.1 - Datasets 1.18.3 - Tokenizers 0.11.6
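The log above shows the classic fine-tuning pattern: training loss keeps falling (roughly 0.30 around epoch 3 down to about 0.02 by epoch 10) while validation loss climbs from about 0.5 to 1.33 and accuracy plateaus near 0.83. For a Transformers `Trainer` run like this one, the usual remedy is checkpoint selection or early stopping. A hedged sketch follows; the wrapper function and its `model`/dataset arguments are assumptions, since the card names neither the model nor the dataset:

```python
# Hedged sketch: keep the best (lowest eval loss) checkpoint instead of the last.
# The function arguments are placeholders -- the card does not name them.
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

def build_trainer(model, train_ds, eval_ds):
    """Trainer configured to restore the lowest-validation-loss checkpoint."""
    args = TrainingArguments(
        output_dir="out",
        evaluation_strategy="steps",
        eval_steps=500,                   # matches the card's eval cadence
        save_steps=500,                   # save at every evaluation
        load_best_model_at_end=True,      # roll back to the best checkpoint
        metric_for_best_model="eval_loss",
        greater_is_better=False,
    )
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_ds,
        eval_dataset=eval_ds,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
    )
```

With this configuration the run above would likely have stopped within the first few epochs, where validation loss bottomed out around 0.49.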
BritishLibraryLabs/bl-books-genre
[ "pytorch", "distilbert", "text-classification", "multilingual", "dataset:blbooksgenre", "transformers", "genre", "books", "library", "historic", "glam ", "lam", "license:mit", "has_space" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
76
null
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MiguelCosta/distilbert-finetuned-cisco
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# MiguelCosta/distilbert-finetuned-cisco

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.4181
- Validation Loss: 4.2370
- Epoch: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4181     | 4.2370          | 0     |

### Framework versions

- Transformers 4.22.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
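Reading the optimizer dict above: it is the schedule that `transformers.create_optimizer` emits for TensorFlow, and `decay_steps: -964` together with `warmup_steps: 1000` would imply only about 36 total training steps, since `decay_steps = num_train_steps - num_warmup_steps`. A hedged sketch of rebuilding it; the step count is that inference, not a value stated in the card:

```python
# Sketch: reconstructing the AdamWeightDecay + WarmUp schedule from the card
# via transformers' TF helper. num_train_steps=36 is inferred from
# decay_steps = num_train_steps - num_warmup_steps = -964, not stated directly.
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,            # initial_learning_rate from the card
    num_train_steps=36,      # inferred: 1000 + (-964)
    num_warmup_steps=1000,   # warmup_steps from the card
    weight_decay_rate=0.01,  # weight_decay_rate from the card
)
```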
Broadus20/DialoGPT-small-joshua
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - metrics:
    - type: mean_reward
      value: 246.00 +/- 104.47
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga michael20at -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga michael20at
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 10000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
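Outside the RL Zoo scripts, a checkpoint like this can in principle be loaded with plain Stable-Baselines3 as well. A minimal sketch, where the zip path is an assumption about the RL Zoo `logs/` layout rather than something the card states:

```python
# Illustrative sketch only: loading the DQN checkpoint with plain SB3.
# The zip path is an assumption about where load_from_hub saved the model.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Atari preprocessing + 4-frame stacking, matching the hyperparameters above
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4"), n_stack=4)

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```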
Brona/poc_de
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: finetuned_HelsinkiNLP-opus-mt-vi-en_PhoMT
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# finetuned_HelsinkiNLP-opus-mt-vi-en_PhoMT

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-vi-en](https://huggingface.co/Helsinki-NLP/opus-mt-vi-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1783
- Bleu: 37.7741

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Bleu    |
|:-------------:|:-----:|:------:|:---------------:|:-------:|
| 1.3717        | 1.0   | 186125 | 1.2371          | 35.8549 |
| 1.2926        | 2.0   | 372250 | 1.2113          | 36.7328 |
| 1.2505        | 3.0   | 558375 | 1.1954          | 37.0998 |
| 1.2025        | 4.0   | 744500 | 1.1847          | 37.5538 |
| 1.1853        | 5.0   | 930625 | 1.1783          | 37.7761 |

### Framework versions

- Transformers 4.22.1
- Pytorch 1.10.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
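Inference for a Marian fine-tune like this normally goes through the translation pipeline. A minimal sketch; the card does not state the Hub repo id, so the model string below is a placeholder to be replaced with the actual repo or a local checkpoint directory:

```python
# Sketch only: running the fine-tuned vi->en Marian model via transformers.
# "finetuned_HelsinkiNLP-opus-mt-vi-en_PhoMT" is a placeholder, not a
# confirmed Hub id; substitute the real repo or local path.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="finetuned_HelsinkiNLP-opus-mt-vi-en_PhoMT",
)

print(translator("Mạng neural nhân tạo là gì?", max_length=64)[0]["translation_text"])
```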
BrunoNogueira/DialoGPT-kungfupanda
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
---
license: mit
---

This model generates YouTube titles in the style of [VICE](https://www.youtube.com/c/VICE).

Here's the GitHub repo associated with it:

[![GitHub](https://img.shields.io/badge/-Github-000?style=flat&logo=Github&logoColor=white)](https://github.com/marcderbauer/bloom)
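The card gives only the GitHub link, not a Hub id or base checkpoint, so the following is purely an illustrative sketch of how such a title generator would be sampled, with a placeholder model id:

```python
# Illustrative only: sampling VICE-style titles from a causal LM checkpoint.
# "marcderbauer/vice-headlines" is a placeholder id, not stated in the card.
from transformers import pipeline

generator = pipeline("text-generation", model="marcderbauer/vice-headlines")
for sample in generator("We went inside", max_new_tokens=20,
                        do_sample=True, num_return_sequences=3):
    print(sample["generated_text"])
```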
Brykee/BrykeeBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: mit
---

### m-geo on Stable Diffusion

This is the `<m-geo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<m-geo> 0](https://huggingface.co/sd-concepts-library/m-geo/resolve/main/concept_images/3.jpeg)
![<m-geo> 1](https://huggingface.co/sd-concepts-library/m-geo/resolve/main/concept_images/0.jpeg)
![<m-geo> 2](https://huggingface.co/sd-concepts-library/m-geo/resolve/main/concept_images/1.jpeg)
![<m-geo> 3](https://huggingface.co/sd-concepts-library/m-geo/resolve/main/concept_images/2.jpeg)
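Besides the Conceptualizer notebook, a learned embedding like this can be pulled into a `diffusers` pipeline directly. A minimal sketch, assuming a recent diffusers release with `load_textual_inversion` and the usual `sd-concepts-library` repo layout:

```python
# Sketch: loading the <m-geo> textual-inversion embedding into Stable Diffusion.
# Assumes a diffusers version that provides load_textual_inversion and the
# standard sd-concepts-library layout (learned_embeds.bin + token name).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/m-geo")

image = pipe("a landscape painting in the style of <m-geo>").images[0]
image.save("m_geo_landscape.png")
```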
Brykee/DialoGPT-medium-Morty
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
Access to model sd-concepts-library/Akitsuki is restricted and you are not in the authorized list. Visit https://huggingface.co/sd-concepts-library/Akitsuki to ask for access.
Bryson575x/riceboi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 445.10 +/- 56.96
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
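As a reminder of what "Reinforce" means here, the core Monte-Carlo policy-gradient update from the course looks roughly like the following sketch (illustrative PyTorch, not this card author's exact code):

```python
# Illustrative REINFORCE update for CartPole-v1 (not the card author's code).
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """Monte-Carlo policy gradient: -sum_t log pi(a_t|s_t) * G_t."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # compute discounted returns G_t
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # Normalizing returns acts as a simple baseline and stabilizes training.
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```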
Bubb-les/DisloGPT-medium-HarryPotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4834

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7112        | 0.54  | 500  | 1.4834          |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 2.4.0
- Tokenizers 0.10.3
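A dialogue-summarization checkpoint like this is usually driven through the summarization pipeline. A minimal sketch; the Hub id is an assumption, since the card does not state the repo name:

```python
# Sketch: summarizing a SAMSum-style chat with the fine-tuned Pegasus model.
# "pegasus-samsum" is a placeholder -- point it at the real Hub repo or a
# local checkpoint directory.
from transformers import pipeline

summarizer = pipeline("summarization", model="pegasus-samsum")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```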
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
Access to model akira2001/DialoGPT-medium-harrypotter is restricted and you are not in the authorized list. Visit https://huggingface.co/akira2001/DialoGPT-medium-harrypotter to ask for access.
Buntan/BuntanAI
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - bigscience/xP3 license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu programming_language: - C - C++ - C# - Go - Java - JavaScript - Lua - PHP - Python - Ruby - Rust - Scala - TypeScript pipeline_tag: text-generation inference: false widget: - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?" example_title: "zh-en sentiment" - text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" example_title: "zh-zh sentiment" - text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"." example_title: "vi-en query" - text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." example_title: "fr-fr query" - text: "Explain in a sentence in Telugu what is backpropagation in neural networks." example_title: "te-en qa" - text: "Why is the sky blue?" example_title: "en-en qa" - text: "Explain to me in Traditional Chinese what is the difference between Bitcoin and Ethereum." example_title: "zh-en qa" - text: "Write a code snippet with O(log(n)) computational complexity." example_title: "code-en" - text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):" example_title: "es-en fable" - text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):" example_title: "hi-en fable" - text: "How many sides does a rectangle and heptagon have, when combined? Answer this question with some math. Ein Rechteck hat 4 Seiten. Ein Siebeneck hat 7 Seiten. In Kombination haben sie 4 + 7 = 11 Seiten. كم عدد الأضلاع التي يجمعها المربع والمثلث؟ Répondez à cette question en chinois." 
example_title: "en-de-ar-fr-zh math" model-index: - name: bloomz results: - task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 59.27 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 69.08 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 68.67 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 59.65 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 64.26 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 60.95 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 70.24 - task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 48.6 - task: type: Natural language inference dataset: type: anli name: ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 44.1 - task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 45.5 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 82.14 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 85.56 - task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 60.68 - task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 48.43 - task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.38 - task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.43 - task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 67.47 - task: type: Natural language inference 
dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.24 - task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.37 - task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 60.2 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.02 - task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.09 - task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 43.78 - task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 45.7 - task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.8 - task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 61.0 - task: type: Natural language inference dataset: type: xnli name: XNLI (zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 56.91 - task: type: Program synthesis dataset: type: openai_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 12.06 - type: Pass@10 value: 26.53 - type: Pass@100 value: 48.44 - task: type: Sentence completion dataset: type: story_cloze name: StoryCloze (2016) config: "2016" split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 96.26 - task: type: Sentence completion dataset: type: super_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 91.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 51.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 86.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 74.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 56.0 - task: type: Sentence 
completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 69.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 57.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 87.0 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 90.0 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 92.79 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 94.37 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 86.9 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 88.42 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 92.12 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 52.35 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 81.73 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 79.81 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 81.2 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 93.12 --- ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 7. 
[Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) - **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. </tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <th colspan="12">Original pretrained checkpoints. 
Not recommended.</th> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForCausalLM, AutoTokenizer checkpoint = "bigscience/bloomz" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) 
at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file - **Finetuning steps:** 498 - **Finetuning tokens:** 2.09 billion - **Finetuning layout:** 72x pipeline parallel, 1x tensor parallel, 4x data parallel - **Precision:** bfloat16 ## Hardware - **CPUs:** AMD CPUs with 512GB memory per node - **GPUs:** 288 A100 80GB GPUs with 8 GPUs per node (36 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links - **Communication:** NCCL-communications network with a fully dedicated subnet ## Software - **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) - **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed) - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5) - **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. # Citation ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Buntan/xlm-roberta-base-finetuned-marc-en
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
datasets:
- Muennighoff/P3
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
pipeline_tag: text-generation
widget:
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative?"
  example_title: "zh-en sentiment"
- text: "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?"
  example_title: "zh-zh sentiment"
- text: "Suggest at least five related search terms to \"Mạng neural nhân tạo\"."
  example_title: "vi-en query"
- text: "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»."
  example_title: "fr-fr query"
- text: "Explain in a sentence in Telugu what is backpropagation in neural networks."
  example_title: "te-en qa"
- text: "Why is the sky blue?"
  example_title: "en-en qa"
- text: "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is \"Heroes Come in All Shapes and Sizes\". Story (in Spanish):"
  example_title: "es-en fable"
- text: "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is \"Violence is the last refuge of the incompetent\". Fable (in Hindi):"
  example_title: "hi-en fable"
inference: false
model-index:
- name: bloomz-p3
  results:
  - task:
      type: Coreference resolution
    dataset:
      type: winogrande
      name: Winogrande XL (xl)
      config: xl
      split: validation
      revision: a80f460359d1e9a67c006011c94de42a8759430c
    metrics:
    - type: Accuracy
      value: 57.06
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (en)
      config: en
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 60.65
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (fr)
      config: fr
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 59.04
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (jp)
      config: jp
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 56.0
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (pt)
      config: pt
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 60.46
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (ru)
      config: ru
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 57.14
  - task:
      type: Coreference resolution
    dataset:
      type: Muennighoff/xwinograd
      name: XWinograd (zh)
      config: zh
      split: test
      revision: 9dd5ea5505fad86b7bedad667955577815300cee
    metrics:
    - type: Accuracy
      value: 60.71
  - task:
      type: Natural language inference
    dataset:
      type: anli
      name: ANLI (r1)
      config: r1
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 41.7
  - task:
      type: Natural language inference
    dataset:
      type: anli
      name: ANLI (r2)
      config: r2
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 39.3
  - task:
      type: Natural language inference
    dataset:
      type: anli
      name: ANLI (r3)
      config: r3
      split: validation
      revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094
    metrics:
    - type: Accuracy
      value: 42.83
  - task:
      type: Natural language inference
    dataset:
      type: super_glue
      name: SuperGLUE (cb)
      config: cb
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 85.71
  - task:
      type: Natural language inference
    dataset:
      type: super_glue
      name: SuperGLUE (rte)
      config: rte
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 85.2
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ar)
      config: ar
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 56.71
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (bg)
      config: bg
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 46.63
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (de)
      config: de
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 50.16
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (el)
      config: el
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 43.05
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (en)
      config: en
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 59.72
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (es)
      config: es
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 59.32
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (fr)
      config: fr
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 57.99
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (hi)
      config: hi
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 55.02
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ru)
      config: ru
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 50.12
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (sw)
      config: sw
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 50.04
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (th)
      config: th
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 42.29
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (tr)
      config: tr
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 43.78
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (ur)
      config: ur
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 51.81
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (vi)
      config: vi
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 57.27
  - task:
      type: Natural language inference
    dataset:
      type: xnli
      name: XNLI (zh)
      config: zh
      split: validation
      revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16
    metrics:
    - type: Accuracy
      value: 56.95
  - task:
      type: Program synthesis
    dataset:
      type: openai_humaneval
      name: HumanEval
      config: None
      split: test
      revision: e8dc562f5de170c54b5481011dd9f4fa04845771
    metrics:
    - type: Pass@1
      value: 6.13
    - type: Pass@10
      value: 11.79
    - type: Pass@100
      value: 18.73
  - task:
      type: Sentence completion
    dataset:
      type: story_cloze
      name: StoryCloze (2016)
      config: "2016"
      split: validation
      revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db
    metrics:
    - type: Accuracy
      value: 94.66
  - task:
      type: Sentence completion
    dataset:
      type: super_glue
      name: SuperGLUE (copa)
      config: copa
      split: validation
      revision: 9e12063561e7e6c79099feb6d5a493142584e9e2
    metrics:
    - type: Accuracy
      value: 91.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (et)
      config: et
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 53.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (ht)
      config: ht
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 58.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (id)
      config: id
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 87.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (it)
      config: it
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 74.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (qu)
      config: qu
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 57.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (sw)
      config: sw
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 61.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (ta)
      config: ta
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 69.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (th)
      config: th
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 61.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (tr)
      config: tr
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 56.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (vi)
      config: vi
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 81.0
  - task:
      type: Sentence completion
    dataset:
      type: xcopa
      name: XCOPA (zh)
      config: zh
      split: validation
      revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187
    metrics:
    - type: Accuracy
      value: 83.0
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (ar)
      config: ar
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 92.46
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (es)
      config: es
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 94.44
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (eu)
      config: eu
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 86.7
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (hi)
      config: hi
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 88.35
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (id)
      config: id
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 92.59
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (my)
      config: my
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 52.68
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (ru)
      config: ru
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 79.62
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (sw)
      config: sw
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 77.76
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (te)
      config: te
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 79.88
  - task:
      type: Sentence completion
    dataset:
      type: Muennighoff/xstory_cloze
      name: XStoryCloze (zh)
      config: zh
      split: validation
      revision: 8bb76e594b68147f1a430e86829d07189622b90d
    metrics:
    - type: Accuracy
      value: 92.26
---

![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true)

# Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Citation](#citation)

# Model Summary

> We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find the resulting models capable of crosslingual generalization to unseen tasks & languages.

- **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf)
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
- **Languages:** Refer to [bloom](https://huggingface.co/bigscience/bloom) for pretraining & [xP3](https://huggingface.co/datasets/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages.
- **BLOOMZ & mT0 Model Family:**

<div class="max-w-full overflow-auto">
<table>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English.</th>
</tr>
<tr>
<td>Parameters</td>
<td>300M</td>
<td>580M</td>
<td>1.2B</td>
<td>3.7B</td>
<td>13B</td>
<td>560M</td>
<td>1.1B</td>
<td>1.7B</td>
<td>3B</td>
<td>7.1B</td>
<td>176B</td>
</tr>
<tr>
<td>Finetuned Model</td>
<td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td>
</tr>
<tr>
<th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th>
</tr>
<tr>
<td>Finetuned Model</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td>
</tr>
<tr>
<th colspan="12">Original pretrained checkpoints. Not recommended.</th>
</tr>
<tr>
<td>Pretrained Model</td>
<td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td>
<td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td>
<td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td>
<td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td>
<td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td>
<td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td>
<td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td>
</tr>
</table>
</div>

# Use

## Intended use

We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper:
- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?
- Suggest at least five related search terms to "Mạng neural nhân tạo".
- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):
- Explain in a sentence in Telugu what is backpropagation in neural networks.

**Feel free to share your generations in the Community tab!**

## How to use

### CPU

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-p3"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

</details>

### GPU

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-p3"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto")

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

</details>

### GPU in 8bit

<details>
<summary> Click to expand </summary>

```python
# pip install -q transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-p3"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True)

inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

</details>

<!-- Necessary for whitespace -->
###

# Limitations

**Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops, to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" or "*What is "Je t'aime." in English?*", where it is clear to the model when it should answer. Further, we recommend providing the model with as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*".
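To make the point about terminating the input concrete, here is a minimal sketch that contrasts the two prompt variants from the paragraph above (it reuses the CPU setup; the difference in behaviour is illustrative, and the exact generations are not guaranteed):

```python
# Minimal prompt-sensitivity sketch -- generations are illustrative, not guaranteed.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-p3"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

for prompt in [
    "Translate to English: Je t'aime",   # unterminated: the model may continue the French
    "Translate to English: Je t'aime.",  # clearly terminated: the model should answer
]:
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=20)
    print(repr(prompt), "->", tokenizer.decode(outputs[0]))
```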
# Training

## Model

- **Architecture:** Same as [bloom](https://huggingface.co/bigscience/bloom), also refer to the `config.json` file
- **Finetuning steps:** 498
- **Finetuning tokens:** 2.09 billion
- **Finetuning layout:** 72x pipeline parallel, 1x tensor parallel, 4x data parallel
- **Precision:** bfloat16

## Hardware

- **CPUs:** AMD CPUs with 512GB memory per node
- **GPUs:** 288 A100 80GB GPUs with 8 GPUs per node (36 nodes) using NVLink 4 inter-gpu connects, 4 OmniPath links
- **Communication:** NCCL-communications network with a fully dedicated subnet

## Software

- **Orchestration:** [Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- **Optimizer & parallelism:** [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch) (pytorch-1.11 w/ CUDA-11.5)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)

# Evaluation

We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config.

# Citation

```bibtex
@misc{muennighoff2022crosslingual,
      title={Crosslingual Generalization through Multitask Finetuning},
      author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
      year={2022},
      eprint={2211.01786},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
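The HumanEval entries in the metadata above (Pass@1/10/100) are pass@k scores. Assuming they follow the standard unbiased estimator from the Codex evaluation protocol (Chen et al., 2021) — n generated samples per problem, c of which pass the tests — a minimal sketch of that estimator:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct (Chen et al., 2021)."""
    if n - c < k:
        return 1.0  # every size-k subset of the samples contains a correct one
    # Numerically stable form of 1 - C(n-c, k) / C(n, k)
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example with illustrative numbers (not the actual sample counts used):
print(pass_at_k(n=200, c=13, k=10))
```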
CALM/backup
[ "lean_albert", "transformers" ]
null
{ "architectures": [ "LeanAlbertForPretraining", "LeanAlbertForTokenClassification", "LeanAlbertForSequenceClassification" ], "model_type": "lean_albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-imdb

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4738

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7           | 1.0   | 157  | 2.4988          |
| 2.5821        | 2.0   | 314  | 2.4242          |
| 2.541         | 3.0   | 471  | 2.4371          |

### Framework versions

- Transformers 4.22.1
- Pytorch 1.9.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
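Since the data-preparation sections above are unfilled, here is a hedged sketch of how a comparable masked-language-modeling fine-tune could be launched with the listed hyperparameters. The 128-token truncation, the ~10k-example training subset (inferred from 157 steps/epoch at batch size 64), and the single-device setup are assumptions — the actual run used a TPU:

```python
# Hypothetical reproduction sketch -- chunking and subset size are assumptions,
# not taken from the card.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

raw = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text", "label"])
# 157 steps/epoch * batch size 64 suggests a ~10k-example subset was used.
small_train = tokenized["train"].shuffle(seed=42).select(range(10_000))

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-imdb",
    evaluation_strategy="epoch",        # matches the per-epoch eval in the table
    learning_rate=2e-5,                 # from the card
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=3.0,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=small_train,
                  eval_dataset=tokenized["test"], data_collator=collator,
                  tokenizer=tokenizer)
trainer.train()
```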
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16,451
null
---
language: en
thumbnail: http://www.huggingtweets.com/arrington-jespow-lightcrypto/1663413092521/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1478019214212747264/LZmNClhs_400x400.jpg&#39;)">
</div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1484988558024720385/WAv0tlyD_400x400.jpg&#39;)">
</div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1481313178302754821/eeHGWpUF_400x400.jpg&#39;)">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">light & Jesse Powell & Michael Arrington 🏴‍☠️</div>
<div style="text-align: center; font-size: 14px;">@arrington-jespow-lightcrypto</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from light & Jesse Powell & Michael Arrington 🏴‍☠️.

| Data | light | Jesse Powell | Michael Arrington 🏴‍☠️ |
| --- | --- | --- | --- |
| Tweets downloaded | 3237 | 3237 | 3243 |
| Retweets | 352 | 490 | 892 |
| Short tweets | 392 | 168 | 718 |
| Tweets kept | 2493 | 2579 | 1633 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ozhl36a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @arrington-jespow-lightcrypto's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2vhxitdi) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2vhxitdi/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/arrington-jespow-lightcrypto')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.
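Building on the pipeline call above, generation can also be steered with the standard sampling arguments that `transformers` forwards to `generate`; the settings below are illustrative defaults, not values taken from the training run:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/arrington-jespow-lightcrypto')

# Sampling settings are illustrative, not from the W&B run.
outputs = generator(
    "My dream is",
    max_length=60,
    do_sample=True,
    top_p=0.95,
    temperature=0.9,
    num_return_sequences=3,
)
for o in outputs:
    print(o["generated_text"])
```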
## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
null
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 16.50 +/- 12.63
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
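For context, Reinforce is a Monte-Carlo policy-gradient method: it samples full episodes, computes the discounted return at each step, and nudges the policy toward actions that led to high returns. A minimal sketch of that update follows; the network width, optimizer, and discount factor are illustrative assumptions, not this checkpoint's settings:

```python
# Illustrative REINFORCE sketch -- architecture and hyperparameters are
# assumptions, not the ones used to train this checkpoint.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    def __init__(self, state_size, action_size, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        return F.softmax(self.fc2(F.relu(self.fc1(x))), dim=-1)

    def act(self, state):
        probs = self.forward(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
    """One Monte-Carlo policy-gradient step over a finished episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):   # discounted return at each timestep
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```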