| Column | Type | Range |
|---------------|---------------|-------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
null
null
{}
ghomasHudson/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ghomasHudson/style_change
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ghomasHudson/style_change_detection
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Bangla-GPT2

### A GPT-2 Model for the Bengali Language

* Dataset: mc4 Bengali
* Training time: ~40 hours
* Written in: JAX

If you use this model, please cite:

```
@misc{bangla-gpt2,
  author = {Ritobrata Ghosh},
  year = {2021},
  title = {Bangla GPT-2},
  publisher = {Hugging Face}
}
```
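A minimal generation sketch, assuming the standard `transformers` text-generation pipeline (the prompt is the widget example from this card; `max_new_tokens` is an illustrative choice):

```python
from transformers import pipeline

# load the model from the Hub and sample a short continuation
generator = pipeline("text-generation", model="ritog/bangla-gpt2")
print(generator("আজ একটি সুন্দর দিন এবং আমি", max_new_tokens=40)[0]["generated_text"])
```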
{"language": "bn", "tags": ["text-generation"], "widget": [{"text": "\u0986\u099c \u098f\u0995\u099f\u09bf \u09b8\u09c1\u09a8\u09cd\u09a6\u09b0 \u09a6\u09bf\u09a8 \u098f\u09ac\u0982 \u0986\u09ae\u09bf"}]}
ritog/bangla-gpt2
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "bn", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
ritog/bn-poets
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Robi Kobi

### Created by [Ritobrata Ghosh](https://ghosh-r.github.io)

A model that writes Bengali poems in the style of Nobel Laureate poet Rabindranath Tagore. This model is fine-tuned on 1,400+ poems written by Rabindranath Tagore. It leverages the [Bangla GPT-2](https://huggingface.co/ghosh-r/bangla-gpt2) pretrained model, trained on the mc4-Bengali dataset.
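A minimal usage sketch with the `transformers` pipeline (the prompt is the widget example from this card; the sampling settings are illustrative):

```python
from transformers import pipeline

# load the fine-tuned poet model and sample a continuation
poet = pipeline("text-generation", model="ritog/robi-kobi")
print(poet("তোমাকে দেখেছি আমার হৃদয় মাঝে", max_new_tokens=60, do_sample=True)[0]["generated_text"])
```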
{"language": "bn", "tags": ["text-generation"], "widget": [{"text": "\u09a4\u09cb\u09ae\u09be\u0995\u09c7 \u09a6\u09c7\u0996\u09c7\u099b\u09bf \u0986\u09ae\u09be\u09b0 \u09b9\u09c3\u09a6\u09df \u09ae\u09be\u099d\u09c7"}]}
ritog/robi-kobi
null
[ "transformers", "pytorch", "gpt2", "text-generation", "bn", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
giacobbm/bert-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
giacomomiolo/biobert_reupload
null
[ "transformers", "pytorch", "tf", "jax", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/bluebert_reupload
null
[ "transformers", "pytorch", "tf", "jax", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/electramed_base_scivocab_1M
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/electramed_base_scivocab_500k
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/electramed_base_scivocab_750
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/electramed_base_scivocab_970k
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/electramed_small
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/electramed_small_scivocab
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
giacomomiolo/scibert_reupload
null
[ "transformers", "pytorch", "tf", "jax", "bert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
gias/prac_
null
[ "transformers", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
You can test this model online with the [**Space for Romanian Speech Recognition**](https://huggingface.co/spaces/gigant/romanian-speech-recognition).

The model ranked **TOP-1** on Romanian Speech Recognition during Hugging Face's Robust Speech Challenge:
* [**The 🤗 Speech Bench**](https://huggingface.co/spaces/huggingface/hf-speech-bench)
* [**Speech Challenge Leaderboard**](https://huggingface.co/spaces/speech-recognition-community-v2/FinalLeaderboard)

# Romanian Wav2Vec2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) dataset, with extra training data from the [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset.

Without the 5-gram language model optimization, it achieves the following results on the evaluation set (Common Voice 8.0, Romanian subset, test split):
- Loss: 0.1553
- Wer: 0.1174
- Cer: 0.0294

## Model description

The architecture is based on [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) with a speech recognition CTC head and an added 5-gram language model (using [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and [kenlm](https://github.com/kpu/kenlm)) trained on the [Romanian Corpora Parliament](https://huggingface.co/datasets/gigant/ro_corpora_parliament_processed) dataset. Those libraries are needed for the language model-boosted decoder to work.

## Intended uses & limitations

The model is made for speech recognition in Romanian from audio clips sampled at **16kHz**. The predicted text is lowercased and does not contain any punctuation.

## How to use

Make sure you have installed the correct dependencies for the language model-boosted version to work. You can just run this command to install the `kenlm` and `pyctcdecode` libraries:

```bash
pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode
```

With the `transformers` framework you can load the model with the following code:

```python
from transformers import AutoProcessor, AutoModelForCTC

processor = AutoProcessor.from_pretrained("gigant/romanian-wav2vec2")
model = AutoModelForCTC.from_pretrained("gigant/romanian-wav2vec2")
```

Or, if you want to test the model, you can load the automatic speech recognition pipeline from `transformers` with:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="gigant/romanian-wav2vec2")
```

## Example use with the `datasets` library

First, you need to load your data. We will use the [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1) dataset in this example.

```python
from datasets import load_dataset

dataset = load_dataset("gigant/romanian_speech_synthesis_0_8_1")
```

You can listen to the samples with the `IPython.display` library:

```python
from IPython.display import Audio

i = 0
sample = dataset["train"][i]
Audio(sample["audio"]["array"], rate = sample["audio"]["sampling_rate"])
```

The model is trained to work with audio sampled at 16kHz, so if the sampling rate of the audio in the dataset is different, we will have to resample it. In the example, the audio is sampled at 48kHz. We can see this by checking `dataset["train"][0]["audio"]["sampling_rate"]`. The following code resamples the audio using the `torchaudio` library:

```python
import torchaudio
import torch

i = 0
audio = sample["audio"]["array"]
rate = sample["audio"]["sampling_rate"]
resampler = torchaudio.transforms.Resample(rate, 16_000)
audio_16 = resampler(torch.Tensor(audio)).numpy()
```

To listen to the resampled sample:

```python
Audio(audio_16, rate=16000)
```

Now you can get the model prediction by running:

```python
predicted_text = asr(audio_16)
ground_truth = dataset["train"][i]["sentence"]

print(f"Predicted text : {predicted_text}")
print(f"Ground truth : {ground_truth}")
```

## Training and evaluation data

Training data:
- [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0): train + validation + other splits
- [Romanian Speech Synthesis](https://huggingface.co/datasets/gigant/romanian_speech_synthesis_0_8_1): train + test splits

Evaluation data:
- [Common Voice 8.0 - Romanian subset](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0): test split

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.9272 | 0.78 | 500 | 0.7603 | 0.7734 | 0.2355 |
| 0.6157 | 1.55 | 1000 | 0.4003 | 0.4866 | 0.1247 |
| 0.4452 | 2.33 | 1500 | 0.2960 | 0.3689 | 0.0910 |
| 0.3631 | 3.11 | 2000 | 0.2580 | 0.3205 | 0.0796 |
| 0.3153 | 3.88 | 2500 | 0.2465 | 0.2977 | 0.0747 |
| 0.2795 | 4.66 | 3000 | 0.2274 | 0.2789 | 0.0694 |
| 0.2615 | 5.43 | 3500 | 0.2277 | 0.2685 | 0.0675 |
| 0.2389 | 6.21 | 4000 | 0.2135 | 0.2518 | 0.0627 |
| 0.2229 | 6.99 | 4500 | 0.2054 | 0.2449 | 0.0614 |
| 0.2067 | 7.76 | 5000 | 0.2096 | 0.2378 | 0.0597 |
| 0.1977 | 8.54 | 5500 | 0.2042 | 0.2387 | 0.0600 |
| 0.1896 | 9.32 | 6000 | 0.2110 | 0.2383 | 0.0595 |
| 0.1801 | 10.09 | 6500 | 0.1909 | 0.2165 | 0.0548 |
| 0.174 | 10.87 | 7000 | 0.1883 | 0.2206 | 0.0559 |
| 0.1685 | 11.65 | 7500 | 0.1848 | 0.2097 | 0.0528 |
| 0.1591 | 12.42 | 8000 | 0.1851 | 0.2039 | 0.0514 |
| 0.1537 | 13.2 | 8500 | 0.1881 | 0.2065 | 0.0518 |
| 0.1504 | 13.97 | 9000 | 0.1840 | 0.1972 | 0.0499 |
| 0.145 | 14.75 | 9500 | 0.1845 | 0.2029 | 0.0517 |
| 0.1417 | 15.53 | 10000 | 0.1884 | 0.2003 | 0.0507 |
| 0.1364 | 16.3 | 10500 | 0.2010 | 0.2037 | 0.0517 |
| 0.1331 | 17.08 | 11000 | 0.1838 | 0.1923 | 0.0483 |
| 0.129 | 17.86 | 11500 | 0.1818 | 0.1922 | 0.0489 |
| 0.1198 | 18.63 | 12000 | 0.1760 | 0.1861 | 0.0465 |
| 0.1203 | 19.41 | 12500 | 0.1686 | 0.1839 | 0.0465 |
| 0.1225 | 20.19 | 13000 | 0.1828 | 0.1920 | 0.0479 |
| 0.1145 | 20.96 | 13500 | 0.1673 | 0.1784 | 0.0446 |
| 0.1053 | 21.74 | 14000 | 0.1802 | 0.1810 | 0.0456 |
| 0.1071 | 22.51 | 14500 | 0.1769 | 0.1775 | 0.0444 |
| 0.1053 | 23.29 | 15000 | 0.1920 | 0.1783 | 0.0457 |
| 0.1024 | 24.07 | 15500 | 0.1904 | 0.1775 | 0.0446 |
| 0.0987 | 24.84 | 16000 | 0.1793 | 0.1762 | 0.0446 |
| 0.0949 | 25.62 | 16500 | 0.1801 | 0.1766 | 0.0443 |
| 0.0942 | 26.4 | 17000 | 0.1731 | 0.1659 | 0.0423 |
| 0.0906 | 27.17 | 17500 | 0.1776 | 0.1698 | 0.0424 |
| 0.0861 | 27.95 | 18000 | 0.1716 | 0.1600 | 0.0406 |
| 0.0851 | 28.73 | 18500 | 0.1662 | 0.1630 | 0.0410 |
| 0.0844 | 29.5 | 19000 | 0.1671 | 0.1572 | 0.0393 |
| 0.0792 | 30.28 | 19500 | 0.1768 | 0.1599 | 0.0407 |
| 0.0798 | 31.06 | 20000 | 0.1732 | 0.1558 | 0.0394 |
| 0.0779 | 31.83 | 20500 | 0.1694 | 0.1544 | 0.0388 |
| 0.0718 | 32.61 | 21000 | 0.1709 | 0.1578 | 0.0399 |
| 0.0732 | 33.38 | 21500 | 0.1697 | 0.1523 | 0.0391 |
| 0.0708 | 34.16 | 22000 | 0.1616 | 0.1474 | 0.0375 |
| 0.0678 | 34.94 | 22500 | 0.1698 | 0.1474 | 0.0375 |
| 0.0642 | 35.71 | 23000 | 0.1681 | 0.1459 | 0.0369 |
| 0.0661 | 36.49 | 23500 | 0.1612 | 0.1411 | 0.0357 |
| 0.0629 | 37.27 | 24000 | 0.1662 | 0.1414 | 0.0355 |
| 0.0587 | 38.04 | 24500 | 0.1659 | 0.1408 | 0.0351 |
| 0.0581 | 38.82 | 25000 | 0.1612 | 0.1382 | 0.0352 |
| 0.0556 | 39.6 | 25500 | 0.1647 | 0.1376 | 0.0345 |
| 0.0543 | 40.37 | 26000 | 0.1658 | 0.1335 | 0.0337 |
| 0.052 | 41.15 | 26500 | 0.1716 | 0.1369 | 0.0343 |
| 0.0513 | 41.92 | 27000 | 0.1600 | 0.1317 | 0.0330 |
| 0.0491 | 42.7 | 27500 | 0.1671 | 0.1311 | 0.0328 |
| 0.0463 | 43.48 | 28000 | 0.1613 | 0.1289 | 0.0324 |
| 0.0468 | 44.25 | 28500 | 0.1599 | 0.1260 | 0.0315 |
| 0.0435 | 45.03 | 29000 | 0.1556 | 0.1232 | 0.0308 |
| 0.043 | 45.81 | 29500 | 0.1588 | 0.1240 | 0.0309 |
| 0.0421 | 46.58 | 30000 | 0.1567 | 0.1217 | 0.0308 |
| 0.04 | 47.36 | 30500 | 0.1533 | 0.1198 | 0.0302 |
| 0.0389 | 48.14 | 31000 | 0.1582 | 0.1185 | 0.0297 |
| 0.0387 | 48.91 | 31500 | 0.1576 | 0.1187 | 0.0297 |
| 0.0376 | 49.69 | 32000 | 0.1560 | 0.1182 | 0.0295 |

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
- pyctcdecode 0.3.0
- kenlm
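As a quick sanity check of the reported error rates, one can score a single prediction with the `jiwer` package (an assumption here: `jiwer` is installed separately, e.g. `pip install jiwer`; `asr`, `audio_16`, `dataset`, and `i` come from the snippets above, and the ASR pipeline returns a dict with a `"text"` key):

```python
import jiwer

# compare the model's prediction with the reference transcription for one sample
prediction = asr(audio_16)["text"]
reference = dataset["train"][i]["sentence"].lower()  # the model outputs lowercase text
print(f"WER on this sample: {jiwer.wer(reference, prediction):.3f}")
```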
{"language": ["ro"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0", "gigant/romanian_speech_synthesis_0_8_1"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-ro-300m_01", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event", "type": "speech-recognition-community-v2/dev_data", "args": "ro"}, "metrics": [{"type": "wer", "value": 46.99, "name": "Dev WER (without LM)"}, {"type": "cer", "value": 16.04, "name": "Dev CER (without LM)"}, {"type": "wer", "value": 38.63, "name": "Dev WER (with LM)"}, {"type": "cer", "value": 14.52, "name": "Dev CER (with LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "mozilla-foundation/common_voice_8_0", "args": "ro"}, "metrics": [{"type": "wer", "value": 11.73, "name": "Test WER (without LM)"}, {"type": "cer", "value": 2.93, "name": "Test CER (without LM)"}, {"type": "wer", "value": 7.31, "name": "Test WER (with LM)"}, {"type": "cer", "value": 2.17, "name": "Test CER (with LM)"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ro"}, "metrics": [{"type": "wer", "value": 43.23, "name": "Test WER"}]}]}]}
gigant/romanian-wav2vec2
null
[ "transformers", "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "ro", "dataset:mozilla-foundation/common_voice_8_0", "dataset:gigant/romanian_speech_synthesis_0_8_1", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# StackOBERTflow-comments-small

StackOBERTflow is a RoBERTa model trained on StackOverflow comments. A Byte-level BPE tokenizer with dropout was used (using the `tokenizers` package).

The model is *small*, i.e. it has only 6 layers and the maximum sequence length was restricted to 256 tokens. The model was trained for 6 epochs on several GBs of comments from the StackOverflow corpus.

## Quick start: masked language modeling prediction

```python
from transformers import pipeline
from pprint import pprint

COMMENT = "You really should not do it this way, I would use <mask> instead."

fill_mask = pipeline(
    "fill-mask",
    model="giganticode/StackOBERTflow-comments-small-v1",
    tokenizer="giganticode/StackOBERTflow-comments-small-v1"
)

pprint(fill_mask(COMMENT))
# [{'score': 0.019997311756014824,
#   'sequence': '<s> You really should not do it this way, I would use jQuery instead.</s>',
#   'token': 1738},
#  {'score': 0.01693696901202202,
#   'sequence': '<s> You really should not do it this way, I would use arrays instead.</s>',
#   'token': 2844},
#  {'score': 0.013411642983555794,
#   'sequence': '<s> You really should not do it this way, I would use CSS instead.</s>',
#   'token': 2254},
#  {'score': 0.013224546797573566,
#   'sequence': '<s> You really should not do it this way, I would use it instead.</s>',
#   'token': 300},
#  {'score': 0.011984303593635559,
#   'sequence': '<s> You really should not do it this way, I would use classes instead.</s>',
#   'token': 1779}]
```
{}
giganticode/StackOBERTflow-comments-small-v1
null
[ "transformers", "pytorch", "jax", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
giganticode/bert-base-StackOverflow-comments_1M
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
giganticode/bert-base-StackOverflow-comments_2M
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
giganticode/bert-base-ar_miner
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
giganticode/bert-base-code_comments
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
giganticode/bert-large-StackOverflow-comments_1M
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
giganticode/roberta-base-ar_miner
null
[ "transformers", "pytorch", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
giladlandau/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
gilf/english-yelp-sentiment
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
## About

The *french-camembert-postag-model* is a part-of-speech tagging model for French that was trained on the *free-french-treebank* dataset available on [github](https://github.com/nicolashernandez/free-french-treebank). The base tokenizer and model used for training is *'camembert-base'*.

## Supported Tags

It uses the following tags:

| Tag | Category | Extra Info |
|----------|:------------------------------:|------------:|
| ADJ | adjectif | |
| ADJWH | adjectif | |
| ADV | adverbe | |
| ADVWH | adverbe | |
| CC | conjonction de coordination | |
| CLO | pronom | obj |
| CLR | pronom | refl |
| CLS | pronom | suj |
| CS | conjonction de subordination | |
| DET | déterminant | |
| DETWH | déterminant | |
| ET | mot étranger | |
| I | interjection | |
| NC | nom commun | |
| NPP | nom propre | |
| P | préposition | |
| P+D | préposition + déterminant | |
| PONCT | signe de ponctuation | |
| PREF | préfixe | |
| PRO | autres pronoms | |
| PROREL | autres pronoms | rel |
| PROWH | autres pronoms | int |
| U | ? | |
| V | verbe | |
| VIMP | verbe imperatif | |
| VINF | verbe infinitif | |
| VPP | participe passé | |
| VPR | participe présent | |
| VS | subjonctif | |

More information on the tags can be found here: http://alpage.inria.fr/statgram/frdep/Publications/crabbecandi-taln2008-final.pdf

## Usage

The usage of this model follows the common transformers patterns. Here is a short example of its usage:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("gilf/french-camembert-postag-model")
model = AutoModelForTokenClassification.from_pretrained("gilf/french-camembert-postag-model")

from transformers import pipeline

nlp_token_class = pipeline('ner', model=model, tokenizer=tokenizer, grouped_entities=True)
nlp_token_class('Face à un choc inédit, les mesures mises en place par le gouvernement ont permis une protection forte et efficace des ménages')
```

The lines above would display something like this on a Jupyter notebook:

```
[{'entity_group': 'NC', 'score': 0.5760144591331482, 'word': '<s>'},
 {'entity_group': 'U', 'score': 0.9946700930595398, 'word': 'Face'},
 {'entity_group': 'P', 'score': 0.999615490436554, 'word': 'à'},
 {'entity_group': 'DET', 'score': 0.9995906352996826, 'word': 'un'},
 {'entity_group': 'NC', 'score': 0.9995531439781189, 'word': 'choc'},
 {'entity_group': 'ADJ', 'score': 0.999183714389801, 'word': 'inédit'},
 {'entity_group': 'P', 'score': 0.3710663616657257, 'word': ','},
 {'entity_group': 'DET', 'score': 0.9995903968811035, 'word': 'les'},
 {'entity_group': 'NC', 'score': 0.9995649456977844, 'word': 'mesures'},
 {'entity_group': 'VPP', 'score': 0.9988670349121094, 'word': 'mises'},
 {'entity_group': 'P', 'score': 0.9996246099472046, 'word': 'en'},
 {'entity_group': 'NC', 'score': 0.9995329976081848, 'word': 'place'},
 {'entity_group': 'P', 'score': 0.9996233582496643, 'word': 'par'},
 {'entity_group': 'DET', 'score': 0.9995935559272766, 'word': 'le'},
 {'entity_group': 'NC', 'score': 0.9995369911193848, 'word': 'gouvernement'},
 {'entity_group': 'V', 'score': 0.9993771314620972, 'word': 'ont'},
 {'entity_group': 'VPP', 'score': 0.9991101026535034, 'word': 'permis'},
 {'entity_group': 'DET', 'score': 0.9995885491371155, 'word': 'une'},
 {'entity_group': 'NC', 'score': 0.9995636343955994, 'word': 'protection'},
 {'entity_group': 'ADJ', 'score': 0.9991781711578369, 'word': 'forte'},
 {'entity_group': 'CC', 'score': 0.9991298317909241, 'word': 'et'},
 {'entity_group': 'ADJ', 'score': 0.9992275238037109, 'word': 'efficace'},
 {'entity_group': 'P+D', 'score': 0.9993300437927246, 'word': 'des'},
 {'entity_group': 'NC', 'score': 0.8353511393070221, 'word': 'ménages</s>'}]
```
{"language": "fr", "widget": [{"text": "Face \u00e0 un choc in\u00e9dit, les mesures mises en place par le gouvernement ont permis une protection forte et efficace des m\u00e9nages"}]}
gilf/french-camembert-postag-model
null
[ "transformers", "pytorch", "tf", "safetensors", "camembert", "token-classification", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-J 6B

## Model Description

GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

<figure>

| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28&ast; |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400&dagger; (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

<figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.

## Training data

GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).

## Training procedure

This model was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.

## Intended Use and Limitations

GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.

### How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```

### Limitations and Biases

The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.

GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.

As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
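Building on the loading snippet above, a minimal generation sketch (the prompt and sampling parameters are illustrative choices, not the authors' recommended settings):

```python
# encode a prompt, sample a continuation, and decode it back to text
inputs = tokenizer("The Pile is a large-scale curated dataset", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```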
## Evaluation results

<figure>

| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | &check; | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada&ddagger; | &cross; | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | &check; | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B&ddagger; | &check; | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B&ast; | &cross; | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B&ddagger; | &check; | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B&ast;&ddagger; | &cross; | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage&ddagger; | &cross; | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B&ast; | &cross; | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B&ast;&ddagger; | &cross; | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B&dagger; | &check; | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B&ddagger;** | **&check;** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B&ast;&ddagger; | &cross; | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie&ddagger; | &cross; | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B&ast;&ddagger; | &cross; | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B&ast;&ddagger; | &cross; | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci&ddagger; | &cross; | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |

<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>&ast;</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a> <a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>) Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>

## Citation and Related Information

### BibTeX entry

To cite this model:

```bibtex
@misc{gpt-j,
  author = {Wang, Ben and Komatsuzaki, Aran},
  title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

To cite the codebase that trained this model:

```bibtex
@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.

## Acknowledgements

This project would not have been possible without compute generously provided by Google through the [TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.

Thanks to everyone who helped out in one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["The Pile"]}
gilparmentier/pokemon_gptj_model
null
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "en", "arxiv:2104.09864", "arxiv:2101.00027", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
giuliopaci/bert-base-italian-xxl-cased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
giuliopaci/electra-base-italian-xxl-cased-discriminator-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
giurfff/giu
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Jake Peralta DialoGPT model
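The card gives no usage snippet; a minimal chat-loop sketch following the standard DialoGPT usage pattern from `transformers` (the three-turn loop and generation settings are illustrative choices, not from this card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gizmo-dev/DialoGPT-small-jake")
model = AutoModelForCausalLM.from_pretrained("gizmo-dev/DialoGPT-small-jake")

chat_history_ids = None
for step in range(3):
    # encode the user input, appending the end-of-sequence token
    new_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    # append the new user input to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_ids], dim=-1) if step > 0 else new_ids
    # generate a response, capping the total conversation length
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Jake:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```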
{"tags": ["conversational"]}
gizmo-dev/DialoGPT-small-jake
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glafira/diodpepeep
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glancebe/modelscan
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# cse_resnet50

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

```python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()

# Variants (d) proposed in "Bag of Tricks for Image Classification
# with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()

# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

```python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
{}
glasses/cse_resnet50
null
[ "transformers", "pytorch", "arxiv:1512.03385", "arxiv:1812.01187", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# deit_base_patch16_224

Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf)

An attention-based distillation is proposed where a new token, the `dist` token, is added to the model.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true)

```python
DeiT.deit_tiny_patch16_224()
DeiT.deit_small_patch16_224()
DeiT.deit_base_patch16_224()
DeiT.deit_base_patch16_384()
```
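A minimal forward-pass sketch using the constructors above (the import path from the glasses library is an assumption, as the card does not show it; the 1000-class default head follows the library's other cards):

```python
import torch
from glasses.models import DeiT  # import path is an assumption; the card does not show it

model = DeiT.deit_base_patch16_224()
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # one 224x224 RGB image
print(logits.shape)  # expected: torch.Size([1, 1000]) with the default 1000-class head
```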
{}
glasses/deit_base_patch16_224
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# deit_base_patch16_384

Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf)

An attention-based distillation is proposed where a new token, the `dist` token, is added to the model.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true)

```python
DeiT.deit_tiny_patch16_224()
DeiT.deit_small_patch16_224()
DeiT.deit_base_patch16_224()
DeiT.deit_base_patch16_384()
```
{}
glasses/deit_base_patch16_384
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# deit_small_patch16_224

Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf)

An attention-based distillation is proposed where a new token, the `dist` token, is added to the model.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true)

```python
DeiT.deit_tiny_patch16_224()
DeiT.deit_small_patch16_224()
DeiT.deit_base_patch16_224()
DeiT.deit_base_patch16_384()
```
{}
glasses/deit_small_patch16_224
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# deit_tiny_patch16_224

Implementation of DeiT proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/pdf/2010.11929.pdf)

An attention-based distillation is proposed where a new token, the `dist` token, is added to the model.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/DeiT.png?raw=true)

```python
DeiT.deit_tiny_patch16_224()
DeiT.deit_small_patch16_224()
DeiT.deit_base_patch16_224()
DeiT.deit_base_patch16_384()
```
{}
glasses/deit_tiny_patch16_224
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/densenet121
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# densenet161

Implementation of DenseNet proposed in [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)

Create a default model:

```python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```

Examples:

```python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000)
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
{}
glasses/densenet161
null
[ "transformers", "pytorch", "arxiv:1608.06993", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# densenet169

Implementation of DenseNet proposed in [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)

Create a default model:

```python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```

Examples:

```python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000)
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
{}
glasses/densenet169
null
[ "transformers", "pytorch", "arxiv:1608.06993", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# densenet201

Implementation of DenseNet proposed in [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)

Create a default model:

```python
DenseNet.densenet121()
DenseNet.densenet161()
DenseNet.densenet169()
DenseNet.densenet201()
```

Examples:

```python
# change activation
DenseNet.densenet121(activation = nn.SELU)
# change number of classes (default is 1000)
DenseNet.densenet121(n_classes=100)
# pass a different block
DenseNet.densenet121(block=...)
# change the initial convolution
model = DenseNet.densenet121()
model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = DenseNet.densenet121()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7]), torch.Size([1, 1024, 7, 7])]
```
{}
glasses/densenet201
null
[ "transformers", "pytorch", "arxiv:1608.06993", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# ResNet

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

```python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()

# Variants (d) proposed in "Bag of Tricks for Image Classification
# with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()

# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

```python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
{}
glasses/dummy
null
[ "transformers", "pytorch", "arxiv:1512.03385", "arxiv:1812.01187", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/eca_resnet101d
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# eca_resnet26t

Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)

```python
ResNet.resnet18()
ResNet.resnet26()
ResNet.resnet34()
ResNet.resnet50()
ResNet.resnet101()
ResNet.resnet152()
ResNet.resnet200()

# Variants (d) proposed in "Bag of Tricks for Image Classification
# with Convolutional Neural Networks" (https://arxiv.org/pdf/1812.01187.pdf)
ResNet.resnet26d()
ResNet.resnet34d()
ResNet.resnet50d()

# You can construct your own one by changing `stem` and `block`
resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD))
```

Examples:

```python
# change activation
ResNet.resnet18(activation = nn.SELU)
# change number of classes (default is 1000)
ResNet.resnet18(n_classes=100)
# pass a different block
ResNet.resnet18(block=SENetBasicBlock)
# change the stem
model = ResNet.resnet18(stem=ResNetStemC)
# change shortcut
model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = ResNet.resnet18()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])]
```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/eca_resnet26t
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/eca_resnet50d
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/eca_resnet50t
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# efficientnet_b0

Implementation of EfficientNet proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNet.png?raw=true)

The basic architecture is similar to MobileNetV2 and was found using [Progressive Neural Architecture Search](https://arxiv.org/abs/1905.11946). The following table shows the basic architecture (EfficientNet-B0):

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetModelsTable.jpeg?raw=true)

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetScaling.jpg?raw=true)

```python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```

Examples:

```python
# change activation
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000)
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
{}
glasses/efficientnet_b0
null
[ "transformers", "pytorch", "arxiv:1905.11946", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/efficientnet_b1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# efficientnet_b2

Implementation of EfficientNet proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNet.png?raw=true)

The basic architecture is similar to MobileNetV2 and was found using [Progressive Neural Architecture Search](https://arxiv.org/abs/1905.11946). The following table shows the basic architecture (EfficientNet-B0):

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetModelsTable.jpeg?raw=true)

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetScaling.jpg?raw=true)

```python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```

Examples:

```python
# change activation
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000)
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
{}
glasses/efficientnet_b2
null
[ "transformers", "pytorch", "arxiv:1905.11946", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# efficientnet_b3

Implementation of EfficientNet proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNet.png?raw=true)

The basic architecture is similar to MobileNetV2 and was found using [Progressive Neural Architecture Search](https://arxiv.org/abs/1905.11946). The following table shows the basic architecture (EfficientNet-B0):

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetModelsTable.jpeg?raw=true)

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetScaling.jpg?raw=true)

```python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```

Examples:

```python
# change activation
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000)
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
{}
glasses/efficientnet_b3
null
[ "transformers", "pytorch", "arxiv:1905.11946", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# efficientnet_b6

Implementation of EfficientNet proposed in [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNet.png?raw=true)

The basic architecture is similar to MobileNetV2 and was found using [Progressive Neural Architecture Search](https://arxiv.org/abs/1905.11946). The following table shows the basic architecture (EfficientNet-B0):

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetModelsTable.jpeg?raw=true)

Then, the architecture is scaled up from `efficientnet_b0` to `efficientnet_b7` using compound scaling.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/EfficientNetScaling.jpg?raw=true)

```python
EfficientNet.efficientnet_b0()
EfficientNet.efficientnet_b1()
EfficientNet.efficientnet_b2()
EfficientNet.efficientnet_b3()
EfficientNet.efficientnet_b4()
EfficientNet.efficientnet_b5()
EfficientNet.efficientnet_b6()
EfficientNet.efficientnet_b7()
EfficientNet.efficientnet_b8()
EfficientNet.efficientnet_l2()
```

Examples:

```python
# change activation
EfficientNet.efficientnet_b0(activation = nn.SELU)
# change number of classes (default is 1000)
EfficientNet.efficientnet_b0(n_classes=100)
# pass a different block
EfficientNet.efficientnet_b0(block=...)
# store each feature
x = torch.rand((1, 3, 224, 224))
model = EfficientNet.efficientnet_b0()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 24, 56, 56]), torch.Size([1, 40, 28, 28]), torch.Size([1, 80, 14, 14])]
```
{}
glasses/efficientnet_b6
null
[ "transformers", "pytorch", "arxiv:1905.11946", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/efficientnet_lite0
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnetx_002

Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)

The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!)

For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true)

The paper is really well written and very interesting; I highly recommend reading it.

```python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```

You can easily customize your model.

Examples:

```python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000)
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
{}
glasses/regnetx_002
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/regnetx_004
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnetx_006

Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)

The main idea is to start with a high-dimensional search space and iteratively reduce it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x faster!)

For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for all stages $i$. The following table shows all the restrictions applied from one search space to the next one.

![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true)

The paper is really well written and very interesting; I highly recommend reading it.

```python
RegNet.regnetx_002()
RegNet.regnetx_004()
RegNet.regnetx_006()
RegNet.regnetx_008()
RegNet.regnetx_016()
RegNet.regnetx_040()
RegNet.regnetx_064()
RegNet.regnetx_080()
RegNet.regnetx_120()
RegNet.regnetx_160()
RegNet.regnetx_320()
# Y variants (with SE)
RegNet.regnety_002()
# ...
RegNet.regnety_320()
```

You can easily customize your model.

Examples:

```python
# change activation
RegNet.regnetx_004(activation = nn.SELU)
# change number of classes (default is 1000)
RegNet.regnetx_004(n_classes=100)
# pass a different block
RegNet.regnetx_004(block=RegNetYBotteneckBlock)
# change the stem
model = RegNet.regnetx_004(stem=ResNetStemC)
# change shortcut
model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD))
# store each feature
x = torch.rand((1, 3, 224, 224))
# get features
model = RegNet.regnetx_004()
# first call .features, this will activate the forward hooks and tell the model you'd like to get the features
model.encoder.features
model(torch.randn((1,3,224,224)))
# get the features from the encoder
features = model.encoder.features
print([x.shape for x in features])
# [torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])]
```
{}
glasses/regnetx_006
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/regnetx_008
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnetx_016 Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x!). For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for every stage $i$. The following table shows all the restrictions applied from one search space to the next one. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true) The paper is really well written and very interesting; I highly recommend reading it. ``` python RegNet.regnetx_002() RegNet.regnetx_004() RegNet.regnetx_006() RegNet.regnetx_008() RegNet.regnetx_016() RegNet.regnetx_040() RegNet.regnetx_064() RegNet.regnetx_080() RegNet.regnetx_120() RegNet.regnetx_160() RegNet.regnetx_320() # Y variants (with SE) RegNet.regnety_002() # ... RegNet.regnety_320() ``` You can easily customize your model. Examples: ``` python # change activation RegNet.regnetx_004(activation = nn.SELU) # change number of classes (default is 1000) RegNet.regnetx_004(n_classes=100) # pass a different block RegNet.regnetx_004(block=RegNetYBotteneckBlock) # change the stem model = RegNet.regnetx_004(stem=ResNetStemC) # change shortcut model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = RegNet.regnetx_004() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])] ```
{}
glasses/regnetx_016
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/regnetx_032
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/regnetx_040
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/regnetx_064
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnety_002 Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x!). For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for every stage $i$. The following table shows all the restrictions applied from one search space to the next one. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true) The paper is really well written and very interesting; I highly recommend reading it. ``` python RegNet.regnetx_002() RegNet.regnetx_004() RegNet.regnetx_006() RegNet.regnetx_008() RegNet.regnetx_016() RegNet.regnetx_040() RegNet.regnetx_064() RegNet.regnetx_080() RegNet.regnetx_120() RegNet.regnetx_160() RegNet.regnetx_320() # Y variants (with SE) RegNet.regnety_002() # ... RegNet.regnety_320() ``` You can easily customize your model. Examples: ``` python # change activation RegNet.regnetx_004(activation = nn.SELU) # change number of classes (default is 1000) RegNet.regnetx_004(n_classes=100) # pass a different block RegNet.regnetx_004(block=RegNetYBotteneckBlock) # change the stem model = RegNet.regnetx_004(stem=ResNetStemC) # change shortcut model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = RegNet.regnetx_004() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])] ```
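As a quick way to see what the Y variant adds, the sketch below compares parameter counts between the X and Y models at the same budget (the Y models include Squeeze-and-Excitation blocks). The `glasses.models` import path is an assumption.

``` python
# X vs Y at the same compute budget (sketch; glasses.models is assumed).
from glasses.models import RegNet

def n_params(m):
    """Total number of parameters, in millions."""
    return sum(p.numel() for p in m.parameters()) / 1e6

print(f"regnetx_002: {n_params(RegNet.regnetx_002()):.2f}M parameters")
print(f"regnety_002: {n_params(RegNet.regnety_002()):.2f}M parameters")  # slightly heavier: SE blocks
```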
{}
glasses/regnety_002
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnety_004 Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x!). For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for every stage $i$. The following table shows all the restrictions applied from one search space to the next one. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true) The paper is really well written and very interesting; I highly recommend reading it. ``` python RegNet.regnetx_002() RegNet.regnetx_004() RegNet.regnetx_006() RegNet.regnetx_008() RegNet.regnetx_016() RegNet.regnetx_040() RegNet.regnetx_064() RegNet.regnetx_080() RegNet.regnetx_120() RegNet.regnetx_160() RegNet.regnetx_320() # Y variants (with SE) RegNet.regnety_002() # ... RegNet.regnety_320() ``` You can easily customize your model. Examples: ``` python # change activation RegNet.regnetx_004(activation = nn.SELU) # change number of classes (default is 1000) RegNet.regnetx_004(n_classes=100) # pass a different block RegNet.regnetx_004(block=RegNetYBotteneckBlock) # change the stem model = RegNet.regnetx_004(stem=ResNetStemC) # change shortcut model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = RegNet.regnetx_004() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])] ```
{}
glasses/regnety_004
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnety_006 Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x!). For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for every stage $i$. The following table shows all the restrictions applied from one search space to the next one. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true) The paper is really well written and very interesting; I highly recommend reading it. ``` python RegNet.regnetx_002() RegNet.regnetx_004() RegNet.regnetx_006() RegNet.regnetx_008() RegNet.regnetx_016() RegNet.regnetx_040() RegNet.regnetx_064() RegNet.regnetx_080() RegNet.regnetx_120() RegNet.regnetx_160() RegNet.regnetx_320() # Y variants (with SE) RegNet.regnety_002() # ... RegNet.regnety_320() ``` You can easily customize your model. Examples: ``` python # change activation RegNet.regnetx_004(activation = nn.SELU) # change number of classes (default is 1000) RegNet.regnetx_004(n_classes=100) # pass a different block RegNet.regnetx_004(block=RegNetYBotteneckBlock) # change the stem model = RegNet.regnetx_004(stem=ResNetStemC) # change shortcut model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = RegNet.regnetx_004() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])] ```
{}
glasses/regnety_006
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# regnety_008 Implementation of RegNet proposed in [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) The main idea is to start with a high-dimensional search space and iteratively shrink it by empirically applying constraints based on the best-performing models sampled from the current search space. The resulting models are light, accurate, and faster than EfficientNets (up to 5x!). For example, to go from $AnyNet_A$ to $AnyNet_B$ they fixed the bottleneck ratio $b_i$ for every stage $i$. The following table shows all the restrictions applied from one search space to the next one. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/RegNetDesignSpaceTable.png?raw=true) The paper is really well written and very interesting; I highly recommend reading it. ``` python RegNet.regnetx_002() RegNet.regnetx_004() RegNet.regnetx_006() RegNet.regnetx_008() RegNet.regnetx_016() RegNet.regnetx_040() RegNet.regnetx_064() RegNet.regnetx_080() RegNet.regnetx_120() RegNet.regnetx_160() RegNet.regnetx_320() # Y variants (with SE) RegNet.regnety_002() # ... RegNet.regnety_320() ``` You can easily customize your model. Examples: ``` python # change activation RegNet.regnetx_004(activation = nn.SELU) # change number of classes (default is 1000) RegNet.regnetx_004(n_classes=100) # pass a different block RegNet.regnetx_004(block=RegNetYBotteneckBlock) # change the stem model = RegNet.regnetx_004(stem=ResNetStemC) # change shortcut model = RegNet.regnetx_004(block=partial(RegNetYBotteneckBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = RegNet.regnetx_004() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 32, 112, 112]), torch.Size([1, 32, 56, 56]), torch.Size([1, 64, 28, 28]), torch.Size([1, 160, 14, 14])] ```
{}
glasses/regnety_008
null
[ "transformers", "pytorch", "arxiv:2003.13678", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/regnety_016
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/regnety_032
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/regnety_040
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/regnety_064
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
glasses/resnet101
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet152 Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
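For completeness, a minimal inference sketch using the constructors above. The `glasses.models` import path is an assumption, and the random tensor stands in for a properly resized and normalized image.

``` python
# Minimal inference sketch (import path assumed; the input is a stand-in for
# a preprocessed 224x224 RGB image).
import torch
from glasses.models import ResNet

model = ResNet.resnet152().eval()
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)                 # shape: [1, 1000]
probs = logits.softmax(dim=-1)
print(probs.argmax(dim=-1).item())    # predicted class index
```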
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet152
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet18 Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet18
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet26 Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet26
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet26d Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet26d
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet34 Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet34
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet34d Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet34d
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet50 Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet50
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
transformers
# resnet50d Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/pdf/1812.01187.pdf) ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() # You can construct your own by changing `stem` and `block` resnet101d = ResNet.resnet101(stem=ResNetStemC, block=partial(ResNetBottleneckBlock, shortcut=ResNetShorcutD)) ``` Examples: ``` python # change activation ResNet.resnet18(activation = nn.SELU) # change number of classes (default is 1000) ResNet.resnet18(n_classes=100) # pass a different block ResNet.resnet18(block=SENetBasicBlock) # change the stem model = ResNet.resnet18(stem=ResNetStemC) # change shortcut model = ResNet.resnet18(block=partial(ResNetBasicBlock, shortcut=ResNetShorcutD)) # store each feature x = torch.rand((1, 3, 224, 224)) # get features model = ResNet.resnet18() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
glasses/resnet50d
null
[ "transformers", "pytorch", "image-classification", "dataset:imagenet", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# resnext101_32x8d Implementation of ResNeXt proposed in [\"Aggregated Residual Transformations for Deep Neural Networks\"](https://arxiv.org/pdf/1611.05431.pdf) Create a default model: ``` python ResNetXt.resnext50_32x4d() ResNetXt.resnext101_32x8d() # create a resnetxt18_32x4d ResNetXt.resnet18(block=ResNetXtBottleNeckBlock, groups=32, base_width=4) ``` Examples: ``` python # change activation ResNetXt.resnext50_32x4d(activation = nn.SELU) # change number of classes (default is 1000) ResNetXt.resnext50_32x4d(n_classes=100) # pass a different block ResNetXt.resnext50_32x4d(block=SENetBasicBlock) # change the initial convolution model = ResNetXt.resnext50_32x4d() model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3) # store each feature x = torch.rand((1, 3, 224, 224)) model = ResNetXt.resnext50_32x4d() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
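To get a feel for how `groups` and `base_width` scale the model, the sketch below compares parameter counts of the two presets. The `glasses.models` import path is an assumption.

``` python
# Rough capacity check for the two presets (sketch; glasses.models assumed).
from glasses.models import ResNetXt

def n_params(m):
    return sum(p.numel() for p in m.parameters()) / 1e6

print(f"resnext50_32x4d:  {n_params(ResNetXt.resnext50_32x4d()):.1f}M parameters")
print(f"resnext101_32x8d: {n_params(ResNetXt.resnext101_32x8d()):.1f}M parameters")
```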
{}
glasses/resnext101_32x8d
null
[ "transformers", "pytorch", "arxiv:1611.05431", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# resnext50_32x4d Implementation of ResNeXt proposed in [\"Aggregated Residual Transformations for Deep Neural Networks\"](https://arxiv.org/pdf/1611.05431.pdf) Create a default model: ``` python ResNetXt.resnext50_32x4d() ResNetXt.resnext101_32x8d() # create a resnetxt18_32x4d ResNetXt.resnet18(block=ResNetXtBottleNeckBlock, groups=32, base_width=4) ``` Examples: ``` python # change activation ResNetXt.resnext50_32x4d(activation = nn.SELU) # change number of classes (default is 1000) ResNetXt.resnext50_32x4d(n_classes=100) # pass a different block ResNetXt.resnext50_32x4d(block=SENetBasicBlock) # change the initial convolution model = ResNetXt.resnext50_32x4d() model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3) # store each feature x = torch.rand((1, 3, 224, 224)) model = ResNetXt.resnext50_32x4d() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 64, 112, 112]), torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14])] ```
{}
glasses/resnext50_32x4d
null
[ "transformers", "pytorch", "arxiv:1611.05431", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/se_resnet50
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vgg11 Implementation of VGG proposed in [Very Deep Convolutional Networks For Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf) ``` python VGG.vgg11() VGG.vgg13() VGG.vgg16() VGG.vgg19() VGG.vgg11_bn() VGG.vgg13_bn() VGG.vgg16_bn() VGG.vgg19_bn() ``` Please be aware that the `bn` models use BatchNorm, but they are quite old and at the time it was not widely known that the bias is superfluous in a convolution followed by a batch norm. Examples: ``` python # change activation VGG.vgg11(activation = nn.SELU) # change number of classes (default is 1000) VGG.vgg11(n_classes=100) # pass a different block from nn.models.classification.senet import SENetBasicBlock VGG.vgg11(block=SENetBasicBlock) # store the features tensor after every block ```
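The remark about the superfluous bias can be verified directly: in training mode BatchNorm subtracts the per-channel batch mean, so a constant per-channel bias added by the preceding convolution cancels out. A self-contained check in plain PyTorch (no glasses needed):

``` python
# Why a conv bias before BatchNorm is superfluous (self-contained check).
import torch
import torch.nn as nn

torch.manual_seed(0)
conv_b = nn.Conv2d(3, 8, 3, bias=True)
conv_nb = nn.Conv2d(3, 8, 3, bias=False)
conv_nb.weight.data.copy_(conv_b.weight.data)  # same weights, no bias

bn = nn.BatchNorm2d(8).train()  # training mode: normalizes with batch statistics
x = torch.randn(4, 3, 32, 32)

y_with_bias = bn(conv_b(x))
y_without = bn(conv_nb(x))
print(torch.allclose(y_with_bias, y_without, atol=1e-5))  # True: the bias cancels
```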
{}
glasses/vgg11
null
[ "transformers", "pytorch", "arxiv:1409.1556", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vgg11_bn Implementation of VGG proposed in [Very Deep Convolutional Networks For Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf) ``` python VGG.vgg11() VGG.vgg13() VGG.vgg16() VGG.vgg19() VGG.vgg11_bn() VGG.vgg13_bn() VGG.vgg16_bn() VGG.vgg19_bn() ``` Please be aware that the `bn` models use BatchNorm, but they are quite old and at the time it was not widely known that the bias is superfluous in a convolution followed by a batch norm. Examples: ``` python # change activation VGG.vgg11(activation = nn.SELU) # change number of classes (default is 1000) VGG.vgg11(n_classes=100) # pass a different block from nn.models.classification.senet import SENetBasicBlock VGG.vgg11(block=SENetBasicBlock) # store the features tensor after every block ```
{}
glasses/vgg11_bn
null
[ "transformers", "pytorch", "arxiv:1409.1556", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/vgg13
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vgg13_bn Implementation of VGG proposed in [Very Deep Convolutional Networks For Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf) ``` python VGG.vgg11() VGG.vgg13() VGG.vgg16() VGG.vgg19() VGG.vgg11_bn() VGG.vgg13_bn() VGG.vgg16_bn() VGG.vgg19_bn() ``` Please be aware that the `bn` models use BatchNorm, but they are quite old and at the time it was not widely known that the bias is superfluous in a convolution followed by a batch norm. Examples: ``` python # change activation VGG.vgg11(activation = nn.SELU) # change number of classes (default is 1000) VGG.vgg11(n_classes=100) # pass a different block from nn.models.classification.senet import SENetBasicBlock VGG.vgg11(block=SENetBasicBlock) # store the features tensor after every block ```
{}
glasses/vgg13_bn
null
[ "transformers", "pytorch", "arxiv:1409.1556", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/vgg16
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/vgg16_bn
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/vgg19
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vgg19_bn Implementation of VGG proposed in [Very Deep Convolutional Networks For Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf) ``` python VGG.vgg11() VGG.vgg13() VGG.vgg16() VGG.vgg19() VGG.vgg11_bn() VGG.vgg13_bn() VGG.vgg16_bn() VGG.vgg19_bn() ``` Please be aware that the `bn` models use BatchNorm, but they are quite old and at the time it was not widely known that the bias is superfluous in a convolution followed by a batch norm. Examples: ``` python # change activation VGG.vgg11(activation = nn.SELU) # change number of classes (default is 1000) VGG.vgg11(n_classes=100) # pass a different block from nn.models.classification.senet import SENetBasicBlock VGG.vgg11(block=SENetBasicBlock) # store the features tensor after every block ```
{}
glasses/vgg19_bn
null
[ "transformers", "pytorch", "arxiv:1409.1556", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vit_base_patch16_224 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
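The `197` in the printed feature shapes follows directly from the patching arithmetic, as this short check shows:

``` python
# Where torch.Size([1, 197, 768]) comes from: (224 // 16) ** 2 patch tokens
# plus one [CLS] token.
img_size, patch_size = 224, 16
n_tokens = (img_size // patch_size) ** 2 + 1
print(n_tokens)  # 197
```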
{}
glasses/vit_base_patch16_224
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vit_base_patch16_384 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
{}
glasses/vit_base_patch16_384
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/vit_base_patch32_384
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vit_huge_patch16_224 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
{}
glasses/vit_huge_patch16_224
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vit_huge_patch32_384 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
{}
glasses/vit_huge_patch32_384
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vit_large_patch16_224 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
{}
glasses/vit_large_patch16_224
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# vit_large_patch16_384 Implementation of Vision Transformer (ViT) proposed in [An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale](https://arxiv.org/pdf/2010.11929.pdf) The following image from the authors shows the architecture. ![image](https://github.com/FrancescoSaverioZuppichini/glasses/blob/develop/docs/_static/images/ViT.png?raw=true) ``` python ViT.vit_small_patch16_224() ViT.vit_base_patch16_224() ViT.vit_base_patch16_384() ViT.vit_base_patch32_384() ViT.vit_huge_patch16_224() ViT.vit_huge_patch32_384() ViT.vit_large_patch16_224() ViT.vit_large_patch16_384() ViT.vit_large_patch32_384() ``` Examples: ``` python # change activation ViT.vit_base_patch16_224(activation = nn.SELU) # change number of classes (default is 1000) ViT.vit_base_patch16_224(n_classes=100) # pass a different block, default is TransformerEncoderBlock ViT.vit_base_patch16_224(block=MyCoolTransformerBlock) # get features model = ViT.vit_base_patch16_224() # first call .features; this activates the forward hooks and tells the model you'd like to get the features model.encoder.features model(torch.randn((1,3,224,224))) # get the features from the encoder features = model.encoder.features print([x.shape for x in features]) #[torch.Size([1, 197, 768]), torch.Size([1, 197, 768]), ...] # to change the tokens, subclass ViTTokens class MyTokens(ViTTokens): def __init__(self, emb_size: int): super().__init__(emb_size) self.my_new_token = nn.Parameter(torch.randn(1, 1, emb_size)) ViT(tokens=MyTokens) ```
{}
glasses/vit_large_patch16_384
null
[ "transformers", "pytorch", "arxiv:2010.11929", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# wide_resnet101_2 Implementation of Wide ResNet proposed in [\"Wide Residual Networks\"](https://arxiv.org/pdf/1605.07146.pdf) Create a default model: ``` python WideResNet.wide_resnet50_2() WideResNet.wide_resnet101_2() # create a wide_resnet18_4 WideResNet.resnet18(block=WideResNetBottleNeckBlock, width_factor=4) ``` Examples: ``` python # change activation WideResNet.wide_resnet50_2(activation = nn.SELU) # change number of classes (default is 1000) WideResNet.wide_resnet50_2(n_classes=100) # pass a different block WideResNet.wide_resnet50_2(block=SENetBasicBlock) # change the initial convolution model = WideResNet.wide_resnet50_2() model.encoder.gate.conv1 = nn.Conv2d(3, 64, kernel_size=3) # store each feature x = torch.rand((1, 3, 224, 224)) model = WideResNet.wide_resnet50_2() features = [] x = model.encoder.gate(x) for block in model.encoder.layers: x = block(x) features.append(x) print([x.shape for x in features]) # [torch.Size([1, 64, 56, 56]), torch.Size([1, 128, 28, 28]), torch.Size([1, 256, 14, 14]), torch.Size([1, 512, 7, 7])] ```
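A quick sketch of what the `width_factor` buys in capacity (the `glasses.models` import path is an assumption):

``` python
# Parameter cost of widening (sketch; glasses.models assumed).
from glasses.models import WideResNet

def n_params(m):
    return sum(p.numel() for p in m.parameters()) / 1e6

print(f"wide_resnet50_2:  {n_params(WideResNet.wide_resnet50_2()):.1f}M parameters")
print(f"wide_resnet101_2: {n_params(WideResNet.wide_resnet101_2()):.1f}M parameters")
```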
{}
glasses/wide_resnet101_2
null
[ "transformers", "pytorch", "arxiv:1605.07146", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
glasses/wide_resnet50_2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00